“All celebrities and sports on top”: Prototyping automation for and with editors

Authors:
Sverre Norberg-Schulz Hagen
Schibsted, Oslo, Norway, sverre.norberg-schulz.hagen@schibsted.com
Guri B. Verne
University of Oslo, Oslo, Norway, guribv@ifi.uio.no
Tone Bratteteig
University of Oslo, Oslo, Norway, tone@ifi.uio.no
ABSTRACT
Designing for interacting with data-driven approaches is a new
challenge that PD will have to address. This paper presents a case
of prototyping for automation of editors’ manual curating of the
online front-page of a large newspaper. The editors make decisions
about the presentation and placement of article teasers on the front-
page. A new data-driven tool, which automates curating the front-
page based on quantitative rankings, is about to be introduced. We
have developed a prototype to discuss with the editors how they
want support for carrying out their judgment-based decisions for a
front-page with a good mix of news topics. We present concepts for
discussing how manual tasks that interact with data-driven
automation can be designed to be meaningful for people in their
work.
Author Keywords
Participatory design, automation of work, Machine Learning, data-
driven approach, prototyping, user experience.
CCS Concepts
• Human-centered computing → Interaction design → Participatory design; Empirical studies in interaction design; • Mathematics of computing → Probability and statistics; Probabilistic inference problems; Probabilistic reasoning algorithms.
1 Introduction
The emergence of artificial intelligence and machine learning
has made the discussion about automation of work relevant again.
Today, white-collar work can also be automated by data-driven approaches in IT. Data about users and customers are
collected and used to tailor information and services. Many people
worry about the surveillance society and how to address and
counteract the unwanted effects of data-driven systems. In this
context, it is interesting to discuss the “nuts and bolts” of data-
driven approaches: what is considered as data, what does the data
represent, and how is the representation (i.e., the data) used in
automating a service or process? A quote from Zuboff's 1988 book illustrates that these questions are not new:
As information technology is used to reproduce, extend, and
improve upon the process of substituting machines for human
agency, it simultaneously accomplishes something quite different.
The devices that automate by translating information into action
also register data about those automated activities, thus
generating new streams of information. … The same systems that
make it possible to automate office transactions also create a vast
overview of an organization’s operations, with many levels of data
coordinated and accessible for a variety of analytical efforts
[63:9].
Automation of work has been important in the Participatory
Design (PD) community from the start. The origin of PD
addressed automation of blue-collar work by computers and
provided alternative views on technology and automation [44, 54]
and suggestions for alternative technology designs (e.g., UTOPIA
[20-22], Florence [5-7]). As work is still important for most
people, we suggest that PD researchers address the challenges that
design of the new data-driven technologies pose to current work
and workers. We see this as a “big issue” for PD [17] both because
of its assumed potential for automating a large number of jobs and
because of the public debate about data-driven technologies. This
is the theme of this paper.
Data-driven approaches have been discussed in PD and
neighboring research fields with the aim to understand how people
use such systems (e.g., [23, 26, 42, 47, 60, 62]) as well as how
data scientists create such systems [18, 42]. When looking at the
practices of data scientists, Muller et al. [42] confirm that data are
constructed: they are wrangled, weeded away, and even invented
when data is missing (see also [14, 24]).
For PD it may be just as interesting to look at how future users
of a data-driven technology can be involved in design of their
future. Muller and Liao [43] suggest using design fiction as an
approach to discussing the experience of using such a system. In
line with Holmquist [31], Bratteteig and Verne [12] discuss how
the behavior of systems based on Machine Learning (ML) is
almost impossible to predict and hence difficult to design with the
PD ambition to maintain users’ control of the system. However,
they argue that traditional PD tools and techniques can be used to
discuss and design the human (manual) parts of Artificial
Intelligence (AI) or ML based systems.
In this paper, we will give an example of how it is possible to
use a prototyping approach in a PD process to discuss with future
users how to work with a data-driven system.
As a basis for our discussion in this paper, we build on
technical knowledge about AI, ML and data-driven technologies
[50, 51, 61] as well as on conceptualizations of automation [46].
ML in the form of algorithms and data-driven approaches is used
to automate a practical or cognitive task and thereby remove this
task from people’s manual work. When routine tasks are
automated, exceptions and deviations from the routines are often
left to humans since complicated non-routine tasks require the
experience and knowledge of a human operator. Bainbridge [3]
argues that humans without experience from routine work may not
be able to handle the non-routine cases, what she calls “the ironies
of automation”. Verne and Bratteteig [59] show that “full
automation” often does not include all tasks; there are often some
tasks left for people to carry out. These “residual” tasks may seem
fragmented and not very logical since they are based on what is
left after automation rather than the logic of the task chain.
A note on concepts before we continue. ML has emerged as a
field within AI, and is now the dominating method for developing
software for applications that previously were out of reach for
traditional programming. ML is an approach to finding patterns in
large data sets by using statistical methods. These patterns can be
used for recognition or predictions of various sorts [14, 34].
However, as the training of an ML application is often based on a limited data set, the results are expected to improve over time, since more data gives a better basis for recognizing patterns. The patterns that emerge over time from real-life data may turn out to differ from the patterns found in the training data (e.g., [48]). The fact that ML integrates new data continuously introduces an element of unpredictability into its reasoning: it is not possible to know in advance how a data-driven ML system will behave. In this paper,
we are not concerned with the mathematics and the different
approaches to ML; hence we will use the notions of ML and data-
driven approaches interchangeably, without concern for the
precise ML paradigms, algorithms or statistics. Our concern here
is to design for users that will interact with a tool based on ML.
In this paper, we discuss how we can design the manual tasks
that are carried out by people when data-driven automation is
integrated in the work. We start with a brief introduction to our PD
approach before we describe the case. Furthermore, we explain
and discuss the dynamic relation between automation and manual
tasks for people to carry out. The final sections discuss the effects
of data-driven approaches on the individual user’s work and on
society, and how in some types of work the social responsibility is
integrated in the professional competence and judgment in the
work.
2 A PD approach to automation
PD is concerned with the users’ experiences of meaning and
quality of their work, which is not always maintained by
automation. In this paper we want to explore if and how PD can
provide a way to walk the line between automation and manual
work, to discuss which parts of the work should be automated and which should not. We are particularly interested in automation of work which
has repercussions outside the content of the work itself, where a
larger audience also will be influenced by the results of the
automation, for example in public services or media (e.g.,
[59,64]). In this paper, we focus on automation where ML is used
to automate cognitive work with a particular attention to the
professional judgments exercised in that work.
2.1 Experiencing future automation
AI and ML are increasingly becoming a topic for PD. Bratteteig and Verne [12] discuss how mundane, everyday experiences with algorithmic and ML-based applications such as text autocorrect, recommendation systems [34] or chatbots [30, 40] can help us better understand how such technologies function in use.
In line with [43], they argue that design of applications which
include AI can be explored through “design fiction” [19].
Moreover, they argue that prototyping such systems is not possible because of their unpredictability: the systems change their behavior as the data set grows through use, and a linear improvement with a growing data set is not guaranteed [1].
In this paper, we present a case where automation based on a data-
driven approach using ML is explored together with the future
users by means of a prototype. In PD, prototyping is often used for
(mutual) learning about technical possibilities and as a way to
engage users in co-construction [10, 11]. Prototyping enables
future users to be included in making design decisions and also to
experience the effects of those decisions [13, 32] in the same way
as designers use sketches and prototypes in design [28, 29, 38,
53]. Making a prototype, even a simple mock-up [4], creates a
basis for imagining how the users’ activities and context may
change when the new technology is introduced.
Many PD projects have emphasized prototyping as particularly
important for equal collaboration in design when users are not
experts in design or the technology [7, 11, 13, 20, 21, 22, 33, 35,
39, 49]. Experiencing concrete representations of future systems is
beneficial for both users and designers as “conversations with the
design material" [53] where the effects of a design decision are experienced. "Prototypes embody design hypotheses and enable
designers to test them.” [29: 299]. Designers learn from
concretization in their work: “In engineering, enlightened trial and
error, not the planning of flawless intellects, has brought most
advances" [29: 91]. Future users also benefit from trying out prototypes that concretize design possibilities and exemplify a particular design, i.e., a particular way of automating a task, while it is still easy to change or undo it.
As the results of data-driven systems will change over time, it can
be difficult to prototype the behavior of this kind of data-driven automatic system [12, 18, 31]. However, our case from a
newspaper illustrates how a prototype of an automatic system can
be used for exploring the preferred limits of automation together
with the future users [27]. We discuss the role of prototyping in
designing manual work that depends on dynamic decisions made
automatically by the algorithms.
2.2 A work-oriented approach to automation
It is possible to start automation of work tasks by focusing on
what is considered important in work rather than what is possible
to automate. We build on a conceptualization of automation by
Verne and Bratteteig [59]. They start out by assuming that
automation removes some tasks from somebody’s work, and
leaves other tasks to be carried out manually. The manual tasks
not included in the automation are “residual” tasks (see Figure 1,
upper right corner). Verne and Bratteteig argue that the “residual”
tasks left after automation are more fragmented and incoherent for
the users than the original set of tasks and introduce new
challenges for the people that the automation was designed to
support. Automation often results in new tasks, very often
concerned with administration and interpretation of the
automation. Different ways of automating work may leave
different “residual” tasks to be carried out manually (see Figure 1,
lower left corner).
Instead of basing automation on what automation can do, Verne and Bratteteig [59] suggest designing the automation as a support for the work that the humans carry out. This approach emphasizes designing support for the manual tasks that the humans want to
keep, which is different from letting the manual tasks emerge as
residue from the automation (see Figure 1, lower right corner).
Figure 1. Illustration of automation of a subject matter, based
on [59]. Automation removes tasks from the user (top),
different automations remove different tasks (bottom left), but
a coherent set of manual tasks is possible (bottom right).
Because of its look, Verne and Bratteteig [59] call this approach "the wig". However, they do not explain how to
design a coherent set of tasks that will constitute a “wig” for
someone. The prototyping in our case aimed to demonstrate how
this could be done by designing automation tools to support
human work.
3 A case of automation with Machine
Learning
Automatic publishing of online news is currently being
introduced in many newspapers. Large newspapers have editors
that check the results from the automatic publishing, but the
smaller newspapers cannot always afford that. In addition, some
news articles are automatically generated from numeric data, for
example a description of a football match generated from data
about goals and players.
Our case is from the newsroom of a large national newspaper
in Norway. The newspaper is part of a large media house where a
tool for automated, data-driven front-page curation is being
developed in-house. The tool has been taken into use in another
subscription newspaper in the media house, but not yet in our
newspaper, where it is about to be introduced. For a newspaper relying mostly on advertisement revenue, a good online front-page that attracts readers and generates clicks is very important for sales (and more so than for the fellow subscription newspaper trying out the system). The PD project was not an official part of the newspaper's systems design process but played a role by giving
input to the design process about preferences and needs in front-
page editors’ work.
3.1 The PD process for designing automation
The PD process that is part of this study was carried out by the
first author, who has three years of experience as a front-page
editor in the newspaper. During this PD project he did not work as
a front-page editor. The PD process started with a future workshop
[36] about the front-page, facilitated by the first author. Two
journalists, one front-page editor and one video producer
participated. After the workshop, he carried out fieldwork in the newsroom, observing front-page editors at work, taking notes on paper, and interviewing one of them in depth. To avoid confusion
about his role as a researcher and his previous role as a front-page
editor, the observations and interviews with other front-page
editors were carried out outside of their work hours. The data from the workshop and the fieldwork were analyzed to identify what front-page editors consider important as well as problematic in
their work. This analysis constituted the basis for selecting
functionality for the prototype.
In the spring of 2018, the first author was invited to attend a
presentation from the in-house ML team of the new algorithm-driven tool for front-page creation, called Curate. Curate was under
development and a first, rudimentary version of this tool was
about to be taken into trial use in another newspaper in the media
house. In this presentation, he learned about the technical
functionality of the tool, and how the algorithms dynamically calculated front-page placements of articles based on parameters set by journalists as well as data about reader behaviour. He learned
about functionality that the tool was planned to contain when
finished. At this point, the graphical interface for interacting with
the Curate tool was lacking.
As a follow-up to this presentation, he interviewed a front-page editor who had experience with using the tool in the subscription newspaper. This interview was recorded and transcribed.
On this basis, the first author developed prototypes to illustrate
different ways for interacting with this tool. The prototypes were
used in a workshop with two front-page editors, exploring how the
prototypes supported how they wanted to work with the front-
page.
The prototypes were of two kinds. The main prototype
consisted of a large set of wireframes printed on paper, where
some of the main ones were made interactive in order to illustrate
manipulation of real news. A simple script was written to populate
the interactive prototype with real data from the news database.
The editors could interact with this prototype, getting an experience of what working with an automatically generated front-page would be like. After the presentation of the prototypes, the front-
page editors were given time to discuss freely, while the first
author took handwritten notes.
3.2 The work of a front-page editor
The newspaper delivers all sorts of news on a daily basis:
general news, investigative journalism, feature, politics, culture,
and sports. The journalists are organized in teams according to
these sections, and they all fight for visibility on the front-page.
Three front-page editors are responsible for manually curating the
front-page of the online version of the newspaper (Figure 2).
Figure 2. The three desks of the front-page editors in the
newsroom.
A range of different tools is used in the newspaper by front-page editors and journalists. The journalists use a Content
Management System (CMS) called Create for writing and editing
articles. The CMS is a text editor where metadata tags, such as the
status of the article draft, news value, news lifetime (i.e., how fast
the news value of the article should drop), category, and byline
can be added, see Figure 3.
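As a rough illustration (not the actual Create schema, which is internal to the media house), the metadata a journalist attaches to an article could look something like this:

```javascript
// Hypothetical sketch of article metadata set in the Create CMS.
// All field names and value formats are our own illustration.
const articleDraft = {
  id: "article-12345",
  status: "ready-for-publication", // status of the article draft
  newsValue: 4,                    // journalist's judgment of importance (scale assumed)
  newsLifetime: "6h",              // how fast the news value of the article should drop
  category: "politics",            // section / sub-feed the article belongs to
  byline: "Jane Doe",              // illustrative name
  headline: "…",
  image: "…"
};
```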
The editors use the front-page editing tool Dr. Front for
curation of the front-page. It offers a preview of the front-page,
where article teasers can be re-arranged and resized, and the actual
content of the teasers (i.e., images and headlines) can be altered
directly. The article teasers appear in the system as feeds from the
different news sections (e.g., sport, celebrities, feature), hence the
Dr. Front tool handles a number of such sub-feeds that are curated
into the main feed appearing on the front-page. Dr. Front is based
on a WYSIWYG approach, providing the editors with an
overview of the front-page that is visually identical to what is
presented to the readers, see Figure 4.
Figure 3. The journalists’ writing tool Create.
A second kind of tool used by the editors is communication and
monitoring tools. The communication tools are used to
communicate with journalists in the different sections about the
articles they deliver. For this they use email and phone, and also
Slack (instant messaging) and Trello (a digital kanban board). The
many channels require continuous and simultaneous attention and
can create stress.
The monitoring tools are used to monitor reader engagement: a
real-time dashboard displays metrics such as the click-through rate
(the percentage of front-page visitors actually clicking on an
article teaser), the total number of page views per article, and the
average time spent reading each article. The front-page editors
interpret and curate the front-page using data from these systems.
For instance, an article that has a high number of views per minute
but short average reading time may indicate that the teaser
promises more than the article delivers (i.e., “clickbait”) and
should be changed.
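As a minimal sketch of how such a reading of the metrics could be expressed (the metric names and thresholds below are assumptions for illustration, not the newspaper's dashboard logic):

```javascript
// Illustrative heuristic only: flags a teaser whose click rate is high
// but whose readers leave quickly, i.e., it may promise more than the article delivers.
function looksLikeClickbait(metrics) {
  const { viewsPerMinute, avgReadingTimeSec } = metrics; // assumed metric names
  return viewsPerMinute > 100 && avgReadingTimeSec < 20; // thresholds made up for illustration
}
```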
Figure 4. The manual front-page editing tool Dr. Front.
Front-page editors curate content. They understand their curation work as selecting, organizing, and presenting news: they select what to include on the front-page, they organize the order and layout of teasers, and they present content in an enticing way.
The investigations of front-page editors’ work made it clear
what they consider important in their curation:
- Creating balance: creating "the mix" that the newspaper wants to present on the front-page. The mix is important for making the newspaper relevant when events happen in the world. The front-page editors want to be able to control and adjust the front-page content according to the situation, e.g., "on election night, I would want [the main part of the front-page] to consist of mainly political news articles, with perhaps some celebrity stories to 'break it up'. On a Sunday morning, however, I would want it to include more 'easy reading' content" (Front-page editor in evaluation workshop).
- Considering proximity to other content, in particular avoiding unfortunate combinations of teasers. One editor gave an example of an "'embarrassing combination', such as putting a story about weight loss and a story about eating disorders next to each other" (Front-page editor in evaluation workshop).
- Identifying potential: identifying and changing teasers that are underperforming by evaluating the whole front-page against what is going on in the world right now. The front-page editors want to be able to curate the front-page: if they consider an article to have a potential for higher reader interest than the reader data shows, they may edit the teaser hoping to reinvigorate the article.
In order for the front-page editors to curate the front-page, they need to
- communicate about the articles with the journalist (who wrote the article) and sometimes with their front-page editor colleagues, both before and after they publish them.
- be aware of what their colleagues publish. They use the front-page as a shared view of what is going on, as a log of colleagues' work. The editors know what is going on in the newsroom and can prepare the front-page for what they expect is coming.
- have access control to avoid interference from colleagues while they work on editing parts of the front-page (this is supported in the current manual tool).
3.3 The new data-driven system: Curate
The new algorithm-based and data-driven curation tool about
to be introduced creates a “feed” of news that will automatically
be positioned on the front-page. To do this, the system combines
data about reader behavior with parameters set by the journalists,
and calculates a “ranking” that gives an article a position on the
front-page according to its ranking. The journalists set two
parameters: news value and news lifetime for their articles, and the
system collects data about reader clicks per minute and reader
time for this type of news to calculate the ranking. However,
editors can give these metrics a weighted score, like deciding that
clicks per minute should only account for 10 percent of how the
algorithm ranks articles.
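A hedged sketch of this kind of ranking, not the actual Curate algorithm (which is internal to the media house), might combine the journalist-set parameters with weighted reader metrics along these lines:

```javascript
// Illustrative ranking sketch; names, scales and the decay model are assumptions.
function rankArticle(article, metrics, weights, now) {
  const ageHours = (now - article.publishedAt) / 3600000; // timestamps in milliseconds
  // The news value set by the journalist decays over the article's stated lifetime
  // (a linear decay is assumed here purely for illustration).
  const decay = Math.max(0, 1 - ageHours / article.newsLifetimeHours);
  const editorial = article.newsValue * decay;
  // Reader metrics are assumed to be normalized to a 0-1 range before weighting.
  const behaviour =
    weights.clicksPerMinute * metrics.clicksPerMinute +
    weights.readingTime * metrics.readingTime;
  return weights.editorial * editorial + behaviour;
}

// An editor's weighting choice: clicks per minute count for only 10 percent.
const exampleWeights = { editorial: 0.6, clicksPerMinute: 0.1, readingTime: 0.3 };
```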
Articles from the different sections of the newspaper (e.g.,
sport, celebrities) appear as different sub-feeds to the main feed
into the Curate system. Curate fetches articles created by the
journalists from these sub-feeds as well as articles with a high
average reading time fetched from special metrics sub-feeds.
In the first version of Curate, which was taken into use in the
subscription newspaper in the media house, teasers for the front-
page were created automatically. The system fetched the images
and headlines provided by journalists through the CMS for
automatic publication.
Curate can also be used for creating personalized front-pages,
e.g., so that a reader will not be presented with articles s/he has
already read or only be presented with articles that the algorithms
calculate as relevant based on the reader’s previous clicking and
reading behavior.
Curate includes some openings for the editors to overrule the
rankings from the algorithm. The top of the front-page, i.e., the 1-
3 articles on the top, is still controlled by the front-page editors
manually. This enables the newsroom to ensure that the news they consider most important is displayed as the top story, even if it is
ranked lower than other articles by the Curate algorithm. In
addition, an editor can change the heading or the photo of an
article provided by a journalist, but in this case s/he will have to
enter the article in the journalists’ CMS system (Create) where the
original article was created and edit it there.
During the presentation of Curate the development team also
introduced a list of planned future functionalities for controlling
the algorithms:
1. control of news value and news lifetime:
integrating the journalists’ tags on news value and
news lifetime with the editing tool so that the
front-page editors can edit these parameters within
Curate
2. weighting parameters: possibility to access and
weight parameters such as clicks per minute
3. blend sub-feeds: control how much content should
be pulled from each sub-feed from the various
sections
Support for editing teasers on the front-page by combining elements from the current front-page editing tool Dr. Front with the new automated tool included:
4. manually positioning teasers on the front-page (dragging and dropping teasers in and out of columns representing different areas of the front-page, e.g., the top section) and altering the order of a column
5. editing headlines and images while in Curate
6. A/B testing for trying out different versions of a teaser and measuring how many readings they generate
7. previewing auto-generated layout
8. grouping related content by linking teasers
9. internal notes with additional information about
teasers
This list was used as requirements for the prototypes that were
developed. In the first, rudimentary version of Curate, the front-
page editors had few and mainly indirect tools for actively and
dynamically curating the front-page. This first version of Curate
was used as the basis for prototyping how the editors in our
newspaper could work with curating the front-page more directly.
4 Prototyping of automation with ML
The aim of the PD prototyping was to provide the front-page
editors with something very concrete to relate to when discussing
the automation of their work. The paper wireframes and
interactive prototypes aimed to explore how a user interface could
interact with and control the algorithmic functionality of Curate.
The basis for the prototypes was what the front-page editors
emphasized as important in their work.
The interactive prototype was made using HTML, JavaScript and CSS, and it was run locally on the first author's computer. Implementing the prototype in code rather than creating "flat" sketches in a drawing tool (like Adobe Illustrator or Sketch) made it
easy to populate the prototype with real data. Additionally, this
made it possible to re-use CSS from other tools used by the
journalists (e.g., the Create CMS), so that the prototype would
appear as an integrated part of their tool suite (Figure 5).
4.1 Prototyping automated tasks
The paper wireframes were made to illustrate how the user
interface could incorporate automation supporting the editors’
curation work by selecting, organizing, and presenting news. One
paper wireframe was made to illustrate each of the future
functionalities in the list above.
In addition, aspects of the wireframes that addressed the layout
and altering of teasers were implemented as an interactive
prototype, see Figure 6. The interactive prototype was made to
explore how front-page editors would respond to not having
control over the visual appearance of the front-page, and simulated
how it could “feel” for the front-page editors to interact with and
control the algorithm for automatic curation and get support for
editing layout and teasers. The prototype consisted of a dashboard
view with overview of teasers and an editor view with access to
teaser details such as headline and image (Figure 6). A script
fetched data from the newspaper API, and populated the prototype
with real examples of headlines and images; real-life but not real-
time data.
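The populate script itself was not published with the paper; a minimal sketch of what such a script could look like, assuming a hypothetical endpoint, element id and field names, is:

```javascript
// Sketch of a script that populates the wireframe with real (but not real-time) data.
// The endpoint, element id and field names are assumptions for illustration.
async function populatePrototype() {
  const response = await fetch("https://newsroom-api.example.com/articles?limit=20");
  const articles = await response.json();
  const list = document.querySelector("#teaser-list"); // container in the wireframe
  for (const article of articles) {
    const teaser = document.createElement("li");
    teaser.innerHTML = `
      <img src="${article.imageUrl}" alt="">
      <h2>${article.headline}</h2>
      <span class="news-value">News value: ${article.newsValue}</span>`;
    list.appendChild(teaser);
  }
}
populatePrototype();
```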
The interactive prototype contained functionality for:
- Setting the news value and lifetime of each teaser directly through the dashboard view.
- Dragging and dropping teasers in and out of columns.
- Dragging and dropping teasers to alter the order of each column.
- Clicking a teaser's editor icon to open an editor view where text fields, news value and news lifetime could be altered.
All alterations to text fields or the news value and lifetime were
purely visual. The prototype was not connected to any actual
front-page, hence it did not produce any output.
4.2 Evaluating the prototype
Discussing the prototype with the two front-page editors pointed to improvements of the prototype from the perspective of the work. The editors were skeptical about teasers automatically
appearing on the front-page: today all teasers are quality checked
by the front-page editors (in Dr. Front) before publishing. The
prototype displayed the top article from each sub-feed
automatically at the top section (e.g., sport, leisure), while the
editors wanted to be able to control and adjust this selection
according to different situations, such as the aforementioned
election night example. If automated curation is introduced, it will
be important to be able to override some of its results.
What the front-page editors missed most in the prototype was
the possibility to create drafts, which they often do in Dr. Front.
They make drafts to prepare teasers in advance, and to discuss the
wording and appearance of teasers with other people in the
newsroom before they become visible to the readers.
The two editors agreed that having "full control over visual details such as font sizes was not … the most important"; however, control of the appearance enables the editors to create more nuanced teasers through, e.g., the size of the headline. The prototype suggests a division of labor where the editor suggests the words and selects a standard formatting style (e.g., breaking news, magazine); the editors cannot decide the font and size of the teaser title directly.
The discussion about this concrete prototype confirmed that the most important aspects of the front-page editors' work are concerned with "the mix": the balance of news and the form of the teasers should represent the newspaper and its identity consistently over time and confirm its role as a serious medium in society for its readers.
Figure 5. The interactive prototype looked like it was
integrated in the front-page editors’ tool suite, i.e., how it
would appear to the front-page editors.
Figure 6. The prototype of the dashboard of the editors’ new data-driven tool, Curate. Articles enter the list to the left when a
journalist publishes them, and the editor can locate them on the front-page by dragging them to the column for “On top” or
“Anywhere”. The editors can see and edit the title and teaser as well as the news value and lifetime set by the journalists.
One of the editors said that “if we were to curate the front-page
solely based on numbers, it would be all celebrities and sports on
top”. Another editor stated that “it is the editorial aspect, finding
the right images and headlines, that takes time”. From their
knowledge about each news case, the editors find a good title and
a picture to communicate the article. The front-page editors learn
what works for attracting readers through practice, seeing the
readers’ responses to their curation of the front-page. They look
for content that will make a good teaser, for example a conflict,
and use humor or puns to make it catchy. An algorithm can
identify phrases, types of sentences, photo croppings (e.g., face vs.
overview) etc. that generate clicks, however, it cannot calculate a
good pun or wordplay, a good quote from the article, or decide
which of a set of portraits fits best with the news in question.
The experience with the interactive prototype allowed the editors
that participated in the workshop to discover and discuss concrete
tasks that they risk missing or want to do in a future curation
support system. They discussed how automatic actions would be
part of their work and what could be done automatically, and
where they wanted manual control.
Based on this case, there are several more generally applicable
points we want to make in the following discussion. One is to
discuss these prototypes as a design for the “wig” of Verne and
Bratteteig [59]; another is to discuss implications of designing for interacting with a data-driven open system. Finally, we discuss
the importance of an interactive prototype for designing for
limited automation.
5 Designing the limits of the automation
The case presented above is a “classic” PD case, where
workers are facing a future where their job will vanish or the job
content is severely transformed or reduced. However, we want to
make the point that the PD approach in this case is not “just” a
protest against change and a wish to keep work as it is now. The
current front-page editing tool (Dr. Front) provides the front-page
editors with the level of control necessary to exercise judgement
and calculate the relative importance of each teaser [27]. It
supports specific positioning of teasers, which the front-page
editors would like to keep but which is not supported by the new,
rudimentary tool Curate. Curate also does not support situational
awareness as the automation of parts of the curation makes the
news articles less visible to the editors until after publishing.
When parts of the curation are automatic, the front-page editor loses
the overview of the front-page and hence the control over its
content. The newsroom loses its role as the center of the
newspaper.
5.1 Professional competence does not
automate
There are two aspects of the automation of front-page editing
that point beyond the apparent position of conservative users
wanting to maintain work as it is. Front-page editors exercise
judgment in their work. They evaluate the newspaper’s profile and
maintain the front-page “mix” in a continuously changing world.
Prioritizing news and curating the front-page is part of their
competence: we have seen it expressed as a wish to be able to
identify and edit article teasers that underperform. The judgment
and situated awareness needed to create a good front-page is
difficult or even impossible to automate. Secondly, the front-page
editors also care for the role that their newspaper plays in society,
balancing the responsibility to present important news with the
wish to satisfy customers. Curating the front-page to counteract
“filter bubbles” by presenting a responsible “mix” is also an
important part of their professional competence. We will come
back to these aspects of their work in the next section.
Here, the main point illustrated by the case is that it is possible to
meet automation plans with design from the point of view of the
work (and workers) even when data-driven automation is included in the plans.
5.2 A holistic approach to automation
Instead of basing automation on what automation can do, the prototype builds on Verne and Bratteteig's [59] suggestion to design the automation with functionality to support the work that
editors carry out. The prototypes start with what is considered
important in work rather than what is possible to automate.
In our case, most tasks involved in manually curating the front-
page will be removed from the editors by the Curate system. Since
the CMS system Create is outside of Curate, teasers can be
improved by changing a picture or the header in Create, which can
be characterized as "residual" tasks (see Figure 1).
Curate automates several of the front-page editors’ tasks currently
in Dr. Front, but the editors still have to monitor the front-page
and edit teasers manually. Some of the old tasks that they
appreciate, e.g., the cognitive tasks of exercising judgment, are not
accessible or only accessible in an indirect way: for example, they
need to change the parameters that the algorithms use (e.g., news
value and lifetime) in order to change the position of a teaser. As
the editors’ main concern is to secure that the front-page reflects
important events happening in the world, they need to be able to
update the teasers continuously as events unfold. The algorithms
do not “know” the events in the world before they have become
news articles written in Create: algorithms only work on historical
data.
The prototype in our case illustrates a “wig”, hence our case
demonstrates an approach to designing how automation tools can
be integrated in a work tool that maintains the parts of the work
that the editors consider most important. By working with and
trying out the prototypes (the paper wireframes and the interactive
prototype) different tasks and tools could be demonstrated for the
editors. This made their possible future work content concrete and thereby available for discussion.
When discussing the prototype, the editors were particularly
concerned with not being able to control “the mix” on the front-
page directly. We argue that designing the automation so that the
editors will maintain their access to directly influencing the front-
page is a way of designing "the wig". To change a teaser by leaving Curate (i.e., exiting Curate, entering Create, where the article was written, changing the teaser there, saving, and returning to Curate to pick it up again for automatic publishing) represents a more fragmented and less logical way of carrying out this task.
Switching between tools may lead to more errors and mistakes.
Allowing the editors to change a teaser without leaving Curate is a
way of designing for a more coherent work process, i.e., for “the
wig”.
6 Automation in closed or open systems
The front-page is currently the result of the editors’ work
process formulating teaser titles, choosing an accompanying
photo, and placing the teaser on the front-page. The automation of
curating the front-page will place article teasers on the page
according to their ranking from news value and duration as well as
reader data. The new work of ranking the article can be distributed
between the journalist (importance and time) and the algorithm
processing reader data. In this perspective the editors will monitor
the front-page and adjust the teasers in designated fields on the
page, similar to other automated processes where the human
operator monitors a process and intervenes when the automation
gets it wrong or when there is a crisis (cf., [3, 61]).
This approach to automation presupposes that the production of
the front-page is a "closed system" where the internal relations between the elements are the basis for all the front-page editors'
decisions. The composition of the concrete front-page includes
curating the best “mix” of teasers by forming them (headlines and
photos) as they appear in the sub-feeds: the curation concerns the
front-page as an object.
Newspapers and their front-pages are, however, not closed
systems: they should to some extent mirror a continuously
changing world. The presentation of news is itself a part of society and hence influences society. The front-page is not only an
object: it is also a medium. Knowledge about events in the world
is crucial for the selection, organizing and presenting of news that
constitutes front-page curation. Events in the world become news
when they are published, hence news cannot become data until
after someone has published them. Automatic presentation of
news will always be based on historical data. The part of front-
page editing that relates to the societal role of the newspaper has
to be a human task.
Being able to manually curate the preferred “mix” is therefore
important to support: the preset priorities of the sub-feeds may not
give the preferred “mix”. Designing “the wig” instead of the
automation allows for designing for the editors’ tasks directly.
Automation of the front-page editors' work therefore has a wider scope than the editors' work content: automation of a newspaper's front-page is also important for the readers and, in a wider sense, for society. The front-page editors' work influences the news for us as individual readers and as members of a society. We maintain
that the front-page editors’ emphasis on the “mix” signals their
responsibility for society. Societal responsibility cannot be delegated to automatic processes [16, 45].
6.1 Algorithmic personalization
When the front-page is generated automatically, the readers’
clicks and reading behaviours that can be detected and counted by
the newspapers’ computers are important. Such data will be used
by the algorithms to dynamically present the front-page so that it
(in principle) reflects the readers’ interests. The “mix” on the
individual reader’s front-page will be a direct result of what the
reader clicks on. For the front-page as an open system, there will
be no direct control of the readers’ behaviors, and hence no
control of which data is generated and fed to the algorithms that
rank articles (e.g., [58]).
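A minimal sketch of what such click-driven personalization could amount to, assuming a per-reader profile of read articles and category clicks (this is our illustration, not the newspaper's implementation):

```javascript
// Minimal personalization sketch; the profile structure is an assumption.
// Articles already read are dropped, and the rest are boosted by the reader's
// click history per category before sorting.
function personalizeRanking(articles, readerProfile) {
  return articles
    .filter((a) => !readerProfile.readArticleIds.has(a.id))
    .map((a) => ({
      article: a,
      score: a.baseRank * (1 + (readerProfile.clicksByCategory[a.category] || 0) / 100),
    }))
    .sort((x, y) => y.score - x.score)
    .map((entry) => entry.article);
}
```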
Personalization of the front-page content and layout will have the
effect that the editors no longer have a shared view and that they
do not know exactly what the readers see. Every reader will have
their own version of the front-page based on their clicks and other
datafied information [27] that is available about their interests and preferences.
A possible consequence of a personalized view of the front-page is the emergence of filter bubbles for groups of readers with similar news preferences, preferences that are reinforced by the news they are shown [2, 9, 41]. Readers may only see news that confirms their worldview, and will not be presented with news that challenges this view. Everyone will have their own newspaper but risk missing out on important articles that an editor could have curated for them.
Our case newspaper will take measures to counteract filter bubbles by providing Curate tools for manually curating the "top" of the front-page (the 1-3 top stories), and for manually monitoring the next section, which everybody will see, by allowing the top news from each of the sub-feeds (note that the parameters of these sub-feeds should be possible to set and change manually in Curate). Only the lower part of the front-page will be automatically generated based on "datafied reader" behavior. By this design, Curate will give the editors their editorial leeway independent of readers' individual digitalized preferences. However, the way the automated curation operates will be a feature of the program, parameters and algorithms taken together, and may change as the tool changes over time with increasing amounts of data. The editors will be relegated to manipulating the parameters used by the algorithms, a task quite different from manipulating the front-page directly with a WYSIWYG tool.
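To make the division of the front-page concrete, the composition described above could be sketched roughly as follows; the function and field names are our own illustration, not the Curate implementation:

```javascript
// Sketch of the front-page composition: a manually curated top, a section of
// sub-feed top stories that everyone sees, and an automatically ranked
// (and possibly personalized) lower part. All names are illustrative.
function composeFrontPage(manualTop, subFeeds, autoRankedRest) {
  const top = manualTop.slice(0, 3);                            // 1-3 stories placed by the editors
  const sectionTops = subFeeds.map((feed) => feed.articles[0]); // top story from each sub-feed
  return [...top, ...sectionTops, ...autoRankedRest];           // lower part from the automatic ranking
}
```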
In a fully digitally personalized newspaper, there will be no
printed version that can act as a newspaper’s “official view” that
readers can use to evaluate their own personalized news feed.
Without it, the risk of filter bubbles and echo chambers increases.
However, in our newspaper prototype this is counteracted by reserving parts of the front-page for manual curation even with the new data-driven tool support. Designing a "wig" of tools for the
editors is important for manual control of the results from
automatic ranking and positioning of news articles.
7 Prototyping for designing limited
automation
PD is well set up to design a “wig” consisting of meaningful
and coherent tasks with and for the users. In PD, there is a long
tradition of engaging users in imagining their own work, e.g., the
classic UTOPIA project [20-22, 37, 57]. In UTOPIA, graphical
workers co-designed the work tools they wished to have together
with designers, by simulating tools that did not yet exist. They
used mock-ups: a cardboard box represented a laser printer and a
backdrop with a screen represented a new text editing tool for
offset printing. The mock-ups enabled the users to participate in
designing their work process (producing newspaper pages)
including designing the automated support for what they
considered important in work.
Mockups and Wizard of Oz techniques can be used to prototype
tools when the work is directed towards a concrete object or can
be seen as a closed system, also for ML experiences [15].
Producing a newspaper can be seen as a closed system only when
the competence and judgement involved concern the look and
quality of the concrete page or newspaper. However, a newspaper that reflects events in the world will be an open system. We will
argue that we need to go beyond mock-ups when the work
concerns a medium or service that both reflects events in the world
and has effects beyond itself: when the work is part of an open
system. An open system interacts with its context in unpredictable
ways, so that the parameters cannot be set in advance but need to
be continuously adjusted. The prototype in our case was a
working prototype that simulated the data-driven algorithm in a
concrete way. We argue that the prototype demonstrates that, in order to give the users a feeling for what it is like to work with ML (and we agree with [12] that this may be impossible), the prototype does not need to implement a functioning ML. The results of ML
change over time as the data sets grow. More data is expected to
give better results, and the future experience with the prototype
will be difficult to evaluate with only a small initial data set
[12, 31]. However, we argue that PD in such cases benefits from prototypes that illustrate the automation based on real or at least realistic data.
By concretely experiencing the new prototype tool, getting your "hands-on the future" [22], it is possible to imagine how the work
tasks will change. It becomes easier to discuss which tasks should
be automatic and which not, and which tasks should be possible to
control by the human. Based on our case we argue that the
prototype has to be a functioning prototype rather than a paper
mock-up when ML and data-driven automation is part of the
future solution: the users should be able to experience how the
automatic actions will be part of their work and influence their
own tasks and their content. Getting the experience of what is
possible enables them to ask relevant questions about what is
possible to get from the tool as well as what they want to avoid.
With this as a basis, prototyping the dynamics of an ML system, e.g., moving an article lower on the front-page due to a low click rate, was not necessary for investigating what the front-page editors
valued in their work. The prototyping aimed at discussing the
editors’ manual work, not at prototyping the automation correctly.
ML is based on probabilistic computations on abstract
representations of quantitative data. Data may be wrong [8],
misleading or biased [14], and it is not obvious how this will
affect the results. In addition, in sophisticated ML there are many
layers of data and statistics [24, 25], hence it is not easy to see
through the processed result. The abstraction makes it difficult to
understand and get an overview, and discussing the effects of
different ways of automation when the automation is not
understood makes little sense. Automatic tools that perform operations based on their processing of data are not easy to imagine unless you experience them in a concrete way through a prototype.
The focus on a prototype that functions and can demonstrate how
it will work as a tool goes beyond “Wizard of Oz” and mock-ups
of the UTOPIA project, and is more similar to the prototype in the
contemporary Florence project. The Florence prototype was a
simulation of a patient administrative system in a hospital
although it was designed as a stand-alone system with no links to
systems with real patient data [5, 6]: to make it useful the nurses
manually entered real patient data during the pilot period. Our case also involves a functioning prototype where the data are real (real-life but not real-time), and where the functioning of the prototype is based on knowledge of how the data will appear in the work tool that is being designed.
A prototype where the automatic decisions are transparent will
enable its users to better understand and control the automation.
Haapoja and Lampinen [26] describe a news recommender system "that used tracked reading time to recommend articles from whitelisted websites" (p. 1). As the readers knew the rating system, they also knew how to interpret the recommendations because they understood the behavioral data they were based on (e.g., knowing that accidentally leaving the computer while an article was open could feed faulty data into the recommendation system). Data-driven systems with
simulations and discussions in a PD process.
Our case tells the story of a design taking the situated work into
account, indicating that a thorough understanding of the front-page
editors’ work is important for the design of the prototype.
Designing for the “wig” will always be design for situated work;
situated not only in the newspaper organization but also in society.
The editors’ judgmental decisions involve knowledge and
concerns that cannot be defined and described in precise detail for
all situations and circumstances.
The front-page editors have experienced how they can present
teasers to attract readers, and they want to avoid only celebrities
and sports on top of the front-page, which is what a merely data-
driven algorithm would give. They want to present a wider range
of news types and they know how they best can maintain a good
“mix”. Tools and tasks for exercising social responsibility in an
open system will be part of the “wig”. Automation of work will in
many cases have wider effects than the concrete consequences for
those who carry out the work. Media are workplaces that have a role in society; the results of their work have societal
consequences. The quality of and results from their judgments
influence a larger public. Editors and journalists take care of the
quality of the judgments involved in good practice [52] as part of
their work.
8 Concluding remarks
In this paper, we have presented a case where a prototype was
developed in order to concretize how a data-driven tool can be part
of front-page editors’ work. The basis for the prototype was
intimate knowledge about front-page editors’ work, and the aim of
the prototype was to illustrate how the front-page editors could
maintain what they consider important professional skills and
competence when working with a more automatic and data-driven
tool. The prototype simulated the data-driven tool so that the front-
page editors could discuss and engage in the design of their future
work. The paper demonstrates that data-driven algorithms can be
simulated in prototypes and be used in PD processes. We argue
that the prototype needs to be interactive in order to give a realistic experience of the automation and how to interact with it, and that a Wizard of Oz setup or a mock-up requires too much of an imaginative leap from the users. The paper prototype gave the users a basis for
discussing the form of the tool, while the interactive prototype
gave them a basis for designing the interaction.
New technologies, like AI, ML and data-driven technologies, represent big issues [17] for society and therefore also challenge PD. We think it is important that the PD community
meet these challenges and continue to demonstrate that alternative
technology designs are possible. ML poses a particular challenge
to PD as it changes over time by learning new things as new data
are included during use: it is inherently unpredictable [1, 12, 31].
Moreover, it is based on digitally represented quantitative data that
will not represent all the concerns [55, 56] that, e.g., a front-page
editor has. Prototyping is and has always been an important
part of PD, but requires an understanding of the technology, i.e.,
the algorithms and data, and which role these can and should play
in a work process. Understanding the technical possibilities and
limitations is important for selecting what to prototype: a data-
driven system is difficult or impossible to prototype in a realistic
way.
The prototype we have presented is an example of designing for people's work rather than pushing (data-driven) automation to its limits, which is what Verne and Bratteteig [59] call "the wig". The "wig"
is a tool for thinking about a different approach to designing
automation in work, by emphasizing the human part rather than
the automation.
Today, several of the elements discussed and prototyped in the PD
process we have described in this paper have become parts of the
current Curate system version. Many of the needs identified in our
case have been confirmed by needs analyses carried out in other
newspapers in the media house. The media house has made use of
the prototype for designing a tool that supports the situated, open-
ended work of its newspaper editors rather than seeing the front-
page as a closed system and aiming for maximum automation.
ACKNOWLEDGMENTS
We thank the participants at the newspaper who were willing to
spend their time in this project.
REFERENCES
[1] Vegard Antun, Francesco Renna, Clarice Poon, Ben
Adcock, and Anders C. Hansen. 2019. On instabilities
of deep learning in image reconstruction - does AI come
at a cost? arXiv preprint arXiv:1902.05300.
[2] Hunt Allcott and Matthew Gentzkow. 2017. Social
Media and Fake News in the 2016 Election, Journal of
Economic Perspectives. 31 (2): 211-236.
[3] Lisanne Bainbridge. 1983. Ironies of automation,
Automatica, 19 (6), 775-779.
[4] Liam Bannon and Pelle Ehn. 2013. Design Matters in
Participatory Design. Routledge international handbook
of participatory design. Jesper Simonsen and Toni
Robertson. London, Routledge.
[5] Gro Bjerknes and Tone Bratteteig. 1987. Florence in
Wonderland. System Development with Nurses, in Gro
Bjerknes, Pelle Ehn and Morten Kyng (eds): Computers
and Democracy. A Scandinavian Challenge. Avebury,
Aldershot.
[6] Gro Bjerknes and Tone Bratteteig. 1988. The memoirs
of two survivors or evaluation of a computer system
for cooperative work, Proceedings for The Second
Conference on CSCW, ACM, Sept. 26-28 1988,
Portland, Oregon: 167-177.
[7] Gro Bjerknes and Tone Bratteteig. 1995. User
Participation and Democracy. A Discussion of
Scandinavian Research on System Development,
Scandinavian Journal of Information Systems, 7 (1): 73-
98.
[8] Ingunn Björnsdottir and Guri Verne. 2018. Exhibiting
caution with use of big data: The case of amphetamine
in Iceland's prescription registry. Research in Social and
Administrative Pharmacy. 14(12): 1195-1202.
[9] Engin Bozdag and Jeroen van den Hoven. 2015.
Breaking the filter bubble: democracy and design,
Ethics and Information Technology. 17 (4): 249-265.
[10] Eva Brandt, Thomas Binder, Elizabeth Sanders. 2012.
Ways to engage telling, making and enacting. Chapter 7
in J. Simonsen and T. Robertson (eds): Routledge
International Handbook of Participatory Design.
Routledge, 145-181.
[11] Tone Bratteteig, Keld Bødker, Yvonne Dittrich, Preben
Mogensen, and Jesper Simonsen. 2012. Methods:
Organizing Principles and General Guidelines for
Participatory Design Projects. Chapter 6 in Jesper
Simonsen and Toni Robertson (eds): Routledge
International Handbook of Participatory Design.
Routledge, 117-144.
[12] Tone Bratteteig and Guri Verne. 2018. Does AI make
PD obsolete? Exploring challenges from Artificial
Intelligence to Participatory Design. Proceedings of
Participatory Design Conference. Volume 2, article 8.
[13] Tone Bratteteig and Ina Wagner. 2014. Disentangling
Participation. Power and Decision-making in
Participatory Design, Springer CSCW series
[14] Meredith Broussard. 2018. Artificial unintelligence:
how computers misunderstand the world. Cambridge,
MA, MIT Press.
[15] Jacob T. Browne. 2019. Wizard of Oz Prototyping for
Machine Learning Experiences. Extended Abstracts of
the 2019 CHI Conference on Human Factors in
Computing Systems. Glasgow, Scotland UK, ACM: 1-6.
[16] Andrew Burton-Jones. 2014. What have we learned
from the Smart Machine? Information and
Organization. 24 /2014: 71-105.
[17] Susanne Bødker and Morten Kyng. 2018. Participatory
Design that Matters - Facing the Big Issues. ACM
Transactions on Computer-Human Interaction. Vol. 25,
No. 1, 4:1-31.
[18] Graham Dove, Kim Halskov, Jodi Forlizzi and John
Zimmerman. 2017. UX Design Innovation: Challenges
for Working with Machine Learning as a Design
Material, CHI, Denver, Colorado, USA.
[19] Anthony Dunne and Fiona Raby. 2011. Design noir:
The Secret Life of Electronic Objects, Birkhäuser, Basel.
[20] Pelle Ehn. 1989. Work-oriented design of computer
artifacts. Arbetslivscentrum and Lawrence Erlbaum,
Hillsdale NJ.
[21] Pelle Ehn. 1993. Scandinavian design: on participation
and skill. In Schuler, D. & Namioka, A.(eds).
Participatory Design: Principles and Practices,
Lawrence Erlbaum, Hillsdale NJ. 41-77.
[22] Pelle Ehn and Morten Kyng. 1991. Cardboard
Computers: Mocking-it-up or Hands-on the Future. In
Joan Greenbaum and Morten Kyng (eds.) Design at
Work: Cooperative Design of Computer Systems. CRC
Press/Lawrence Erlbaum, New Jersey. 169-195.
[23] Asbjørn Følstad and Petter B. Brandtzæg. 2017.
Chatbots and the New World of HCI, interactions,
24(4):38-42.
[24] Lisa Gitelman. 2013. “Raw data” is an oxymoron,
MIT Press.
[25] Ian Goodfellow, Yoshua Bengio, and Aaron Courville.
2016. Deep Learning, MIT Press.
[26] Jesse Haapoja and Airi Lampinen. 2018. ’Datafied’
Reading: Framing Behavioral Data and Algorithmic
News Recommendations. NordiCHI’18, Oslo.
[27] Sverre Norberg-Schulz Hagen. 2019. Automation
design from situated work. Master Thesis, Department
of Informatics, University of Oslo, Norway.
[28] Björn Hartmann, Scott R. Klemmer, Michael
Bernstein, Leith Abdulla, Brandon Burr, Avi Robinson-
Mosher, and Jennifer Gee. 2006. Reflective Physical
Prototyping through Integrated Design, Test, and
Analysis. UIST'06, ACM: 299-308.
[29] Björn Hartmann, Loren Yu, Abel Allison, Yeonsoo
Yang, and Scott R. Klemmer. 2008. Design as
Exploration: Creating Interface Alternatives through
Parallel Authoring and Runtime Tuning. UIST'08,
ACM: 91-100.
[30] Jennifer Hill, W. Randolph Ford, and Ingerid G.
Farreras. 2015. Real conversations with artificial
intelligence: A comparison between human-human online conversations and human-chatbot conversations.
Computers in Human Behavior, 49: 245-250.
[31] Lars Erik Holmquist. 2017. Intelligence on Tap:
Artificial Intelligence as a New Design Material,
interactions, 24(4), pp. 28-33.
[32] Stephanie Houde and Charles Hill. 1997. What Do
Prototypes Prototype? in M.G. Helander, T.K. Landauer
& P.V. Prabhu eds: Handbook of Human-Computer
Interaction. Elsevier Science, Amsterdam, 367-381.
[33] Thomas R. Iversen and Suhas G. Joshi. 2015.
Exploring spatial interaction in assistive technology
through prototyping. Procedia Manufacturing Vol. 3:
158-165.
[34] Michael I. Jordan and T. M. Mitchell. 2015. Machine
learning: Trends, perspectives, and prospects. Science,
349 (6245): 255-260.
[35] Suhas G. Joshi. 2017. Designing for Capabilities: A
Phenomenological Approach to the Design of Enabling
Technologies for Older Adults. PhD dissertation.
University of Oslo.
[36] Robert Jungk and Norbert Müllert. 1987. Future
workshops: How to Create Desirable Futures. London,
England, Institute for Social Inventions
[37] Finn Kensing and Joan Greenbaum. 2012. Heritage: Having a say. Chapter 2 in Simonsen, J. & Robertson, T. (eds): Routledge International Handbook of Participatory Design. Routledge, 21-36.
[38] Scott R. Klemmer, Björn Hartmann, and Leila
Takayama. 2006. How Bodies Matter: Fine Themes for
Interaction Design. In Proceedings of DIS 2006, ACM,
140-149.
[39] Henrik Korsgaard, Clemens Nylandsted Klokmose,
Susanne Bødker. 2016. Computational alternatives in
participatory design: putting the t back in socio-
technical research. PDC 2016, ACM: 71-79.
[40] Q. Vera Liao, Muhammed Mas-ud Hussain, Praveen
Chandar, Matthew Davis, Yasaman Khazaen, Marco
Patricio Crasso, Dakuo Wang, Michael Muller, N. Sadat
Shami and Werner Geyer. 2018. All Work and No Play?
Conversations with a Question-and-Answer Chatbot in
the Wild, CHI 2018, Montreal, QC, Canada.
[41] Regina March. 2012. With Facebook, Blogs, and Fake
News, Teens Reject Journalistic "Objectivity", Journal
of Communication Inquiry 36 (3): 246-262.
[42] Michael Muller, Ingrid Lange, Dakuo Wang, David
Piorkowski, Jason Tsay, Q.Vera Liao, Casey Dugan,
and Thomas Erickson. 2019. How Data Science
Workers Work with Data: Discovery, Capture, Curation,
Design, Creation. CHI 2019. Paper 126, 1-14.
[43] Michael Muller and Q. Vera Liao. 2017. Exploring AI
Ethics and Values through Participatory Design
Fictions. Human Computer Interaction Consortium,
Pajaro Dunes, Watsonville, CA, 2017.
[44] Kristen Nygaard. 1996. “Those Were the Days”? Or
“Heroic Times Are Here Again”?, Scandinavian
Journal of Information Systems, Vol. 8, Iss. 2, Article 6.
[45] Kristen Nygaard. 1986. När kunnskap blir vara: Konsekvenser i samhälls- och yrkesliv av kunskapsindustrin [When knowledge becomes a commodity: consequences of the knowledge industry for society and working life]. Lecture at the Linköping Conference on Knowledge-Based Systems, Linköping, Sweden.
[46] Raja Parasuraman, Thomas B. Sheridan, and Christopher D. Wickens. 2000. A Model for Types and Levels of Human Interaction with Automation, IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, Vol. 30, Iss. 3.
[47] Elena Parmiggiani and Helena Karasti. 2018.
Surfacing the arctic: politics of participation in
infrastructuring. In Proceedings of the 15th
Participatory Design Conference: Short Papers,
Situated Actions, Workshops and Tutorial - Volume 2
(PDC '18), Liesbeth Huybrechts, Maurizio Teli, Ann
Light, Yanki Lee, Carl Di Salvo, Erik Grönvall, Anne
Marie Kanstrup, and Keld Bødker (Eds.), Vol. 2. ACM,
New York, NY, USA, Article 7, 5 pages.
[48] Hope Reese. 2016. Why Microsoft’s ‘Tay’ AI bot went wrong, TechRepublic, March 24, 2016.
[49] Toni Robertson and Jesper Simonsen. 2012.
Participatory Design: An Introduction. Chapter 1 in
Simonsen, J. & Robertson, T. (eds): Routledge
International Handbook of Participatory Design.
Routledge, 1-18.
[50] Stuart Russell and Peter Norvig. 2010. Artificial
Intelligence: A modern approach, Pearson, Boston,
2010.
[51] Rita Sallam, Donald Feinberg, Mark Beyer, W. Roy
Schulte, Alexander Linden, Joseph Unsworth, Svetlana
Sicular, Nick Heudecker, Ehtisham Zaidi, Adam
Ronthal, Erick Brethenoux, Pieter den Hamer, and Alys
Woodward. 2019. Top 10 Data and Analytics Trends
That Will Change Your Business. Gartner / ThoughtSpot, April 2019.
[52] Kjeld Schmidt. 2014. The Concept of ‘Practice’:
What’s the Point? Proceedings of the 11th International
Conference on the Design of Cooperative Systems.
COOP 2014, 27-30 May 2014, Nice, France. C.
Rossitto, L. Ciolfi, D. Martin and B. Conein, Springer
International Publishing: 427-444.
[53] Donald A. Schön. 1992. Designing as reflective
conversation with the materials of a design situation.
Knowledge-Based Systems 5(1): 3-14.
[54] Jesper Simonsen and Toni Robertson. eds. 2012.
Routledge International Handbook of Participatory
Design. Routledge.
[55] Markus Stolze. 1993. The Workshop Perspective:
Beyond Optimization of the “Joint Man-Machine
Cognitive System”. In Working Notes AAAI 93 Fall
Symposium Human-Computer Collaboration:
Reconciling Theory, Synthesizing Practice, pp. 113-118.
[56] Lucy A. Suchman. 2006. Human-Machine
Reconfigurations: Plans and Situated Actions.
Cambridge University Press, New York, NY, USA.
[57] Yngve Sundblad. 2010. UTOPIA: Participatory Design
from Scandinavia to the World. Third IFIP WG 9.7
Conference History of Nordic Computing (HINC 3).
Stockholm.
[58] Peter Tolmie, Andy Crabtree, Tom Rodden, J. Colley, and E. Luger. 2016. “This has to be the cats”: Personal Data Legibility in Networked Sensing Systems. CSCW’16, ACM: 491-502.
[59] Guri Verne and Tone Bratteteig. 2016. Do-it-yourself
services and work-like chores: on civic duties and
digital public services. Personal Ubiquitous Comput.
20, 4 (August 2016), 517-532.
[60] Madisson Whitman, Chien-yi Hsiang, and Kendall
Roark. 2018. Potential for participatory big data ethics
and algorithm design: a scoping mapping review. In
Proceedings of the 15th Participatory Design
Conference: Short Papers, Situated Actions, Workshops
and Tutorial - Volume 2 (PDC '18), Liesbeth
Huybrechts, Maurizio Teli, Ann Light, Yanki Lee, Carl
Di Salvo, Erik Grönvall, Anne Marie Kanstrup, and
Keld Bødker (Eds.), Vol. 2. ACM, New York, NY,
USA, Article 5, 6 pages.
[61] Terry Winograd. 1972. Understanding Natural
Language, Academic Press
[62] Q. Yang, A. Scuito, J. Zimmerman, J. Forlizzi, and A. Steinfeld. 2018. Investigating How Experienced UX Designers Effectively Work with Machine Learning, DIS’18, Hong Kong.
[63] Shoshana Zuboff. 1988. In the Age of the Smart
Machine. The Future of Work and Power. Basic Books.
[64] Shoshana Zuboff. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New
Frontier of Power. PublicAffairs.