Proceedings of TMCE 2020, 11-15 May, 2020, Dublin, Ireland, edited by I. Horváth and G. Keenaghan
Organizing Committee of TMCE 2020, ISBN/EAN: 978-94-6384-131-3
DESIGNING SMART SYSTEMS: REFRAMING ARTIFICIAL INTELLIGENCE FOR
HUMAN-CENTRED DESIGNERS
Caiseal Beardow
Faculty of Industrial Design Engineering
Delft University of Technology
caiseal.r.beardow@bath.edu
Willem van der Maden
James Lomas
Faculty of Industrial Design Engineering
Delft University of Technology
w.l.a.vandermaden@tudelft.nl, dereklomas@gmail.com
ABSTRACT
Human-centred design (HCD) is a powerful
methodology that might play an important role in the
development of real-world intelligent systems.
However, present conceptualisations of artificial
intelligence (AI) tend to emphasise autonomous,
algorithmic systems. If humans are not involved in AI
system design, what role can HCD play? This paper
considers perspectives that reframe the role of AI in
smart systems design, with the intention of creating
space for human-centred design methodologies. These
perspectives naturally give rise to opportunities for
HCD by considering human and artificial intelligence
in tandem. Informed by cybernetic theory, we define
smart systems as “the use of outcome data to inform
successful system action”. To illustrate the
practicality of this view, we share three case studies,
each representing a different smart system
configuration: artificial intelligence, human
intelligence and combined artificial-human
intelligence. We describe Battleship Numberline, an
educational game with autonomous artificial
intelligence. We then describe Zensus, a smart system
for health and well-being that leverages human
intelligence alone. Finally, we describe FactFlow,
educational software that combines artificial and
human intelligence. By examining the cybernetic
feedback loops observed in these systems, we
contribute a practical framework for the use of
human-centred design methodology in smart systems
design. This framework is intended as both a
generative tool for designers and a basis for future
research in the field of smart systems.
KEYWORDS
Human-Centred Design, Artificial Intelligence,
Cybernetics, Smart Systems, System Design
INTRODUCTION
In this paper, we consider how we might reframe the
use of artificial intelligence in systems design. AI
seemingly prohibits human involvement by
definition—after all, such involvement would contest
its artificiality. However, excluding humans is
naturally problematic with regard to
human-centred design.
Presently, AI system design can appear to only call for
computer engineering and data science. This raises the
question: is human-centred design relevant to the
design of AI systems? Human-centred design—
broadly defined as an “approach to systems design and
development that aims to make interactive systems
more usable by focusing on the use of the system”
[1]—encompasses valuable design methods, such as
context mapping and need finding [2], that enable
designers to apply its principles in diverse situations.
Since AI systems must integrate with existing
socio-technical processes and support the needs and
values of system stakeholders, HCD could be
invaluable to AI system design.
Rather than designing for artificial intelligence, which
assumes humans to be absent from an intelligent
system, we reframe our objective as designing for
overall system intelligence—i.e., designing for smart
systems. With smart systems as the frame, people and
technology both clearly play a role in supporting
intelligent system action. Within this frame, we
envision design methods that can meaningfully
improve system outcomes in healthcare, education,
and other socio-technical systems—sometimes with
AI and sometimes without, but always with
intelligence.
TOPIC OF INVESTIGATION
This paper investigates the implementation of both
human and artificial intelligence in smart systems
design. We first examine and synthesise definitions of
intelligence from the fields of AI and cognitive
science. Through our literature review, we
subsequently establish support for smart systems
design—both human and non-human—in
fundamental theories of cybernetics. Our inquiries are
followed by case studies that demonstrate
configurations of intelligence in smart systems. This
results in a design framework that draws upon human-
centred design practice to encourage more nuanced
consideration of human and artificial intelligence,
with respect to overall system intelligence.
The core model presented by this paper is based on a
basic logical assertion regarding the potential
configurations of smart systems, shown in Figure 1.
These configurations include artificial intelligence
(namely, without human involvement), human
intelligence (without AI involvement) and a
combination of AI and human intelligence. Our case
studies illustrate the generative nature of this
approach, which advocates for designers to develop
feedback loops centred on metrics of value. This paper
aims to aid human-centred designers in recognising
these feedback loops as the essential nature of a smart
system—not the involvement of a particular class of
statistical or machine learning algorithm.
What is Smart?
The vernacular use of “smart” in design indicates the
presence of prodigious amounts of technology
(smartphones, smart cities and smart cards, to name a few).
This terminology is adequate as a consumer marketing
strategy, but may obscure the underlying properties of
intelligence if similarly used in the context of smart
systems design.
A 2019 definition of "Smart Classrooms", for
instance, encapsulates a typical viewpoint that
technology itself makes systems smart: “Smart
classrooms are technology-enhanced classrooms that
foster opportunities for teaching and learning by
integrating learning technology, such as computers,
specialized software, audience response technology,
assistive listening devices, networking, and
audio/visual capabilities” [3].
Does such a classroom, filled with technology, qualify
as smart? The presence of technology alone does not
seem to be a sufficient criterion. Similarly, a system
that employs machine learning is not automatically
smart either. While marketing campaigns have created
a public perception that the addition of powerful,
cutting-edge machine learning algorithms will almost
magically produce better outcomes, this is clearly a
misconception, as practitioners in related fields can
surely attest.
A more principled definition of smartness seems in
order. However, reaching a practical consensus on the
nature of smartness and intelligence can appear too
challenging a task to be practicable. For this reason,
we begin by referring to a pragmatic and well-known
definition of intelligence from the classic film Forrest
Gump.
The Forrest Gump Theory of Intelligence
Forrest Gump's theory of intelligence was, famously,
that "stupid is as stupid does." The character of Forrest
is his own best proof: while he might have failed
standardised intelligence tests, his decision-making
proved to be enormously successful. Applied to smart
systems, this theory indicates that “smart is as smart
does”—systems are smart when they act to support
overall system success. The concept of action-based
success is key to many accepted definitions of
intelligence, in its various forms.
Defining Intelligence
AI researchers Shane Legg and Marcus Hutter define
intelligence as “an agent’s general ability to achieve
goals in a wide range of environments” [4]. The
successful achievement of goals is therefore core to
artificial intelligence. Goal achievement is the agent’s
prerogative; its behaviour is continuously modified
based on environmental data that is influenced by its
previous actions. The computational nature of
artificial agents allows them to excel in particular at
dataset collection and algorithmic development.
These are the means by which an AI agent perceives,
assesses and responds to its changing environment.

Figure 1 Forms of intelligence and associated smart
system configurations, as illustrated by our
case studies.
Legg and Hutter’s popular definition of (artificial)
intelligence was not developed in a vacuum, but based
on a synthesis of dozens of definitions from
psychology. Robert Sternberg’s Theory of Successful
Intelligence, in a near mirror of Forrest Gump’s
definition, is stated as: “...the ability to achieve one’s
goals in life, given one’s sociocultural context” [5].
Another example is J. S. Albus’ definition of
intelligence: “...to act appropriately in an uncertain
environment, where appropriate action is that which
increases the probability of success” [6]. Additionally,
David Poole states that an intelligent agent acts in a
way that is “appropriate for its circumstances and its
goal” and is “flexible to changing environments and
changing goals” [7]. One further definition is provided
by Peter Norvig, director of research at Google, who
defines intelligence as "the ability to select an action
that is expected to maximise a performance measure”
[8]. In these definitions, we see clear and significant
similarities between artificial and human intelligence:
contextually appropriate actions, leading to success,
whatever that may entail.
Contrary to the definitions discussed above, the
common perception of artificial intelligence views it
as something distinct and separate from human
intelligence. In fact, the involvement of human
information processing is sometimes viewed as a
failure of AI. In actuality, established definitions of
artificial and human intelligence share the same core
characteristics as Forrest Gump’s action-based theory:
intelligence, at its core, involves achieving goals
successfully. By extension, a smart system (and
therefore the actors within it) must act in a manner that
supports the overall system goal—in other words, its
measure of success. Additionally, behavioural
responses are contingent upon contextual factors that
are changeable and sometimes unpredictable. A smart
system must therefore make judgements based on its
goal and the knowledge of its environment amassed
thus far.
Despite the similarities between human and artificial
intelligence, one cannot be said to be equivalent to the
other. The question then arises of how both forms of
intelligence might be brought together, in order to
reframe the design of system intelligence.
Cybernetics and Intelligent Feedback
Loops
Relative to “smart systems”, cybernetics is a far more
widely discussed concept in design-related
literature (Figure 2). Cybernetics offers a
philosophical perspective that permits treating human,
artificial and system intelligence on equal grounds.
Cybernetics is derived from the Greek kubernetes,
meaning ‘steersman’, which is the etymological origin
of ‘government’. The term was coined by the
mathematician and philosopher Norbert Wiener.
Wiener developed a conceptualisation of goal-
directed sensor-actuator feedback loops when
designing anti-aircraft weapons during World War II
[9].
After the war, Wiener became a pacifist and promoted
his cybernetic feedback concepts in the radically
interdisciplinary Macy conferences, held from the
mid-1940s to the early 1950s [10]. He used ‘cybernetics’ to describe a
new, integrated practice that sought to “find the
common elements in the functioning of automatic
machines and of the human nervous system” [11].
Cybernetics has been applied to ecosystems,
economics and even the practice of human-centred
design [12]. Pangaro succinctly encapsulates
cybernetics as “the science of effective action” [13].
In the cybernetic approach, we can see clear
opportunities for the integration of HCD methods,
especially considering that HCD addresses “impacts
on a number of stakeholders, not just those typically
considered as users” [1]. The cybernetic perspective
both acknowledges and leverages this complexity, in
comparison to more traditional forms of smart systems
design where a single entity of governance is assumed.
It offers scope for the design of smart systems with
multiple governing entities, as in an institution or
multi-stakeholder system. The exact configuration of
intelligence will of course vary depending on context,
but—crucially—the cybernetic perspective treats all
such configurations as equally viable.

Figure 2 A Google Ngram showing the relative
prevalence of “Cybernetics”, “Artificial
Intelligence” and “Smart Systems” in English-
language books from the 1940s to the early
2000s.
Defining Smart Systems
Informed by the approaches to system intelligence
discussed above, we define smart systems through the
operationalisation of feedback loops, involving “the
use of outcome data to inform successful system
action”. This definition allows us to frame the design
of smart systems as inclusive of both human and
computational processes.
The perspective of this paper is that systems are
“smart” when there exist cybernetic feedback loops
involving data-informed action. Smart systems exhibit
the circularity, intentionality and interplay of
cybernetics—messages (from sensors) and responses
(of actuators) are connected in a dynamic feedback
loop, each element inseparable from the overall
system. Smart systems may then be considered as a
development of cybernetics theory in a socio-
technical context. Cybernetics offers designers a
perspective with which to approach the complexity
and emergent nature of smart systems, as well as the
information flows within them.
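To make this definition concrete, a minimal feedback loop of this kind can be sketched in code. The names and the simple proportional adjustment rule below are illustrative assumptions, not part of any system described in this paper:

```python
from dataclasses import dataclass

@dataclass
class Goal:
    """An outcome measure and the value the system steers towards."""
    metric: str
    target: float

def feedback_step(goal: Goal, observed: float, setting: float,
                  gain: float = 0.1) -> float:
    """One cybernetic loop iteration: compare outcome data against
    the goal and adjust the system's action (here, one setting)."""
    error = goal.target - observed
    return setting + gain * error  # act to reduce the gap

# Example: steering a hypothetical 'time limit' setting towards
# an engagement target, using outcome data from deployments.
goal = Goal(metric="engagement", target=0.8)
setting = 10.0
for observed in [0.5, 0.6, 0.7]:
    setting = feedback_step(goal, observed, setting)
```

The loop exhibits the circularity described above: each observed outcome informs the next action, which in turn shapes the next outcome.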
This framing of smart systems does not significantly
diverge from earlier definitions. For instance, Georges
Akhras states that the primary goal of smart systems
is to “act and react” in a manner that is predictable and
functional using “sensors and actuators that are either
embedded in or attached to the system to form an
integral part of it” [14]. He frames his definition in
explicitly human terms, defining the human body as
“the ultimate smart system”.
There is an increasing tendency for smart systems to
be designed as fully autonomous, where decision-
making is the sole responsibility of an artificially
intelligent agent—albeit in pursuit of goals pre-
determined by the system’s creator(s). In these
systems, it is the informed decision-making of an AI
agent that imbues the system with the adaptability
known as ‘smartness’. (Despite this, the agent is not
‘smart’ in isolation; sensors and actuators that make
up the system are integral to its adaptability, and
therefore to its intelligence.) Compared to a cybernetic
approach, such autonomous system designs raise
questions concerning how they may, or may not, be
aligned to human values and intentions. As articulated
by Norvig’s previously discussed definition of
intelligence, system intelligence is contingent upon
maximising performance measures. The nature of
these measures can thus have significant
consequences for system stakeholders. The system
intelligence configurations explored in this paper are
unified by the presence of outcome measures. From
the cybernetic viewpoint we present, no individual
configuration is inherently superior—rather, as human
and artificial intelligence are seen as equally valid
system components, so too are smart systems that
utilise them.
METHODOLOGICAL APPROACH
The utility of a cybernetics-informed approach to
smart systems design lies in the recognition of human
intelligence as a valid system component. This
contrasts with a traditional approach to artificially
intelligent system design, which treats the
involvement of human processes as outside its scope.
To illustrate how this perspective can be used in
design, we present and analyse three examples of
recently designed smart systems that involve either
AI, human intelligence or both. These systems aimed
to improve outcomes in education and health care.
Each system was designed in the context of human-
centred design institutions (the Human-Computer
Interaction Institute at Carnegie Mellon University and
the Faculty of Industrial Design Engineering at Delft
University of Technology); the specific design methods varied.
The first of our examples, Battleship Numberline,
employs autonomous Artificial Intelligence to
optimise an outcome measure in an educational game.
The second example, Zensus, uses human intelligence
in medical systems to optimise measures of wellness
in patient care. In our final example, FactFlow, the use
of AI and human intelligence is combined to help
maximise a measure of children’s math fact fluency. As our
definition of smart systems permits both
computational and human intelligence, our examples
thus span the total range of combinations of these
forms of intelligence in smart systems. Each example
utilises iterative outcome measure optimisation in
varying forms, depending on the system’s context and
metrics of success.
COMPLETED WORK
Battleship Numberline: Smart Systems
with Artificial Intelligence
In this case study, an online educational game
(Battleship Numberline) was designed with the goal of
motivating students to practice number line estimation
math problems [15]. Following its deployment online,
the game attracted several thousand students per day.
This population was the basis for a series of system
design optimisation experiments. The online players
were randomly assigned to different game design
variations (Figure 3) in order to observe the effects of
these designs on key outcome metrics. Our metric for
success was player engagement, operationalised as the
voluntary duration of play and measured as the
number of trials completed in each condition.
To investigate the role of AI in system design
optimisation, we implemented and tested several
reinforcement learning algorithms known as “multi-
armed bandits”. A multi-armed bandit searches a
design parameter space for configurations
that maximise a target metric. This algorithm was used to
automatically test variations in the existing game
parameter space (e.g., time limit) and maximise the
success metric automatically, based on system
feedback. The algorithm was designed to optimally
balance the exploration of potential game designs with
exploitation of those that were most successful. As a
result, users were statistically more likely to receive a
version of the game that was optimal for a specified
outcome.
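As an illustration of the bandit loop described above, the following epsilon-greedy sketch selects among hypothetical game-design variants and feeds the engagement metric back into its estimates. The variant names and the epsilon-greedy rule are simplifying assumptions; the deployed system used its own bandit implementation:

```python
import random

def choose_design(stats: dict, epsilon: float = 0.1) -> str:
    """Epsilon-greedy arm selection: usually exploit the variant with
    the best observed mean engagement, sometimes explore at random."""
    if random.random() < epsilon:
        return random.choice(list(stats))
    return max(stats, key=lambda d: stats[d]["total"] / max(stats[d]["plays"], 1))

def record_outcome(stats: dict, design: str, trials_completed: int) -> None:
    """Feed the outcome metric (voluntary trials completed) back in."""
    stats[design]["plays"] += 1
    stats[design]["total"] += trials_completed

# Hypothetical game-design variants (e.g. different time limits).
stats = {d: {"plays": 0, "total": 0} for d in ["5s_limit", "10s_limit", "no_limit"]}
for player_engagement in [12, 30, 7, 25, 18]:  # simulated sessions
    design = choose_design(stats)
    record_outcome(stats, design, player_engagement)
```

Note that such a loop optimises whatever metric it is given, which is precisely the failure mode discussed next.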
While the algorithm worked as intended, the system
became somewhat uncontrolled and primarily
deployed malformed game designs. The algorithm
successfully selected designs that maximised the
outcome metric; yet the metric was clearly misaligned
with the original educational intent. The malformed
designs were engaged with for noticeably longer
periods of time than others, which the algorithm
iteratively detected and reinforced by deploying a
proportionally greater number of similar designs.
In this case study, the system acted diligently to
achieve its goal according to the only measure it had.
It used relevant system data to inform its actions and
optimised its output for the measure of success with
which it had been programmed, thus behaving in a
manner to be expected of an artificially intelligent
agent. However, the system’s behaviour failed to
account for human interpretation of its output. It is
easy to imagine that the students, who encountered
increasingly preposterous game environments,
focused more on novel game mechanics than the
mathematical skills they were intended to test. This
was compounded by the fact that the AI agent also
detected a correlation between (low) difficulty level
and extent of voluntary play, thus presenting students
with increasingly easy and amusingly odd designs. In
such cases, the intended educational value of the
system appeared to have been lost altogether. Students
were certainly engaged by the task at hand, but were
not being challenged and consequently not improving
their skill level.
This case study clearly demonstrates the pitfalls of AI
systems engaging in automatic optimisation with
misaligned success metrics and stakeholder impacts.
Had human behaviours (such as a preference for
novelty, which was in this case unintentionally
exploited) and values been more effectively
integrated, the system’s actions could have been
iteratively shifted to balance the algorithmic
optimisation of an AI agent and the contextual insight
of system stakeholders, such as the students’
educators. Reflecting on this particular case, designers
should plan for the monitoring of AI agents in such
systems to ensure that outputs are not only optimised
for success metrics, but are also meaningfully aligned
with system intentions.
Zensus: Smart Systems with Human
Intelligence
Our second case study, Zensus, is a smart system
designed to support medical systems, using patient
wellness data to enhance care outcomes. Patient-
reported outcome measures (of symptoms, pain,
exercise, or psychological well-being, for example)
can be a powerful source of data in medicine.
However, it is currently difficult for doctors to collect
patient wellness data over time. Zensus seeks to
address this issue through an interactive system that
lets doctors schedule “wellness check-ins” via
patients’ mobile phones. This allows for data
collection after patients leave the hospital setting and
go about their daily lives. Patient data is then used to
inform medical practitioners’ actions through
wellness outcome measures, thus transforming
medical care into a smart system. Data is labelled and
categorised via the interface to ensure consistency
over time and ease of interpretation for medical
practitioners.

Figure 3 Design variations of Battleship Numberline. In
the game, players estimate the location of a
hidden submarine, which is revealed when a
player clicks the estimated location.
The Zensus mobile environment is designed to make
self-reporting on wellness a positive experience. It
introduces the notion of a “two-minute wellness
check-in” to assure patients of the measurement’s
brevity. If a doctor schedules two wellness check-ins
per week for two months, the patient commits a total
of just 32 minutes to self-reporting. The system also
allows doctors to compose
messages to patients in response to their answers,
which could potentially support automated AI
interventions (for example, suggested topics to discuss
based on patients’ recent data and medical history).
Zensus does not currently employ AI within its smart
system, but instead adds intelligence to existing
medical systems. Medical systems are already
intelligent, insofar as there are measures and scales
that doctors use to chart patient progress. The
innovative nature of Zensus lies in providing wellness
measures that include both physical and psychological
elements (Figure 4) and can be monitored effectively
over time. The system enables practitioners to
use different questions and survey instruments, but
places emphasis on models of human well-being
from positive psychology, which constitute its
notion of ‘wellness’.
Ultimately, the aim of Zensus is to create data
feedback loops (Figure 5) involving patient wellness
outcomes that will result in more personalised and
effective care. Zensus can make insights into the
behavioural needs of patients—such as exercise, sleep
and socialisation—clearer and more accessible to
practitioners, in addition to supporting long-term
treatment plans. It collects and aggregates relevant
data, presented in the form of wellness outcome
measures, that can be interpreted by medical
practitioners, whose medical and contextual
knowledge best equips them to make informed
decisions in their patients’ interests. Additionally, the
existing patient-practitioner relationship is leveraged
as a source of trust and an anchor around which the
digital interface is built. In this way, the needs of both
patient and practitioner are considered and met
through the means of a smart system. This draws upon
principles of human-centred design, where the needs
of all system stakeholders are considered as important
design objectives. Non-users and inter-stakeholder
relationships are treated as key elements of the
system’s design.
Although the system is currently governed solely by
human intelligence, AI algorithms could be
introduced in future to provide automated
recommendations for treatment and practitioner-
patient interactions. By drawing upon an AI agent’s
ability to track and analyse larger datasets over time,
suggestions could be produced for check-up topics
and perhaps even probable co-morbidities, based on
patients’ medical history and self-reported data. This
would undoubtedly be of benefit in increasingly
complex and saturated healthcare systems, where
consultations are (by necessity) often brief. Zensus
also aims to support medical research studies that use
longitudinal wellness measures (i.e. repeated
measurement over time, as utilised in Zensus’
scheduled check-in system). Many medical research
studies aim to identify interventions that improve
patient quality of life. Zensus can serve as a platform
that facilitates interactions between medical
researchers and practitioners through data. Finally,
there is mounting evidence that overall wellness
correlates significantly with hospital readmissions and
chronic disease-management costs [16]. By
facilitating measurement of patient wellness
outcomes, Zensus supports human intelligence
through digital means, in a smart system that can
improve the quality and efficiency of medical care.

Figure 4 Various user interfaces employed by Zensus
for psychological wellness data acquisition.

Figure 5 Zensus is designed to support the above
feedback loop to enhance system intelligence.
By facilitating the collection of patient well-
being data, Zensus aims to help intelligent
human decision-making.
FactFlow: A Smart System that
Combines Artificial and Human
Intelligence
Our third case study, FactFlow, is a smart system that
helps children develop math fact
fluency. Fact fluency involves the automatic,
effortless retrieval of basic math facts. Fluency helps
improve conceptual math learning by reducing
cognitive load, as factual knowledge need not be
calculated. Fact fluency is relatively rare in America;
recent data from the National Center for Education
Statistics (NCES) indicates that 19% and 29% of
students, in 4th and 8th grades respectively, perform
below basic levels in mathematics proficiency [17].
There exist many digital tutors that can help children
improve fact fluency. Based on a child’s pattern of
responses, algorithms are used to predict the child’s
abilities in different skills, enabling more appropriate
content difficulty and skill selection. Furthermore,
these algorithms model the rate at which
children forget facts, allowing practiced items to
be spaced optimally for information retention.
(Repetitive practice of a single fact tends to be more
effective when spread over time.) At the same time, it
is common for parents to take an active interest in their
children’s education and to work with them to develop
their arithmetic abilities. However, parents (and
indeed all humans) do not possess the capacity to
remember every fact encountered by their children in
previous sessions. It is here that the digital processing
ability of AI can support the parent-child relationship.
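A toy version of such adaptive fact selection can be sketched with an exponential forgetting curve, one common modelling choice. The half-life values and the facts below are invented for illustration and are not FactFlow’s actual model:

```python
def recall_probability(hours_since_practice: float,
                       half_life_hours: float) -> float:
    """Exponential forgetting curve: predicted recall decays with
    time since the fact was last practiced."""
    return 2.0 ** (-hours_since_practice / half_life_hours)

def next_fact(facts: dict, now_hours: float) -> str:
    """Select the fact the child is most at risk of having forgotten."""
    return min(
        facts,
        key=lambda f: recall_probability(now_hours - facts[f]["last_seen"],
                                         facts[f]["half_life"]),
    )

# Hypothetical practice history: last practice time and per-fact half-life.
facts = {
    "7x8": {"last_seen": 0.0, "half_life": 24.0},   # hard fact, forgotten fast
    "2+2": {"last_seen": 0.0, "half_life": 240.0},  # easy fact, retained long
}
# After 48 hours, "7x8" has the lowest predicted recall, so it is
# the next item to practice.
```

A real tutor would also update each fact’s half-life from the child’s responses; the sketch only shows the selection step.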
The design of FactFlow uses AI algorithms that are
intended to help support parent-child interactions in an
educational context. The key objective of FactFlow is
to efficiently and measurably improve Math Fact
Fluency. The success metric for this system is
therefore the number of correct answers per minute,
using items drawn randomly from a set of math facts
within which children of varying ages are expected to
have fluency. For example, 1st grade children practice
only addition and subtraction, while 5th graders and
above are expected to know math facts concerning
addition, subtraction, multiplication and division.
Weekly assessments (Figure 6) are used to determine
the number of correct answers per minute that a child
can answer. Items are randomly pulled from a bank of
items representing children’s target ability upon
completing their current grade. Because of this
randomisation, improvements in the number of items
answered correctly per minute represent real gains in
fluency. As the metric “number correct per minute” is
somewhat abstract and difficult for parents to
conceptualise, a goalpost measure is used for 100%
fluency. For instance, in 4th grade, 12 items per
minute represents 100% fluency, so a child answering
6 items per minute has 50% fluency. This aims to
provide parents with a clear goal for their child’s
progress.
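The goalpost measure reduces to a simple capped ratio, sketched below; the function name is ours, not FactFlow’s:

```python
def percent_fluency(correct_per_minute: float, goalpost: float) -> float:
    """Translate the abstract 'correct answers per minute' metric into
    a percentage of the grade-level goalpost, capped at 100%."""
    return min(100.0, 100.0 * correct_per_minute / goalpost)

# In 4th grade, 12 items per minute represents 100% fluency, so a
# child answering 6 items per minute is at 50% fluency:
fluency = percent_fluency(6, goalpost=12)
```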
FactFlow’s algorithms provide parents with an
optimised sequence of math facts to practice with their
children, but the system is, by design, reliant on
parental involvement. Parents are able to use human
intelligence to motivate their children and coach them
effectively when they need help. The system AI
provides indications of progress through weekly
assessments, ability tracking and skill mastery
identification, but these indications must be processed
and acted upon by the parent. In this way, artificial and
human intelligence are integrated into a smart system,
with the former supporting the latter. With a combined
intelligence system such as this, clear opportunities
arise for human-centred design methods. Although the
child is the end user, their relationship with key
stakeholders (namely, their parent) is central to the
system’s success. The system is designed to embody
humanistic values of educational support and active
parenting, utilising human and artificial intelligence as
distinct but compatible system components.
Figure 6 On the left, the parent-mediated adaptive
system activity. Parents read the question aloud
and then select whether their child answers
quickly, slowly, incorrectly, etc. In the middle,
a child-facing activity offers traditional
multiple choice questions. On the right, an
example weekly assessment in FactFlow, using
percentage fluency per minute of use as a
success metric.
150 Caiseal Beardow, Willem van der Maden, James Lomas
Although the theoretical benefits of this system are
clear, further research is required to ascertain its
efficacy in practice. A potential study could be
executed as follows: 40 parents will be recruited to
participate online, using paid advertisements. They
will be paid $20 each for completing a one-month
session comprising 4 weekly assessments. They will also be
sent daily practice reminders with 2 minutes of
optimised math practice. Participants will be
randomly assigned into two experimental conditions.
In one condition, parents will deliver the facts
themselves (parent-mediated adaptive system in
Figure 6) and in the other condition, parents will
provide their child with a digital program for
individual practice (child-facing adaptive system in
Figure 6). In both conditions, the algorithm that selects
facts to practice will be the same. The study will
compare improvement on each of the weekly
assessments, using the initial assessment as a baseline
to determine progress. This will allow us to evaluate
any improved efficacy as a result of the involvement
of human intelligence in a smart system that is
otherwise driven by AI.
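The random assignment and baseline comparison described above could be sketched as follows. This is a hypothetical illustration, not the actual study code; the function names, condition labels and data layout are our own.

```python
import random
from statistics import mean

def assign_conditions(participants, seed=0):
    """Randomly split participants into the two experimental conditions
    (labels are ours, mirroring the two arms described in the text)."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"parent_mediated": shuffled[:half],
            "child_facing": shuffled[half:]}

def mean_gain(weekly_scores):
    """Mean improvement over the baseline (first) assessment, averaged
    over participants. weekly_scores maps each participant to a list of
    percentage-fluency scores, one per weekly assessment."""
    gains = [scores[-1] - scores[0] for scores in weekly_scores.values()]
    return mean(gains)
```

Comparing `mean_gain` between the two groups returned by `assign_conditions` would then estimate the added value of the parent-mediated condition.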
Evaluation of system efficacy is based upon the gains
in percentage of fluency per minute of use (the
system’s metric of success). In using this data to
optimise the design of the FactFlow system, the
development team becomes a data-informed smart
system in itself.
OUTCOME AND FINDINGS
The case studies detailed above show how smart
systems may incorporate human and artificial
intelligence on equal grounds—a perspective that is
often lost when viewed from the frame of designing
for artificial intelligence. Each case offers different
findings, with implications for design, which we
outline below.
Our first case study, Battleship Numberline,
demonstrates that system metrics are not necessarily
an accurate translation of designers’ intentions and
values, which can lead to unexpected and potentially
detrimental results. Measures that initially appear to
be appropriate can cause a system to deviate from its
original purpose if they do not account for the human
behaviours that are not explicitly represented by
incoming data. Algorithmic optimisation, left
unchecked by a system’s designers and stakeholders,
can unintentionally thwart designers’ efforts.
Our second case study, Zensus, shows how
contextually relevant data acquisition and
representation can support existing systems that are
driven by human (as opposed to artificial) information
processing. Sometimes the inclusion of AI is not the
most contextually appropriate decision—instead,
“smart” data feedback loops can be focused around
human participants, with meaningful and positive
outcomes.
Our third case study, FactFlow, suggests that artificial
and human intelligence can complement each other in
smart systems, provided that their respective
strengths and weaknesses are accounted for. AI might
process and recall large datasets far beyond human
capabilities, but humans can contextually assess the
suggestions it provides in ways that AI cannot
emulate. This shows that the involvement of human
intelligence is not necessarily a failure of artificial
intelligence, but rather a useful feature of smart
system design.
SIGNIFICANCE OF THE WORK AND
FINDINGS
We offer a definition of smart systems, "the systematic
use of outcome data to inform successful actions”, and
illustrate this with three case studies (Figure 7). In this
definition, outcome data is acquired in relation to
metrics for success. Intelligent actions are inclusive of
both artificial and human behaviours, but are
characterised by adaptability and purposive iteration
towards a specific goal. In light of the findings of our
case studies, we place particular emphasis on the
definition of humanistic values and outcome metrics,
so that smart systems can be aligned with human-
centred designers’ intentions whilst self-optimising
and adapting to environmental changes. This
definition is therefore intended to engage human-
centred design practitioners in contributing to
successful smart systems implementation.
Figure 7 Feedback models for the three case
studies—using artificial intelligence,
human intelligence and a combination of
both
Abstracting from these feedback loops, we present a
model of smart systems and associated heuristic
account (Figure 8) that can act as a generative
framework for designers. As seen in Figures 7 and 8,
our model concerns the creation of feedback loops that
are common to all system configurations explored in
this paper, and indeed to intelligence generally, as
previously discussed. Whether such feedback loops
are artificial or based in human processes—or a
combination of the two—our model provides a basis
for more humanistic smart systems design.
As seen in Figure 8, increasing the measurability
of outcome metrics (i.e. allowing stakeholders to
identify areas of need) and enabling appropriate
response actions to modify the system allow system
designers to iteratively improve the system as a whole.
These two aspects of system improvement can be
abstracted to context-specific tasks for system
designers, who can in turn draw upon appropriate
design methods to achieve them.
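As a rough illustration of the loop in Figure 8 (the function names are ours, not a formal part of the model): a system measures an outcome against a metric, selects a response action, applies it, and repeats. The action selector may be an algorithm, a human stakeholder, or a combination of both.

```python
def smart_system_loop(measure_outcome, select_action, apply_action,
                      target, max_iterations=10):
    """Minimal sketch of a smart-system feedback loop: measure an
    outcome against a success metric, choose a response action,
    apply it, and repeat until the target is met. Returns the
    history of measured outcomes for inspection by designers."""
    history = []
    for _ in range(max_iterations):
        outcome = measure_outcome()
        history.append(outcome)
        if outcome >= target:
            break
        apply_action(select_action(outcome))
    return history
```

The key design decisions, per the model, are what `measure_outcome` measures (value-aligned metrics) and who or what implements `select_action` (artificial, human, or combined intelligence).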
Implications for Design
Our findings suggest an opportunity for change in how
we as designers approach smart systems, from self-
contained algorithms to a wider interplay between
artificial and human actors. Existing HCD methods
can play a role in this shift.
System mapping (outlining the current state of a
system and the points at which possible change or
improvement could be enacted [18]) can be
undertaken to identify key points for data acquisition,
in relation to outcome measures. The data acquisition
process should also be considered thoroughly,
including methods for collection, dimensions of
analysis, interpretation and potential responses.
System mapping can also aid designers in identifying
potential sites of intervention, as well as the system
actors and stakeholders involved. This allows for the
construction of a potential action space in which a
variety of possible responses can be explored
concurrently.
Designers’ ability to acquire and analyse qualitative
data can be leveraged to address the human values
underlying a smart system, in addition to determining
which quantitative data sources within a system are
most appropriate for constructing metrics that account
for these values. Broadly speaking, these design tasks
can be split into two categories: facilitating
measurement of outcomes, and of actions in response,
as shown in our system model (Figure 8). This model
indicates that there may be a need for the development
of methods that can help designers appropriately align
human values and needs to metrics. Human-centred
design methodologies may aid designers in
approaching this issue, in addition to guiding smart
system design generally, as discussed above.
Human and Artificial Intelligence in
Smart Systems
The three case studies offer a systemic framework for
enhancing the data feedback loops that are central to
smart systems. Our model of feedback in smart
systems illustrates the behaviours and components
that are common to smart systems of various
configurations: system components take action to
improve outcomes against measures that are based on
the forms of relevant data available to that system.
Crucially, outcome measures should not only take into
account relevant data, but be aligned to the values and
intentions of system designers, who have a nuanced
understanding of the qualitative factors at play.
This is the case for systems of all configurations,
but—as demonstrated by our first case study,
Battleship Numberline—value alignment is
particularly important in systems where artificial
intelligence is the primary actor. There,
fully autonomous AI systems were shown to have
unintended consequences if allowed to optimise for
success metrics, at least when there are no humans “in
the loop”. More specifically, system outputs evolve
over time in iterations that are progressively further
misaligned with human intentions for the system. This
suggests the necessity of human governance in such
cases, to monitor AI activity and gauge the
appropriateness of outputs in relation to intentions.
Figure 8 Above, a model of feedback in Smart Systems.
Below, a heuristic account to support design.
Furthermore, our first case study demonstrated how
optimising for success metrics that do not fully
encapsulate system intentions can cause
misalignment. In Battleship Numberline, the human
intention for the system was for students to practice
mathematics voluntarily in their free time. However,
the chosen success metric—voluntary time spent on
task—did not account for whether students’
performance was improving concurrently. A
combined metric of time spent and performance may
therefore have been more appropriate to optimise.
Sometimes metrics are chosen out of engineering
convenience; advocating for value-metric alignment
may prove to be a key role for HCD practitioners.
Our second case study shows that systems need not
include artificial intelligence to be considered ‘smart’.
A digital interface collects and aggregates patient data
over time, but the system’s intelligence is entirely
human—medical practitioners compare this data with
outcome measures to inform their actions in treating
patients. The Zensus system facilitates a data feedback
loop that is aligned with humanistic outcome
measures (wellness). In fact, considering Akhras'
definition, Zensus creates an interplay between two
smart systems: one socio-technical (medical care)
and one biological (the human body).
Our third case study demonstrates how AI information
processing can be used to support human intelligence,
and vice versa. FactFlow’s system is dependent upon
parents’ ability to motivate and coach their children in
order to be successful, but likewise parents rely on the
algorithm’s ability to optimise fact items and are
guided by their child’s performance data. Artificial
intelligence processes complex numerical information
and provides optimised content for each session,
whilst human intelligence modulates parent-child
interactions. The result is a human-integrated smart
system that plays to the strengths of both artificial and
human agents. Theoretical benefits of such a system
seem clear, but further research is required to
empirically ascertain its efficacy.
Future of Intelligence
Robert Sternberg's Theory of Successful Intelligence
carries several implications for the future of smart
systems. Extending the definition discussed
earlier, Sternberg defines human intelligence as
“...the ability to achieve one’s goals in life, given
one’s sociocultural context; by capitalizing on
strengths and correcting or compensating for
weaknesses; in order to adapt to, select and shape
environments; through a combination of analytical,
creative and practical abilities” [5].
In recent years, Sternberg has proposed an Augmented
Theory of Successful Intelligence. In addition to the
analytical, creative and practical, this theory posits a
fourth dimension of intelligent behaviour: wisdom. He
interprets wisdom as the ability to “ensure that
implementation of the ideas will help ensure a
common good through the mediation of positive
ethical principles” [19].
In Sternberg’s Augmented Theory, an individual not
only seeks to achieve their goals, but to align these
goals to a collective pursuit of ‘goodness’ and well-
being. This description illustrates an ethically
motivated aspect of human intelligence that can
equally be applied to artificial intelligence. By
extension, it also suggests that smart systems—
systems driven by intelligence—should ultimately be
designed to contribute to a global notion of well-
being. As previously discussed, Peter Norvig defines
intelligence as "the ability to select an action that is
expected to maximize a performance measure” [8]. In
this case we wish to maximise well-being, which
raises the question of how we measure and, ultimately,
quantify well-being. The human-centred design
approach discussed in this paper may certainly
provide a starting point, but its efficacy remains to be
seen. Sternberg’s concept of wisdom as a dimension
of intelligence, if applied to the intelligence of
systems, may provide rich opportunities to explore
this further.
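Norvig's definition can be stated almost directly in code. A minimal sketch, in which the expected performance measure (here, an estimate of well-being) is assumed to be given:

```python
def select_action(actions, expected_performance):
    """Pick the action expected to maximize the performance measure,
    per Norvig's definition of intelligence. The hard design problem,
    as discussed above, is constructing expected_performance itself:
    a quantified, value-aligned estimate of well-being."""
    return max(actions, key=expected_performance)
```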
The Risks of Metrics
While the approach advocated by our work could
provide a powerful perspective for human-centred
designers, there are some potential risks that should be
made explicit. Key Performance Indicators (KPIs) and
other human-facing metrics play a major role in
corporate system management. It is increasingly
recognised that metrics, particularly those that are
incentivised, can drive counterproductive behaviour. For
instance, Wells Fargo set success metrics for
sales staff performance that resulted in the creation of
millions of fraudulent bank accounts and, ultimately,
a dramatic loss of value for the company
[20].
The lesson of this example is that metrics can be
mistaken for strategy. While metrics may be signals
that a strategic goal is being achieved, improvements
in metrics are not the same as strategic goal
achievement [20]. Further to this, some metrics are
difficult to quantify or are otherwise only measurable
after the fact, which can affect the adaptability and
longitudinal efficacy of a system.
Limitations
In this paper, our goal was to show that the frame of
smart systems allows designers to consider whether
and how to incorporate human intelligence into
system design, alongside artificial intelligence. As
such, we do not present a quantitative comparison of
the three systems. Instead, we simply aim to show that
smart systems can integrate human intelligence with
artificial intelligence and that sometimes human
intelligence is fully sufficient for making a system
smarter.
We also do not present evidence for the efficacy of our
design framework or feedback model. As this model
was used to inform the design of the second and third
case studies, there is some face-value evidence that the
framework can be generative during a design process.
However, further work is required to comprehensively
evaluate the utility of the model for system designers
and associated disciplines.
Finally, cybernetics and artificial intelligence have a
long history, of which this paper only skims the
surface. Even in the comparatively nascent field of
smart systems, there are many varying
conceptualisations—and we do not claim to represent
or summarise all of them here.
CONCLUSION
To more clearly conceptualise the importance and
contribution of human stakeholders in system designs,
we reframe the objective of smart systems design from
designing for AI to designing for intelligence.
This paper aims to provide guidance for designing
smart systems, through the implementation of
outcome data feedback loops and the use of human-
centred design. HCD can play a major role in smart
system design—namely, mapping existing systems to
understand needs, values and opportunities for
measurement and intervention. Further, designers can
play a critical role in aligning qualitative needs and
values to quantitative metrics. More traditionally,
designers play an important role in creating usable and
engaging UI/UX for collecting relevant data and
delivering interventions.
As illustrated by our case studies, it is both possible
and advantageous to consider smart systems design
from a more holistic standpoint than the algorithm-
centric perspective that prevails today. However,
moving from algorithms to systems presents designers
with new challenges. Drawing upon the practice and
philosophy of cybernetics can aid designers in
examining complex systems and the processes with
which they are designed. By framing artificial and
human intelligence as co-existing, potentially
complementary entities, designers can develop system
metrics that align well to stakeholder values and
needs. Among the many opportunities for continued
research, we are particularly inspired by Sternberg's
theory of wisdom, which suggests how smart systems
might eventually evolve into wise systems if they can
embody support for global well-being. How to design
such a system is as yet unclear, but it is our hope
that the approach presented in this paper can generate
dialogue and collaboration between smart systems
designers and related disciplines.
REFERENCES
[1] International Organization for Standardization.
(2019). Ergonomics of human-system interaction -
Part 210: Human-centred design for
interactive systems (Standard no. 9241-210).
Retrieved from:
https://www.iso.org/standard/77520.html.
[2] Sleeswijk Visser, F., Stappers, P. J., van der Lugt,
R., Sanders, E. B.-N. (2005). Contextmapping:
experiences from practice. In CoDesign 1(2), pp.
119–149.
[3] Kuppusamy, P. (2019). Smart Education Using
Internet of Things Technology. In Emerging
Technologies and Applications in Data
Processing and Management (pp. 385-412). IGI
Global.
[4] Legg, S., Hutter, M. (2007). Universal
Intelligence: A Definition of Machine
Intelligence. In Minds and Machines 17(4), pp.
391–444.
[5] Sternberg, R. (2005). The Theory of Successful
Intelligence. In Interamerican Journal of
Psychology 205 (39), pp. 189-202.
[6] Albus, J. S. (1991). Outline for a Theory of
Intelligence. In IEEE Trans. on Systems, Man,
and Cybernetics 21(3), May/June 1991.
[7] Poole, D., Mackworth, A., Goebel, R. (1998).
Computational Intelligence: A logical approach.
Oxford University Press, New York, NY, USA.
[8] Russell, S. J., Norvig, P. (2009). Artificial
intelligence: a modern approach. Prentice Hall,
NJ, USA.
[9] Ramage, M. (2009). Norbert and Gregory: Two
strands of cybernetics. Information,
Communication & Society, 12(5), pp. 735-749.
[10] Dubberly, H., Pangaro, P. (2015). How
cybernetics connects computing, counterculture,
and design. In Hippie Modernism: The Struggle
for Utopia. Walker Art Center, Minneapolis,
MN, USA.
[11] Wiener, N. (1950). The human use of human
beings. Cybernetics and Society. Boston,
Houghton Mifflin Co, 71.
[12] Krippendorff, K. (2007). The Cybernetics of
Design and the Design of Cybernetics. In
Kybernetes 36 (9/10), pp. 1381-1392.
[13] Pangaro, P. (2017). Cybernetics as Phoenix: Why
Ashes, What New Life? In Cybernetics: State of
the Art, pp. 16-33.
[14] Akhras, G. (2000). Smart materials and smart
systems for the future. In Canadian Military
Journal, Autumn 2000, pp. 25-32.
[15] Lomas, J. D., Forlizzi, J., Poonwala, N., Patel, N.,
Shodhan, S., Patel, K., Koedinger, K., Brunskill, E.
(2016). Interface Design Optimization as a Multi-
Armed Bandit Problem. In CHI’16.
[16] Benjenk, I., Chen, J. (2018). Effective mental
health interventions to reduce hospital
readmission rates: a systematic review. In
Journal of Hospital Management and Health
Policy 2(45).
[17] National Assessment of Educational Progress,
2010. Retrieved from:
https://catalog.data.gov/dataset/2010-national-
assessment-of-educational-progress
[18] Both, T. (2018). Human-Centered, Systems-
Minded Design. In Stanford Social Innovation
Review, retrieved from:
https://ssir.org/articles/entry/human_centered_s
ystems_minded_design#
[19] Sternberg, R. J. (2011). The theory of successful
intelligence. In R J. Sternberg & S. B. Kaufman
(Eds.), Cambridge handbook of intelligence (pp.
504-527). New York: Cambridge University
Press.
[20] Harris, M., & Tayler, B. (2019). Don't Let
Metrics Undermine Your Business. In Harvard
Business Review 97(5), pp. 63-69.
... As large, overarching plans have a tendency to fail, the authors advocate for a formal method known as "muddling through": small, incremental changes at many levels of the organization. In this section, we describe My Wellness Check as a cybernetic "wellbeing feedback loop" that can assess wellbeing and inform responsive action Pangaro, 2019 and2010;Beardow et al, 2020)/ These feedback loops are visualized in Figure 1. ...
... Because I knew he wouldn't fill out a wellbeing survey himself (it would annoy him-I asked), I envisioned how my mother might regularly report on various aspects of his wellbeing. So, I designed a prototype system, Zensus, to help caregivers assess overall patient wellbeing (Beardow et al, 2020) using an approach called Ecological Momentary Assessment (EMA). In short, these are simply messages sent to a smartphone on a regular basis. ...
Conference Paper
Full-text available
My Wellness Check" is a wellbeing assessment system designed to help universities systematically support student and staff wellbeing. In this paper, we present a narrative describing the human-centered design process used to develop a context-sensitive wellbeing feedback system within a large technical university during the COVID19 pandemic. We share quantitative and qualitative findings from the first 2 feedback cycles, where wellbeing assessments were sent to over 30,000 students and staff. By involving community members and decision-makers in the qualitative data analysis, we successfully translated results into administrative policy and community action. Our ongoing design research project highlights the desirability and feasibility of wellbeing feedback loops within large complex systems.
... As large, overarching plans have a tendency to fail, the authors advocate for a formal method known as "muddling through": small, incremental changes at many levels of the organization. In this section, we describe My Wellness Check as a cybernetic "wellbeing feedback loop" that can assess wellbeing and inform responsive action Beardow et al, 2020)/ These feedback loops are visualized in Figure 1. ...
... Because I knew he wouldn't fill out a wellbeing survey himself (it would annoy him-I asked), I envisioned how my mother might regularly report on various aspects of his wellbeing. So, I designed a prototype system, Zensus, to help caregivers assess overall patient wellbeing (Beardow et al, 2020) using an approach called Ecological Momentary Assessment (EMA). In short, these are simply messages sent to a smartphone on a regular basis. ...
Book
Full-text available
Editorial The RSD10 symposium was held at the faculty of Industrial Design Engineering, Delft University of Technology, 2nd-6th November 2021. After a successful (yet unforeseen) online version of the RSD 9 symposium, RSD10 was designed as a hybrid conference. How can we facilitate the physical encounters that inspire our work, yet ensure a global easy access for joining the conference, while dealing well with the ongoing uncertainties of the global COVID pandemic at the same time? In hindsight, the theme of RSD10 could not have been a better fit with the conditions in which it had to be organized: “Playing with Tensions: Embracing new complexity, collaboration and contexts in systemic design”. Playing with Tensions Complex systems do not lend themselves for simplification. Systemic designers have no choice but to embrace complexity, and in doing so, embrace opposing concepts and the resulting paradoxes. It is at the interplay of these ideas that they find the most fruitful regions of exploration. The main conference theme explored design and systems thinking practices as mediators to deal fruitfully with tensions. Our human tendency is to relieve the tensions, and in design, to resolve the so-called “pain points.” But tensions reveal paradoxes, the sites of connection, breaks in scale, emergence of complexity. Can we embrace the tension and paradoxes as valuable social feedback in our path to just and sustainable futures? The symposium took off with two days of well-attended workshops on campus and online. One could sense tensions through embodied experiences in one of the workshops, while reframing systemic paradoxes as fruitful design starting points in another. In the tradition of RSD, a Gigamap Exhibition was organized. The exhibition showcased mind-blowing visuals that reveal the tension between our own desire for order and structure and our desire to capture real-life dynamics and contradicting perspectives. 
Many of us enjoyed the high quality and diversity in the keynotes throughout the symposium. As chair of the SDA, Dr. Silvia Barbero opened in her keynote with a reflection on the start and impressive evolution of the Relating Systems thinking and Design symposia. Prof.Dr. Derk Loorbach showed us how transition research conceptualizes shifts in societal systems and gave us a glimpse into their efforts to foster desired ones. Prof.Dr. Elisa Giaccardi took us along a journey of technologically mediated agency. She advocated for a radical shift in design to deal with this complex web of relationships between things and humans. Indy Johar talked about the need to reimagine our relationship with the world as one based on fundamental interdependence. And finally, Prof.Dr. Klaus Krippendorf systematically unpacked the systemic consequences of design decisions. Together these keynote speakers provided important insights into the role of design in embracing systemic complexity, from the micro-scale of our material contexts to the macro-scale of globally connected societies. And of course, RSD10 would not be an RSD symposium if it did not offer a place to connect around practical case examples and discuss how knowledge could improve practice and how practice could inform and guide research. Proceedings RSD10 has been the first symposium in which contributors were asked to submit a full paper: either a short one that presented work-in-progress, or a long one presenting finished work. With the help of an excellent list of reviewers, this set-up allowed us to shape a symposium that offered stage for high-quality research, providing a platform for critical and fruitful conversations. Short papers were combined around a research approach or methodology, aiming for peer-learning on how to increase the rigour and relevance of our studies. Long papers were combined around commonalities in the phenomena under study, offering state-of-the-art research. 
The moderation of engaged and knowledgeable chairs and audience lifted the quality of our discussions. In total, these proceedings cover 33 short papers and 19 long papers from all over the world. From India to the United States, and Australia to Italy. In the table of contents, each paper is represented under its RSD 10 symposium track as well as a list of authors ordered alphabetically. The RSD10 proceedings capture the great variety of high-quality papers yet is limited to only textual contributions. We invite any reader to visit the rsdsymposium.org website to browse through slide-decks, video recordings, drawing notes and the exhibition to get the full experience of RSD10 and witness how great minds and insights have been beautifully captured! Word of thanks Let us close off with a word of thanks to our dean and colleagues for supporting us in hosting this conference, the SDA for their trust and guidance, Dr. Peter Jones and Dr. Silvia Barbero for being part of the RSD10 scientific committee, but especially everyone who contributed to the content of the symposium: workshop moderators, presenters, and anyone who participated in the RSD 10 conversation. It is only in this complex web of (friction-full) relationships that we can further our knowledge on systemic design: thanks for being part of it! Dr. JC Diehl, Dr. Nynke Tromp, and Dr. Mieke van der Bijl-Brouwer Editors RSD10
Chapter
Full-text available
Smart education is now a typical feature in education emerging from information communications technologies (ICT) and the constant introduction of new technologies into institutional learning. The smart classroom aims users to develop skills, adapt, and use technologies in a learning context that produces elevated learning outcomes which leads to big data. The internet of things (IoT) is a new technology in which objects equipped with sensors, actuators, and processors communicate with each other to serve a meaningful purpose. The technologies are rapidly changing, and designing for these situations can be complex. Designing the IoT applications is a challenging issue. The existing standardization activities are often redundant IoT development. The reference architecture provides a solution to smart education for redundant design activities. The purpose of this chapter is to look at the requirements and architectures required for smart education. It is proposed to design a scalable and flexible IoT architecture tor smart education (IoTASE).
Article
Full-text available
Background: Hospitals in the United States are financially penalized for having a higher than expected thirty-day readmission ratio among patients initially hospitalized for heart failure, acute myocardial infarction (AMI), pneumonia, chronic obstructive pulmonary disease (COPD), coronary artery bypass graft (CABG) surgery, or hip and knee replacement. Patients hospitalized for these conditions that have comorbid mental health diagnoses or symptoms are at high risk for readmission. Methods: We conducted a systematic review to determine if interventions, that are specifically designed to assess or treat mental health symptoms, can effectively reduce risk of readmission following hospitalization for physical health conditions. We searched on PubMed and Google Scholar for peer-reviewed articles published between January 2010 and June 2018 that examined the impact of mental-health interventions on readmissions for physical conditions. Results: After screening 81 full text articles, we found eleven intervention studies, one meta-analysis, and one cross-sectional study that met our inclusion criteria. Only three of the intervention studies found significant differences in readmission rates between intervention and comparison groups. Each of these interventions targeted patients after discharge from the hospital. One of the interventions was a physical health telemonitoring and individual psychotherapy intervention for patients that were initially admitted for heart failure. The second intervention was individual and group psychotherapy sessions for patients who were initially admitted for AMI. The third intervention was a nurse-driven depression care management protocol for home care patients with depressive symptoms who were initially admitted for any physical health condition. The cross-sectional study showed that communities with a stronger, social-based public mental health infrastructure had significantly lower physical health readmission rates. 
Conclusions: The literature identified in this review, appears to provide support for the use of mental health interventions after discharge as a mechanism for reducing physical health condition readmissions. Future research is needed to determine if these interventions can specifically reduce thirty-day readmissions for the six conditions linked to financial penalties.
Conference Paper
Full-text available
"Multi-armed bandits" offer a new paradigm for the AI-assisted design of user interfaces. To help designers understand the potential, we present the results of two experimental comparisons between bandit algorithms and random assignment. Our studies are intended to show designers how bandits algorithms are able to rapidly explore an experimental design space and automatically select the optimal design configuration. Our present focus is on the optimization of a game design space. The results of our experiments show that bandits can make data-driven design more efficient and accessible to interface designers, but that human participation is essential to ensure that AI systems optimize for the right metric. Based on our results, we introduce several design lessons that help keep human design judgment in the loop. We also consider the future of human-technology teamwork in AI-assisted design and scientific inquiry. Finally, as bandits deploy fewer low-performing conditions than typical experiments, we discuss ethical implications for bandits in large-scale experiments in education.
Article
Full-text available
In recent years, various methods and techniques have emerged for mapping the contexts of people's interaction with products. Designers and researchers use these techniques to gain deeper insight into the needs and dreams of prospective users of new products. As most of these techniques are still under development, there is a lack of practical knowledge about how such studies can be conducted. In this paper we share our insights, based on several projects from research and many years of industrial practice, of conducting user studies with generative techniques. The appendix contains a single case illustrating the application of these techniques in detail.
Article
Full-text available
Purpose – The purpose of this paper is to connect two discourses, the discourse of cybernetics and that of design. Design/methodology/approach – The paper undertakes a comparative analysis of relevant definitions, concepts, and entailments in both discourses, and an integration of these into a cybernetically informed concept of human-centered design, on the one hand, and a design-informed concept of second-order cybernetics, on the other hand. In the course of this conceptual exploration, the distinction between science and design is explored with cybernetics located in the dialectic between the two. Technology-centered design is distinguished from human-centered design, and several axioms of the latter are stated and discussed. Findings – This paper consists of recommendations to think and do things differently. In particular, a generalization of interface is suggested as a replacement for the notion of products; a concept of meaning is developed to substitute for the meaninglessness of physical properties; a theory of stakeholder networks is discussed to replace the deceptive notion of THE user; and, above all, it is suggested that designers, in order to design something that affords use to others, engage in second-order understanding. Originality/value – The paper makes several radical suggestions that face likely rejection by traditionalists but acceptance by cyberneticians and designers attempting to make a contribution to contemporary information society.
Article
This article reviews the theory of successful intelligence and attempts to construct-validate the theory of successful intelligence. It describes four distinct converging operations that have been used in these attempts. Two sets involve internal validation of the structure of the theory and two sets involve external validation of the theory with outside criteria. The internal validation operations involve information-processing (componential) analyses and both exploratory and confirmatory factor analyses. The external validation operations involve correlational analysis and analyses of instructional interventions based on the theory. The results are generally supportive of the theory and suggest that conventional conceptions of intelligence may be too narrow. The theory is of use in consulting because it broadens the scope of skills one looks for in seeking "intelligent" people for hiring, retention, and promotion and in assessing a person's ability to do his or her current job. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
In this article, I shall examine the way in which information was central to the development of cybernetics. I particularly contrast the different uses of the concept by two key participants in that development – Norbert Wiener, who argued that information was a quasi-physical concept related to the degree of organisation in a system; and Gregory Bateson, who considered information to be a process of human meaning formation. I suggest that these two authors exemplify a hard and a soft strand of cybernetics, present from the start of the field. I trace these two different interpretations of information as they developed in the cybernetics movement, and the way they have fed into more recent understandings of information within cybernetics and related fields, especially family therapy and sociology. I also relate these ideas to the cyborg theory of Donna Haraway and others.