Design Recommendations for Self-Monitoring in the
Workplace: Studies in Software Development
ANDRE N. MEYER, University of Zurich
GAIL C. MURPHY, University of British Columbia
THOMAS ZIMMERMANN, Microsoft Research
THOMAS FRITZ, University of Zurich and University of British Columbia
One way to improve the productivity of knowledge workers is to increase their self-awareness about productivity
at work through self-monitoring. Yet, little is known about expectations of, the experience with, and the impact
of self-monitoring in the workplace. To address this gap, we studied software developers, as one community of
knowledge workers. We used an iterative, user-feedback-driven development approach (N=20) and a survey
(N=413) to infer design elements for workplace self-monitoring, which we then implemented as a technology
probe called WorkAnalytics. We field-tested these design elements during a three-week study with software
development professionals (N=43). Based on the results of the field study, we present design recommendations
for self-monitoring in the workplace, such as using experience sampling to increase the awareness about work
and to create richer insights, the need for a large variety of different metrics to retrospect about work, and that
actionable insights, enriched with benchmarking data from co-workers, are likely needed to foster productive
behavior change and improve collaboration at work. Our work can serve as a starting point for researchers and
practitioners to build self-monitoring tools for the workplace.
CCS Concepts: • Human-centered computing → User studies; Field studies; • Software and its engineering → Software creation and management;
Additional Key Words and Phrases: Quantified Workplace, Self-Monitoring, Productivity Tracking, Personal
Analytics, Workplace Awareness
ACM Reference format:
Andre N. Meyer, Gail C. Murphy, Thomas Zimmermann, and Thomas Fritz. 2017. Design Recommendations
for Self-Monitoring in the Workplace: Studies in Software Development. Proc. ACM Hum.-Comput. Interact. 1,
2, Article 79 (November 2017), 24 pages.
1 Introduction
The collective behavior of knowledge workers at their workplace impacts an organization's culture [ ], success [ ] and productivity [ ]. Since it is a common goal to foster productive behavior at work, researchers have investigated a variety of factors and their influence on knowledge workers'
This work was funded in part by Microsoft, NSERC and SNF. Authors’ addresses: A. N. Meyer, T. Fritz, Department of Infor-
matics, University of Zurich, Zurich, Switzerland, email:,; G. C. Murphy, T. Fritz, Depart-
ment of Computer Science, University of British Columbia, Vancouver, Canada, emails:,;
T. Zimmermann, Empirical Software Engineering Group, Microsoft Research, Redmond, US, email:
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee
provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the
full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored.
Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior
specific permission and/or a fee. Request permissions from
© 2017 Copyright held by the owner/author(s). Publication rights licensed to Association for Computing Machinery.
Proc. ACM Hum.-Comput. Interact., Vol. 1, No. 2, Article 79. Publication date: November 2017.
79:2 A. Meyer et al.
[Figure 1 content: Phase 1 (informed by related work, an initial survey and pilots) identified three design elements — A. supporting various individual needs, B. active user engagement, C. enabling more multi-faceted insights — which were evaluated in a Phase 2 field study using WorkAnalytics as a technology probe, yielding six design recommendations: A.1 high-level overviews and interactive features to drill down into details best support retrospecting on work; A.2 interest in a large and diverse set of measurements and correlations within the data; B.1 experience sampling increases the self-awareness and leads to richer insights; B.2 reflecting using the retrospection creates new insights and helps to sort out misconceptions; C.1 natural language insights are useful to understand multi-faceted correlations; C.2 insights need to be concrete and actionable to foster behavior change.]
Fig. 1. Summary of the Two-Phase Study Describing the Process.
behavior and productivity, including the infrastructure and office environment [ ], the interruptions from co-workers [ ], and the teams' communication behaviors [ ]. Yet, knowledge workers are often not aware of how their actions contribute to these factors and how they impact both their own productivity at work and the work of others [56].
One way to improve knowledge workers' awareness of their own behavior and foster productive behavior is to provide them with the means to self-monitor and to reflect about their actions, for example through visualizations [ ]. This type of self-monitoring approach has been shown to foster behavior change in other areas of life, such as physical activity (e.g., [ ]), health (e.g., [ ]) and nutrition (e.g., [ ]). Existing efforts to map the success of these self-monitoring approaches to the workplace have largely focused on tracking and visualizing data about computer use [ ]. Although research has shown that self-monitoring at work can be valuable in increasing the awareness about a certain aspect of work, such as time spent in applications [ ] or distracting activities [ ], little is known about knowledge workers' expectations of and experience with these tools [ ]. The lack of research about what knowledge workers need from these tools may be one reason why many existing solutions see low engagement and only short-term use overall [ ]. Furthermore, most of these approaches did not consider collaborative aspects of work, such as instant messaging, email or meetings.
We address these gaps by aiming to better understand what information and features knowledge workers expect in workplace self-monitoring tools. To make our investigations tractable, we focus on one community of knowledge workers, software developers, before generalizing to a broader range of knowledge workers in the future. We study software developers due to their extensive use of computers to support both their individual and collaborative work, including the use of issue trackers for collaborative planning [ ], code review systems for shared feedback gathering [ ], and version control systems for co-editing artefacts [ ]. Software developers are also an attractive target given the frequent interest of this community to continuously improve their work and productivity [ ]. Furthermore, software developers pursue a variety of different activities at work [ ] that vary considerably across work days and individuals [ ]. For our investigations, this combination of diversity in activity, similarity in domain and extensive use of computers makes software developers an ideal population for considering self-monitoring in the workplace.
To determine a set of design recommendations for building workplace self-monitoring tools, we followed a mixed-method approach, which is summarized in Figure 1. Phase 1 of our approach started with an investigation of software developers' expectations of and requirements for measures to self-monitor their work. A review of related work indicated barriers that have been identified towards the adoption of self-tracking technologies at the workplace, including not fully understanding users' needs [ ], not knowing which measures users are interested in [ ], and not providing users with a holistic understanding of their work behavior [ ]. To overcome barriers associated with appropriate measures, we analyzed previous work on measures of software development productivity (e.g., [ ]) and designed and developed a prototype, called WorkAnalytics, that captures software development measures, allows software developers to self-monitor their work patterns and provides a retrospective view to a developer of their work day and work week.
We received feedback on the prototype through a pilot study with 20 participants and 5 iterations.
Based on what we learned from the pilots, we conducted a study to learn about the design elements,
including measures, needed in a self-monitoring tool for software development. We received input
from 413 software development professionals for the survey. An analysis of the pilot and survey data indicated three design elements needed to build self-monitoring tools for a workplace: A) supporting various individual needs for data collection and representation, B) enabling active user engagement, and C) enabling more insights on the multi-faceted nature of work.
In phase 2, we then refined the prototype to accommodate these design elements and conducted
a field study involving 43 professional software developers using the refined prototype for three
weeks. The refined prototype, which we refer to as WorkAnalytics, captures information from various
individual aspects of software development work, including application use, documents accessed,
development projects worked on, websites visited, as well as collaborative behaviors from attending
meetings, and using email, instant messaging and code review tools. In addition, WorkAnalytics prompts a user to reflect on their work periodically and to self-report their productivity based on their individual definition. To enable more multi-faceted insights, the captured data is visualized in a daily retrospection (see Figure 2), which provides a higher-level overview in a weekly summary, and allows users to relate various data with each other.
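For illustration, the periodic self-report prompting can be sketched as follows. WorkAnalytics itself is a C#/.NET application; this minimal Python sketch only models the scheduling logic, and the 60-minute cadence, the 1-7 rating scale, and all names are assumptions rather than the tool's actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Optional, Tuple

PROMPT_INTERVAL = timedelta(minutes=60)   # assumed cadence between self-report prompts
IDLE_THRESHOLD = timedelta(minutes=2)     # mirrors the tracker's 2-minute idle rule

@dataclass
class ExperienceSampler:
    """Decides when to show a brief productivity self-report prompt."""
    last_prompt: Optional[datetime] = None
    reports: List[Tuple[datetime, int]] = field(default_factory=list)

    def should_prompt(self, now: datetime, last_input: datetime) -> bool:
        # Never interrupt an idle user: a prompt shown to an empty desk is wasted.
        if now - last_input > IDLE_THRESHOLD:
            return False
        if self.last_prompt is None:
            return True
        return now - self.last_prompt >= PROMPT_INTERVAL

    def record(self, now: datetime, rating: int) -> None:
        # Store the self-reported productivity rating (assumed 1-7 Likert scale).
        if not 1 <= rating <= 7:
            raise ValueError("rating must be 1..7")
        self.last_prompt = now
        self.reports.append((now, rating))
```

The idle check reflects a design choice implied above: prompts are only useful while the person is actively working, so sampling pauses when no input has been observed recently.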
From the field study, we derived six design recommendations, summarized in Figure 1. For
instance, we learned that a combination of self-reflection on productivity using self-reports, and
observations made from studying the insights in the retrospection enhances participants’ awareness
about the time spent on various activities at work, about their collaboration with others, and about
the fragmentation of their work. In this paper, we report on these six design recommendations and
further requests made by participants for features to help them turn retrospective information into
action. For instance, participants requested recommendation tools to help them better plan their work,
improve their team-work and coordination with others, block out interruptions, and increase their productivity.
This paper provides the following main contributions:
- It demonstrates that self-monitoring at work can provide novel insights and can help to sort out misconceptions about work activities, but also highlights the need for information presented to be concrete and actionable, rather than simply descriptive.
- It demonstrates the value of brief and periodic self-reports to increase awareness of work and productivity for software developers.
- It presents a set of measurements specific to software development that professional software developers report to provide the most value to increase awareness of their work, ranging from the time spent doing code reviews to the number of emails received in a work day.
This paper is structured as follows: We first discuss related work before we present how we
identified design elements for self-monitoring in the workplace, and how we incorporated and
evaluated them using WorkAnalytics as a technology probe. Subsequently, the findings and distilled
design recommendations are presented. Finally, we discuss our findings with respect to long-term
user engagement, potential impact on individuals and the collaboration with their teams, and the
generalizability of our results.
2 Related Work
This section provides background on the rise of approaches for self-monitoring various aspects of life and work, and barriers towards the adoption of these self-tracking technologies.
2.1 Self-Monitoring to Quantify Our Lives
In the health domain, wearable self-monitoring devices have proliferated in recent years thanks to their miniaturization [ ], and are used for tracking physical activity [ ], emotional states [ ], stress [ ], sleep [ ] and diets [ ]. This self-monitoring and reflection leads to increased self-awareness, which helps people recognize bad habits in their behavior [14], and often promotes deliberate or unconscious behavior changes [ ], so-called reactivity effects [ ]. For example, physical activity trackers, such as the Fitbit [ ], motivate users to a more active and healthy life-style [ ].
The Transtheoretical Model (TTM) [ ], a well-established theory of behavior change processes, describes behavior change as a sequence of stages that a person moves through until a behavior change happens and can be maintained. Self-awareness is one of the processes that lets people advance between stages. In particular, it helps people to move from being unaware of the problem behavior (precontemplation stage) to acknowledging that the behavior is a problem and intending to improve it (contemplation stage). Self-monitoring tools have been shown to help create an understanding of the underlying causes of problematic behavior, to point to a path towards changing the behavior to a more positive one, and to help maintain and monitor the behavior change (e.g. [ ]). Researchers have also evaluated the social aspects of self-monitoring systems and found that the sharing of data with acquaintances or strangers can be a powerful and durable motivator, but raises privacy concerns due to the sensitivity of the shared data [26, 66].
With our work, we aim to investigate how we can map the success of these approaches to software
developers’ work, and learn more about their expectations of and experience with self-monitoring
tools for the workplace and the impact they may have on productivity and behavior.
2.2 Designing and Evaluating Self-Monitoring Tools for Work
In addition to work on quantifying many aspects of a person’s life, there is a growing body of HCI
research that focuses on quantifying aspects of work and promoting more productive work behaviors
with self-monitoring techniques. Many of these approaches focus on the time spent in computer applications [ ], the active time on the computer [ ], or work rhythms [ ]. Some approaches specifically target the activities of software developers in integrated development environments (e.g., Codealike [ ], WatchDog [ ] and Wakatime [ ]). Few of these tools have been evaluated (e.g., [ ]), limiting our knowledge of the overall value of these tools to users, particularly of which information is of value to users and whether the approaches can affect user behaviour. As described by Klasnja et al. [ ], it is often feasible to evaluate the efficacy of a self-monitoring tool in a qualitative way to identify serious design issues early, while still seeing trends in how behaviour might change in the long-term. In this paper, we follow this recommendation, focusing on facilitating the reasoning and reflection process of a knowledge worker by increasing self-awareness about the monitored aspect of work [ ]. We leave an assessment of whether the design recommendations we provide can be embodied in a tool to change user behaviour to future work.
To provide a starting point for building self-monitoring tools targeting software developers at work
and evaluate their potential impact on behaviors at work, we conducted a three-week user study
to investigate the efficacy of the design elements that we identified from related work, five pilots,
and a survey, using WorkAnalytics as a technology probe. To our knowledge, this is also the first approach that focuses on raising developers' awareness about their collaborative activities, such as gaining insights about emailing, meetings, and code reviews.
Previous research has also discovered that users rarely engage with the captured data, resulting in a low awareness and reducing chances for a positive behavior change when using a self-monitoring tool [ ]. We compiled and categorized a list of barriers related work has identified towards the adoption of self-monitoring technologies at the workplace:
Not understanding user needs: Research has shown that knowledge workers' needs for monitoring their computer use vary and that little is actually known about the measures they are interested in [ ]. Users sometimes also have too little time for a proper reflection on the data, or insufficient motivation to use the tool, which is likely one reason they often stop using it after some time [ ]. This emphasizes the importance of understanding users' needs and expectations about how self-monitoring tools should work and what measures they should track, to increase the chance that people try such a tool and use it over extended periods.
Lack of data context: Most tools we found miss the opportunity to provide the user with a more holistic understanding and context of the multi-faceted nature of work, as they only collect data about a single aspect, e.g., the programs used on the computer [ ]. This makes it difficult for users to find correlations between data sets and, thus, limits the insights they can gain. Behavior change cannot be modelled based on just a few variables, as the broader context of the situation is necessary to better understand the various aspects influencing work behavior and productivity [ ]. To overcome this, Huang et al. [ ] propose to integrate these self-monitoring approaches into existing processes or tools and place them into an already existing and well-known context, which makes it easier for users to engage in ongoing tool use. Choe et al. [ ] further suggest to track many things when users first start a self-monitoring initiative, and then let them decide which measures are necessary for their context to reflect on and improve their behavior.
Difficulties in interpreting the data: Choe et al. [ ] and Huang et al. [ ] argue that difficulties in making sense of, organizing or interpreting the data result in a lower adoption of self-monitoring approaches, as users will stop using them. For example, Galesic and Garcia-Retamero [ ] found that more than 40% of Americans and Germans lack the ability to understand simple graphs, such as bar or pie charts, which could be a problem for self-monitoring tools as they often visualize the data. To overcome this issue, Bentley and colleagues [ ] propose to provide insights from statistically significant correlations between different data types in natural language, which helped participants in their study to better understand the data. Another obstacle to efficiently interpreting data in personal informatics systems is information overload, as described by Jones and Kelly [ ]. They found that users generally have a higher interest in multi-faceted correlations (correlations between two distinct data categories) than in uni-faceted correlations, as the former reveal more "surprising" and "useful" information. Presenting such correlations could thus help to reduce information overload and provide more relevant insights to users.
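The idea of surfacing only cross-category (multi-faceted) correlations as natural-language sentences can be sketched as follows. This is a simplified illustration in Python: the category and metric names are made up, the |r| ≥ 0.6 cut-off is an arbitrary assumption, and Bentley et al. additionally test statistical significance, which is omitted here for brevity.

```python
from itertools import combinations, product
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length numeric series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def cross_category_insights(categories, threshold=0.6):
    """categories: dict mapping category name -> {metric name: daily values}.
    Emits natural-language sentences ONLY for correlations between metrics of
    two DISTINCT categories (multi-faceted), skipping within-category pairs."""
    insights = []
    for cat_a, cat_b in combinations(sorted(categories), 2):
        for (ma, xs), (mb, ys) in product(categories[cat_a].items(),
                                          categories[cat_b].items()):
            r = pearson(xs, ys)
            if abs(r) >= threshold:
                direction = "more" if r > 0 else "less"
                insights.append(
                    f"On days with more {ma}, you tend to have "
                    f"{direction} {mb} (r={r:.2f})."
                )
    return insights
```

For example, feeding daily "meeting hours" (collaboration category) and "focused hours" (focus category) into `cross_category_insights` would yield a single sentence relating the two, while two metrics inside the same category would produce nothing, matching Jones and Kelly's preference for multi-faceted insights.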
Privacy concerns: Another potential pitfall of self-monitoring tools is data privacy, as many users are afraid the data might have a negative influence on their life, such as fearing that their managers may learn how well they sleep, or that their insurance agency can track their activity. Most privacy concerns can be reduced by letting users decide what and how they want to share their data, by obfuscating sensitive data when it is being shared, by abstracting visualizations, and by letting users opt out of applications when they think the gained benefits do not outweigh the privacy risks [ ].
Besides learning more about software developers’ expectations of and experience with a self-
monitoring tool for work and productivity, we used our iterative, feedback-driven development
process and a survey to investigate how these barriers could be tackled. Based on the findings, we
incorporated the identified design elements into our self-monitoring approach WorkAnalytics and then
used it to evaluate how the design elements affect developers’ awareness on work and productivity.
Subsequently, we distilled design recommendations for building self-monitoring tools for developers' work.
Table 1. Overview of the Two-Phase Study Describing the Method, Participants, their Employer and Duration.

Phase 1: Identification of Design Elements for Self-Monitoring at Work
(iterative, feedback-driven development of WorkAnalytics)

  Method           # Partic.  Company  Comp. Size  Country      Duration
  Pilots            20                                          2-4 work weeks
    Pilot 1          6        A        ca. 3000    Canada       2 work weeks
    Pilot 2          2        B        ca. 150     Canada       2 work weeks
    Pilot 3          3        C        4           Switzerland  2 work weeks
    Pilot 4          5        D        ca. 50000   USA          4 work weeks
    Pilot 5          4        A        ca. 3000    Canada       3 work weeks
  Initial Survey   413        D        ca. 50000   USA          sent out 1600 invitations

Phase 2: Evaluation of the Design Elements for Self-Monitoring at Work
(using WorkAnalytics as a technology probe)

  Method                      # Partic.  Company  Comp. Size  Country  Duration
  Field Study                  43        D        ca. 50000   USA      3 work weeks
    Email Feedback             34                                      arbitrarily during the study
    Intermed. Feedback Survey  26                                      after the first week
    Data Upload                33                                      at the end of the study
    Final Survey               32                                      following the data upload
3 Phase 1: Identification of Design Elements for Self-Monitoring at Work
To identify design elements for building personalized awareness tools for self-monitoring software developers' work, we defined the following research question:
What information do software developers expect and need to be aware of and how should this information be presented?
To answer this research question, we first reviewed literature on design practices applied in existing self-monitoring tools and on measures that software developers are interested in. We also studied the barriers related work has identified towards the adoption of self-tracking technologies at the workplace, as described in the previous section. Based on our review, we defined design elements and incorporated them into our own self-monitoring prototype for work, called WorkAnalytics. We then studied software developers' use of and experience with WorkAnalytics at work, and refined the design elements and tool based on feedback we received through five pilots and a survey. In what follows, we describe the goals, method and participants of this first phase. Table 1 shows an overview of the pilots and survey that we conducted and situates them within the whole study procedure. The supplementary material [ ] contains a list of questions for all surveys and interviews that we conducted as well as screenshots of how WorkAnalytics looked at various stages until the final version.
3.1 Pilots
To examine the features and measurements software developers are interested in and engage with for
self-monitoring their work from using them in practice, rather than from doing this hypothetically
through an interview or survey, we conducted a set of pilots. Our method has strong similarities to the
Design Based Research process, where the focus is an iterative analysis, design and implementation,
based on a collaboration between practitioners and researchers in a real-world setting that leads to
design principles in the educational sector [ ]. First, we implemented a self-monitoring prototype, WorkAnalytics, incorporating visualizations of work-related measures that we identified to be of interest to software developers in previous research from running a survey with 379 participants [ ]. We then conducted a total of five pilots at four companies (see Phase 1 in Table 1 for more details). For each pilot, we had a small set of software developers use WorkAnalytics in situ, gathered their feedback, and used it to refine and improve the prototype before running the next pilot. Each pilot study ran between 2-4 work weeks. To gather feedback, we conducted interviews with each participant at the end of the pilot period. These interviews were semi-structured, lasted approximately 15 minutes,
and focused on what participants would like to change in the application and what they learnt from
the retrospection. To find and address problems early during the development, we also conducted
daily 5 minute interviews with each participant during the first three pilots. In these short interviews,
we gathered feedback on problems as well as changes they would like to be made to the application,
and feedback on the visualizations, their representativeness of participants’ work and their accuracy.
Throughout this phase, we rolled out application updates with bug-fixes, updated visualizations and
new features every few days. We prioritized user requests based on the feasibility of implementation and the number of requests by participants. After 5 pilots we decided to stop, since we were no longer gathering new feedback and the application was running stably.
Participants. For the pilots, we used personal contacts and ended up with a total of 20 professional software developers, 1 female and 19 male, from four different companies of varying size and domains (Table 1). 30% reported their role to be a team lead and 70% an individual contributor, an individual who does not manage other employees. Participants had an average of 14.2 years (SD 9.6, ranging from 0.5 to 40) of professional software development experience.
3.2 Initial Survey
Following the pilot studies, we conducted a survey 1) to examine whether the measures and features that developers are interested in using for self-monitoring within the target company (company D) overlap with what we had implemented, 2) to learn how WorkAnalytics needed to be adapted to fit into the target company's existing technology set-up and infrastructure, as well as 3) to generate interest in participating in our field study. In the survey, we asked software developers about their expectations and the measurements that they would be interested in for self-monitoring their work. We advertised the survey at company D, sending invitation emails to 1600 professional software developers. To incentivize participation, we held a raffle for two 50 US$ Amazon gift certificates. The initial survey questions can be found in the supplementary material [ ]. To analyze the survey, we used methods based on Grounded Theory [ ] to analyze the textual data that we collected. This included Open Coding to summarize and label the responses, Axial Coding to identify relationships among the codes, and Selective Coding to factor out the overall concepts related to what measurements and features participants expect and how their work environment looks.
Participants. From the 1600 invitation emails, we received responses from 413 software developers (response rate: 25.8%), 11% female, 89% male. 91.5% of the participants reported their role to be individual contributor, 6.5% team lead, 0.2% (1 participant) manager, and 1.8% stated they are neither. Participants had an average of 9.6 years (SD 7.5, ranging from 0.3 to 36) of professional software development experience.
4 WorkAnalytics
To answer our first research question, we analyzed related work, investigated developers' experience with pilots of WorkAnalytics and analyzed the initial survey. The analysis showed that a design for a work self-monitoring approach should: A) support various individual needs, B) foster active user engagement, and C) provide multi-faceted insights into work. We incorporated these three design elements into a technology probe, WorkAnalytics.
WorkAnalytics was built with Microsoft's .NET framework in C# and can be used on the Windows 7, 8 and 10 operating systems. We created WorkAnalytics from the ground up and did not reuse an existing, similar application, such as RescueTime [ ], as we wanted to freely extend and modify all features and measurements according to our participants' feedback. A screenshot of the main view of the application, the retrospection, is shown in Figure 2. We open-sourced WorkAnalytics, opening it up to contributions on GitHub 1.

Fig. 2. Screenshot of the Daily Retrospection in WorkAnalytics.
4.1 A: Supporting Various Individual Needs
Measurement Needs.
The analysis of our initial survey showed that participants are generally
interested in a large number of different measures when it comes to the self-monitoring of work.
We asked survey participants to rate their interest in a list of 30 work-related measures on a five-point Likert-scale from 'extremely interesting' to 'not at all interesting'. We chose these measures based on our findings from the pilot phase, on what we were capable of tracking, and on related work.
The list includes measures on time spent in programs, meetings, and specific activities, the amount
of code written, commits done, code reviews completed, emails sent and received, and the amount
of interruptions experienced and focus at work. Each measure was rated as very or extremely interesting by at least 20% and up to 74% of the participants. At the same time, the combination of measures that each participant was interested in varied greatly across participants. For instance, only 6 of the 30 measures were rated as very or extremely interesting by 60% or more, and 52% of participants were interested in nearly all measures while 25% only wanted very few measures for self-monitoring at work. Overall, the greatly varying interest and the interest in a large number of measures for self-monitoring supports earlier findings by Meyer et al. [ ] in the work domain and Choe et al. [ ] in the activity and health domain. The complete list of the 30 work-related measures, including participants' ratings about their interest in the measures, can be found in the supplementary material [65].
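Aggregating such Likert ratings into the per-measure interest percentages reported above is straightforward. The sketch below (Python, with made-up measure names; the label wording is an assumption) computes the share of participants who rated each measure as very or extremely interesting.

```python
from collections import Counter

# Top-2 box of the assumed 5-point interest scale.
TOP2 = {"very interesting", "extremely interesting"}

def interest_summary(responses):
    """responses: list of dicts mapping measure name -> Likert label.
    Returns measure -> fraction of respondents rating it very/extremely
    interesting; respondents may skip measures, so totals are per measure."""
    top2_counts = Counter()
    totals = Counter()
    for resp in responses:
        for measure, rating in resp.items():
            totals[measure] += 1
            if rating in TOP2:
                top2_counts[measure] += 1
    return {m: top2_counts[m] / totals[m] for m in totals}
```

With four respondents of whom three rate "time in meetings" in the top two boxes, the summary would report 0.75 for that measure, mirroring the "at least 20% and up to 74%" figures above.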
To support these individually varying interests in work measures, we included a wide variety of
measures in our application and allowed users to individually select the measures that were tracked
and visualized. To capture the relevant data for these measures, WorkAnalytics features multiple data
trackers: the Programs Used tracker that logs the currently active process and window titles every
time the user switches between programs or logs ‘idle’ in case there was no user input for more than
2 minutes; the User Input tracker, to collect mouse clicks, movements, scrolling and keystrokes (no
key-logging, only time-stamp of any pressed key); and, the Meetings and Email trackers, to collect
data on calendar meetings and emails received, sent and read, using the Microsoft Graph API of the
Office 365 Suite [4].
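To make the tracker logic concrete, the following is a minimal sketch (in Python, purely for illustration; WorkAnalytics itself is a Windows application, and all names below are ours, not the tool’s) of how the Programs Used tracker could record a program switch and insert an ‘idle’ entry after more than 2 minutes without user input:

```python
from dataclasses import dataclass

IDLE_THRESHOLD_SECONDS = 120  # log 'idle' after 2 minutes without user input


@dataclass
class LogEntry:
    timestamp: float   # seconds since tracking started
    process: str       # active process name, or 'idle'
    window_title: str


def record_switch(log, now, process, window_title, last_input_time):
    """Append an entry for a program switch; if the user produced no input
    for longer than the threshold, first log an 'idle' entry starting at
    the moment the idle period began."""
    if now - last_input_time > IDLE_THRESHOLD_SECONDS:
        log.append(LogEntry(last_input_time + IDLE_THRESHOLD_SECONDS, "idle", ""))
    log.append(LogEntry(now, process, window_title))
    return log
```

From such a switch log, time-spent-per-program measures follow by taking the differences between consecutive timestamps.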
Proc. ACM Hum.-Comput. Interact., Vol. 1, No. 2, Article 79. Publication date: November 2017.
The initial version only included the Programs Used tracker, similar to RescueTime [58]. The Programs Used tracker allows the extraction of a multitude of measurements participants wished for, including the time spent in specific programs and activities, such as development-related activities (e.g. coding, testing, debugging, version control, and development projects worked on) and researching the web, as well as specific code files and documents worked on and websites visited. After the first
two pilots, the User Input tracker was added, since 3 of the first 8 participants were interested in
knowing when they were producing (e.g. typing on the keyboard) and consuming (e.g. scrolling
through text with the mouse) data. Running the initial survey highlighted participants’ interest in
knowing more concrete details about their collaborative activities, such as planned and unplanned
meetings (41%), reading and writing emails (44%), and doing code reviews (47%), which is the
reason they were added to the final version of WorkAnalytics before running the field study.
Privacy Needs. A recurring theme during the pilots and the initial survey was participants’ need to keep sensitive workplace data private. Participants feared that sharing data with their managers or team members could have severe consequences for their employment or increase pressure at work.
To account for privacy needs at work, WorkAnalytics stores all logged data in a local database on the user’s machine, rather than collecting it centrally on a server. This enables users to remain in control of the captured data. To further support individual needs, the application provides actions to manually enable and disable data trackers, pause the data collection, and access (and alter) the raw dataset, which two participants did during the field study.
4.2 B: Active User Engagement
To be able to generate deeper insights on a user’s work and productivity and encourage users to
actively reflect upon their work periodically, we decided to include a self-reporting component.
Several participants of our initial survey stated interest in self-reporting some data about work that
cannot be tracked automatically, in particular more high-level measures on productivity. Furthermore,
related work found that users rarely engage by themselves with data captured in a self-monitoring tool, which reduces awareness and the chances of positive change [ ]. To address this point, we added a pop-up to our application that appeared periodically, by default once per hour, and prompted users to self-report their perceived productivity, the tasks they worked on, the difficulty of these tasks,
and a few other measures. During the first two pilots of our iterative development phase, we found that while the self-reporting might be valuable, it took participants several minutes to answer, and 45% of our participants reported it to be too intrusive, interrupting their work and decreasing their productivity. As a result, many participants regularly postponed the pop-up or disabled it, which resulted in less meaningful observations being presented in the visualization and lower satisfaction among participants.
To minimize intrusiveness, yet still encourage periodic self-reflection, we reduced the pop-up to a single question that asks participants to rate their perceived productivity on a 7-point Likert scale (1: not at all productive, 7: very productive) once per hour. Participants were able to answer the question with a single click or keystroke. See Figure 3 for a screenshot of the pop-up. In case the pop-up appeared at an inopportune moment, participants were able to postpone it for a few minutes, an hour, or a whole work day. To further adapt the self-reports to individual preferences, each participant was able to alter the interval at which pop-ups appeared or disable/enable them.
This interval was chosen to balance intrusiveness: while the first two pilots used an interval of 90 minutes, which made it harder for participants to remember what exactly had happened in that period, most participants preferred to reflect on their productivity once an hour.
Fig. 3. Screenshot of the Self-Reporting Pop-Up to Collect Perceived Productivity Data and Engage
4.3 C: Enabling More Multi-Faceted Insights
Related work found that self-monitoring tools often fail to provide sufficient contextual information and a more holistic picture of the monitored behavior that also allows the user to relate the data [ ]. Similarly, 35% of pilot study participants asked for weekly summaries to get a more complete picture of the data and a way to compare and relate different work days or weeks with each other. In the initial survey, 41% of the participants wished for a visualization to drill down into the data and learn where exactly they spend their time.
To address this requirement of enabling a more complete picture of the data in our application, we focused on three aspects: providing sufficient contextual information, offering a higher-level overview, and providing ways to relate different data with each other. To provide sufficient contextual information, we added several visualizations to the daily retrospection that illustrate how the time of a work day was spent:
- Top Programs Used: Pie chart displaying the distribution of time spent in the most used programs of the day (Figure 2A).
- Perceived Productivity: Time line illustrating the user’s self-reported productivity over the course of the day (Figure 2B).
- Email Stats: Table summarizing email-related data, such as the number of emails sent and received in a work day (Figure 2C).
- Programs & Productivity: Table depicting the seven most used programs during the day and the amount of time the user self-reported feeling productive versus unproductive while using them (Figure 2D).
- Time Spent: Table showing a detailed break-down of how much time was spent on each information artefact during the work day, including websites visited, files worked on, emails sent/read, meetings in the calendar, as well as code projects and code reviews worked on (Figure 2E).
- Active Times: Line chart visualizing the user’s keyboard and mouse input over the course of the day. We aggregated the input data by assigning heuristic weights to each input stream that we determined based on our own experience and trials in pilots; e.g., one mouse click has approximately as much weight assigned as three key strokes (Figure 2F).
- Longest Time Focused: Minutes that a user spent the longest inside any application without switching (Figure 2G).
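As an illustration, the Active Times weighting and the Longest Time Focused measure could be computed as sketched below (Python for illustration only; apart from the stated click-to-keystroke ratio, the weights are our assumptions, and all function names are hypothetical):

```python
# Heuristic weights for aggregating input streams into one activity value.
# The 3:1 click-to-keystroke ratio follows the heuristic described above;
# the scroll and mouse-move weights are illustrative assumptions.
KEYSTROKE_WEIGHT = 1.0
CLICK_WEIGHT = 3.0
SCROLL_WEIGHT = 1.0
MOVE_WEIGHT = 0.1


def activity_score(keystrokes, clicks, scrolls=0, moves=0):
    """Aggregate one time bin's input counts for the Active Times chart."""
    return (KEYSTROKE_WEIGHT * keystrokes + CLICK_WEIGHT * clicks
            + SCROLL_WEIGHT * scrolls + MOVE_WEIGHT * moves)


def longest_focus(switches, end_of_day):
    """Longest time spent in any one application without switching,
    given a chronological list of (minute, program) switch events."""
    boundaries = [minute for minute, _ in switches] + [end_of_day]
    return max(b - a for a, b in zip(boundaries, boundaries[1:]))
```

Per-interval scores can then be plotted over the day, and the focus value reported directly in the retrospection.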
For a higher-level overview, we added a weekly summary of the data, which shows how often which programs were used on each day of the week, the average self-reported productivity per day, and the productive versus unproductive time spent on the 7 most used programs during the week (same as Figure 2D). The supplementary material [65] contains a screenshot and description of the weekly retrospection.
Finally, to ease the correlation of data, as desired by 19% of the participants in the initial survey, we implemented a feature that allows users to pick days or weeks (Figure 2H) and compare them with each other side-by-side, and we provide a view that correlates the most used programs during a day with productivity (Figure 2D). In addition to these features, we automatically generated
personalized insights. Personalized insights are automatically generated aggregations and correlations
within the captured data and presented in natural language. These personalized insights are similar
to the correlation and presentation of data that Bentley et al. [ ] have shown to increase users’ understanding of complex connections in the area of health-monitoring and well-being. To create
the personalized insights, we first created a matrix where we correlated each measure with itself
(i.e. average per day), with the time of the day (i.e. morning/afternoon), and with the productivity
self-reports. To avoid information overload, we only selected insights that might be interesting to users
by discarding simple insights from the matrix that were already easily perceptible in the retrospection
(e.g. the number of emails sent per day or user input over the day) and removed one insight that
we could not produce due to the format of the collected data (number of emails sent/received over
the day). For each pair, we created one or more sentences that correlate the items with each other.
For example, from the pair ’self-reported productivity’ and ’time of day’, we created the sentence:
“You feel more productive in the [morning/afternoon]” (insight 14). Three of these personalized
insights address the participants’ focus, which is an abstracted measure for the time spent in a single
program before switching to another program. Participants were aware of the definition of focus, as
one of the visualizations in the daily retrospection used the same abstraction and included a definition
(Figure 2G). We created these personalized insights individually for each user and filtered the ones
that were not feasible, e.g. due to participants disabling certain data trackers. Since we wanted to
ensure that we had collected sufficient data before generating these personalized insights and that they were reasonable, we only included them in the final survey, after users shared their data logs with
us. Table 3 presents a list of the 15 personalized insights that resulted from this process. The matrix we created to select these insights is available and discussed in the supplementary material [65]. Future versions of WorkAnalytics will include the automatic generation of such personalized insights.
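To illustrate how such a natural-language insight can be derived from the collected data, the sketch below generates insight 14 from hourly productivity self-reports (Python for illustration; the function name and data shape are our assumptions, and the actual selection from the full correlation matrix involves more steps):

```python
from statistics import mean


def productivity_by_time_of_day(self_reports):
    """Derive insight 14 ('You feel more productive in the
    [morning/afternoon]') from (hour, rating) self-reports on the
    7-point productivity scale; returns None if either half of the
    day has no data."""
    morning = [rating for hour, rating in self_reports if hour < 12]
    afternoon = [rating for hour, rating in self_reports if hour >= 12]
    if not morning or not afternoon:
        return None  # not enough data to produce the insight
    period = "morning" if mean(morning) > mean(afternoon) else "afternoon"
    return f"You feel more productive in the {period}."
```

The other insight templates in Table 3 follow the same pattern: aggregate or correlate two measures, then instantiate a sentence template with the dominant value.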
To evaluate the design elements and to learn how software developers use and appreciate the identified features and measurements in practice, we formulated a second research question:
How do software developers use the measurements and features based on the identified design elements during their work, and what is their impact?
To answer the research question, we conducted a field study with WorkAnalytics as a technology
probe that implements the previously discussed design elements.
5.1 Participants
We recruited participants for this study by contacting the 160 software developers at company D that
took our initial survey and indicated their interest in participating. 33 of the 43 participants that signed
the consent form were recruited through this follow-up email, and 10 participants were recruited
through recommendations from other participants. The only requirements for participating in the
study were to be a software developer and to be using a work machine with the Windows operating
system. Participants were given two 10 US
meal cards at the end of the study for compensating their
efforts and were promised personalized insights into their work and productivity. All 43 participants
are professional software developers working in the same large software company (company D in the
pilots), three of them were female and 40 male. The roles, team sizes and projects varied across the
participants. 96.7% stated their role to be an individual contributor and 3.3% team lead. Participants
had an average of 9.8 years (SD=6.6, ranging from 0.5 to 30) of professional software development
experience. To avoid privacy concerns, we identified participants with a subject id and therefore
could not link their responses between the different feedback surveys, emails, and collected data from
WorkAnalytics. To get feedback on the usefulness of the different design elements from different
perspectives, we picked participants with and without previous experience with other self-monitoring
tools, such as Fitbit [24] or RescueTime [58].
5.2 Procedure
We designed this field study to last three work weeks. At the beginning of the period, we provided
participants with detailed information on the study procedure, the data we were going to collect, and
the features of WorkAnalytics. We then asked participants to install the application on their work
machine, continue their regular work day and answer the periodic self-reports when they appeared,
by default every 60 minutes. We asked them to contact us via email at any point in time in case they ran into an issue, had questions, or suggestions, which 34 participants did once or more. At any
point throughout the study, participants were able to change the time period or disable the pop-up
completely. Participants could also enable or disable any trackers that logged data for presentation in
the retrospection. After the first week, we sent out a short, intermediate feedback survey to collect
early feedback on the usefulness, suggestions for improvement, and participants’ engagement with
WorkAnalytics. 26 participants responded. The timing was chosen to make sure participants had
used the application for at least 3 to 5 work days, and the tool had captured enough data to show
visualizations from various work days.
Shortly before the end of the three work weeks of the study, we asked participants to share the
data that WorkAnalytics logged on their machine—the reported productivity ratings and the computer
interactions—if they were willing to. We also gave each participant the opportunity to obfuscate
any sensitive or private information that was logged, such as window titles or meeting subjects,
before uploading the data to our secured server. Of the 43 participants, 33 participants shared their
data with us, and three of them obfuscated the data before the upload. Due to the sensitivity of
the collected data, we did not try to convince participants to share the data and just mentioned
the additional insights they would receive when sharing it. We then used the data to automatically
generate aggregations and correlations within an individual participant’s data, which we will call
personalized insights in the following. At the end of the study period, we asked participants to fill
out a final survey, independently of whether they uploaded the data or not. The survey contained
questions on feature usage and usefulness, possible improvements, potential new measures, and
perceived changes in awareness about work and behavior. For participants that shared the collected
data with us, the survey also presented the personalized insights, automatically generated for each
participant, and questions about them. 32 of the 43 participants completed the final survey, including 5 that had not previously shared their computer interaction data. The questions from the
intermediate survey and final survey can be found in the supplementary material [65].
5.3 Data Collection and Analysis
Throughout the field study, we collected qualitative and quantitative data from participants. In
particular, the responses to the intermediate feedback survey, final survey, feedback received via
email, and the data that WorkAnalytics collected. Similar to our approach in the initial survey, we
used methods common in Grounded Theory. In this case, the Axial Coding step was also used to
identify higher-level themes after Open Coding each feedback item separately. Besides creating personalized insights from the collected computer interaction data, we used it to analyze participants’ engagement with the retrospection and their answering of the experience sampling productivity pop-up.
The computer interaction data spanned a period of between 9 and 18 work days (mean=13.5, SD=2.6). The findings of the analysis of the quantitative and qualitative data from our participants are discussed and then distilled into design recommendations in the next section.
To answer the second research question, we focus our analysis on the data collected about the use of WorkAnalytics. For each part, we first present the findings before summarizing the design recommendations that we inferred from interpreting the results. The design recommendations are mapped to one of the three design elements (A to C) and are presented in blue boxes to distinguish them from the findings.
6.1 Different Granularity of Visualizations
Most participants (70.4%) agreed that the collected data and measures were interesting and relevant
to them. Participants valued that the retrospection allowed them to get a high-level overview of the
data and also let them drill down into more detail:
“Sift through all this information and quickly find what’s critical and be able to determine what is furthering one’s goals and what [is] not (i.e. is a distraction).” - F19
Participants used, for instance, the pie chart on the programs executed (Figure 2A) and the active
times timeline (Figure 2F) to get an aggregated overview of the past work day, in particular which
activities most time was spent on and the most active and inactive times during the day, respectively.
When they wanted to further investigate their day and find out more specific details, participants
appreciated the availability of other visualizations:
“I like that [WorkAnalytics] captures who I am talking with in Skype or Google Hangouts [...]. I like the integration of Outlook in more detail.” - F42
Several participants (F13, F17, F18) reported having used the time spent table (Figure 2E) regularly
to gain deeper insights on with whom they communicate—through email, instant messaging and
meetings—and on which artefacts they spent time—document, website, code file, or email.
Design Recommendation A.1: For self-monitoring at work, users are interested in a quick as well as a deep retrospection on their work, which is best supported through high-level overviews with interactive features to drill down into details.
6.2 Interest in Diverse Set of Measurements
Participants had varying interests in the positive, negative or neutral framing of the data. For instance,
while some participants (F19, F25) wanted to learn about what went well, such as the tasks they
completed and how much they helped their co-workers, others were more interested in understanding
what went wrong:
“[...] focus more on things that prevent someone from being able to add business value, rather than arbitrary metrics like commit count, bug count, task completion, etc. [...] I would prefer [the application] to track things that I felt got in the way of being productive.” - F17
This framing effect in self-monitoring tools has recently been explored by Kim et al. [ ], who found that only participants in a negative framing condition improved their productivity, while positive framing had little to no impact.
Most participants (69%) wanted WorkAnalytics to collect even more data on other aspects of their
work to further personalize and better fit the retrospection to their individual needs. For instance, they
wanted more detailed insights into collaborative and communicative behaviors by integrating data
from and sharing data with other team members (6%) and generating insights into the time spent on
technical discussions or helping co-workers (6%). Participants were further interested in collecting
data from other work devices (13%), capturing even more coding related data (6%), such as tests
and commits, or more high-level measures, such as interruptions or progress on tasks (9%). 80% of
the participants were also interested in biometric data, such as heart rate or stress levels, 70% were
interested in physical activity data, such as sleep or exercise, and 50% were interested in location
based data, such as commute times or visited venues; all in combination with the already collected
work data. Similarly, roughly one third of the participants suggested extending the daily and weekly retrospection by adding additional visualizations and finer-grained aggregations, to better support them in making observations based on correlations and combinations of several measurements:
“[The] active times graph would be neat on the weekly retrospection so that I could get a sense of my most active time of the day without having to navigate through each day.” - F43
These very diverse requests for extending WorkAnalytics with further measures and visualizations
emphasize the need for personalizing the experience, to increase satisfaction and engagement.
Design Recommendation A.2: For self-monitoring one’s work, users are interested in a large and diverse set of data, even from outside of work, as well as in correlations within the data.
6.3 Increasing Self-Awareness with Experience Sampling
Participants actively engaged with the brief, hourly self-reports on productivity when they were working on their computer. Over the course of the study, participants self-reported their productivity regularly, on average 6.6 times a day (SD=3.8, min=1, max=23), and it usually took them just a couple of seconds, without actually interrupting their work. Two participants (6%) even increased the frequency to answer the pop-up every 30 minutes, while 3 (9%) of the 33 participants from whom we received data disabled the self-reports. This shows that most participants did not consider the experience sampling method we applied too intrusive.
When asked in the final survey about the value of and their experience with self-reporting their productivity, 59.2% of the participants agreed or strongly agreed that the brief self-reports increased their awareness of productivity and work (see Table 2 for more detail). The self-reports helped participants realize how they had spent their past hour at work and how much progress they had made on the current task:
“It makes me more conscious about where I spent my time and how productive I am.” - F08
Some participants used the pop-up to briefly reflect on whether they had used their time efficiently and whether they should consider changing something:
“The hourly interrupt helps to do a quick triage of whether you are stuck with some task/problem and should consider asking for help or taking a different approach.” - F11
The fact that WorkAnalytics does not automatically measure productivity, but rather lets users self-report their perceptions, was further valued by participants, as some do not think an automated measure can accurately capture an individual’s productivity, similar to what was previously found [53]:
“One thing I like about [WorkAnalytics] a lot is that it lets me judge if my time was productive or not. So just because I was in a browser or VisualStudio doesn’t necessarily mean I was being productive or not.” - F42
“I am much more honest about my productivity levels when I have to self-report, [rather] than if the software simply [...] decided whether or not I was productive.” - F15
These findings suggest that experience sampling is a feasible method for manually collecting data as long as users benefit from their self-reporting.
Design Recommendation B.1: Experience sampling in the form of brief and periodic self-reports is valuable to users, as it increases the awareness of their work and productivity and leads to richer insights.
6.4 Increasing Self-Awareness with a Retrospection
Participants frequently accessed the daily retrospection, yet the patterns of self-monitoring varied greatly across participants. On average, participants opened the retrospection 2.5 times per day (min=0, max=24) for a total of 0.85 minutes (SD=2.95, min=0, max=42.9), but both aspects varied a lot across participants, as the standard deviations and the minimum and maximum values show. All participants opened the retrospection more often in the afternoon (mean=1.9) than in the morning (mean=0.6). Yet, 34% of participants opened the application fewer than 5 times over the whole study period, while 28% used the retrospection at least once a day. Also, while 31% of participants mostly focused on the current day, the other 69% looked at and compared multiple work days. Many participants also looked at the weekly retrospection, but accessed it less often than the daily one.
While these results show that most participants actively reflected about their work using the retrospection, we also received feedback from 2 participants (6%) that they sometimes forgot the retrospection was available:
“I forgot I could even look at the retrospection! A new pop-up, maybe Friday afternoon or Monday morning, prompting me to review the week’s data would be really nice.” - F14
Overall, the retrospection increased the awareness of the participating software developers and provided valuable and novel insights that they were not aware of before. Participants commented on the retrospection providing novel insights on a variety of topics, such as how they spend their time at work collaborating or making progress on tasks, their productivity over the course of the day, or the fragmentation and context switches at work:
“Context switches are not the same as program switches, and I do *lots* of program switches. I still do a lot more context switches than I thought, but it doesn’t hurt my perceived productivity.” - F36
“[The] tool is awesome! It [...] helped confirm some impression I had about my work and provided some surprising and very valuable insights I wasn’t aware of. I am apparently spending most of my time in Outlook.” - F42
Reflecting about the time spent at work further helped participants sort out misconceptions they had about their work:
“I did not realize I am as productive in the afternoons. I always thought my mornings were more productive but looks like I just think that because I spend more time on email.” - F14
The survey responses presented in Table 2, based on a 5-point Likert scale (5: strongly agree, 1: strongly disagree), further support these findings. 81.5% of all survey participants reported that installing and running the application increased their awareness, and 59.2% agreed or strongly agreed that they learned something about their work and productivity, while only 11.1% did not. The responses also show that the retrospection helped participants in particular to learn how they spend their time (85.2% agreed or strongly agreed) and about their productive and unproductive times.
Design Recommendation B.2: Reflecting about work using a retrospective view provides novel and valuable insights and helps to sort out misconceptions about activities pursued at work.
6.5 Personalized Insights
The personalized insights that we presented to 27 of the 32 participants in the final survey are based
on the same measurements as the ones that are visualized in the retrospection. These insights were
created based on correlations and aggregations within the collected data and presented as natural
language sentences. The specific insights are presented in Table 3 and details on their creation can be
found in Section 4.3. To learn more about the value of the visualizations and the natural language
| Statement | Strongly agree | Agree | Neutral | Disagree | Strongly disagree | n/a |
|---|---|---|---|---|---|---|
| The collected and visualized data is relevant to me. | 18.5% | 51.9% | 22.2% | 7.4% | 0.0% | 0.0% |
| I learned something about my own work and perceived productivity by looking at the retrospection and reflecting. | 29.6% | 29.6% | 25.9% | 11.1% | 0.0% | 3.7% |
| Answering the perceived productivity pop-up questions increased my awareness about my work and perceived productivity. | 18.5% | 40.7% | 25.9% | 7.4% | 7.4% | 0.0% |
| Installing and running the tool raised my awareness about my work and perceived productivity. | 22.2% | 59.3% | 11.1% | 3.7% | 3.7% | 0.0% |
| I used the daily retrospection to reflect about my past work day. | 11.1% | 37.0% | 11.1% | 29.6% | 7.4% | 3.7% |
| I used the weekly retrospection to reflect about my past work week. | 11.5% | 30.8% | 23.1% | 23.1% | 7.7% | 3.8% |
| The retrospection helps me to learn how I spend my time. | 29.6% | 55.6% | 0.0% | 11.1% | 0.0% | 3.7% |
| The retrospection helps me to learn more about my perceived productive times. | 25.9% | 33.3% | 25.9% | 7.4% | 3.7% | 3.7% |
| I now know more about why and when I feel I am productive or unproductive. | 22.2% | 40.7% | 14.8% | 18.5% | 3.7% | 0.0% |
| I tried to change some of my habits or patterns based on what I learned from reflecting about my work. | 14.8% | 25.9% | 11.1% | 40.7% | 3.7% | 3.7% |

Table 2. Survey Responses on Awareness Change.
insights, we asked participants to rate the novelty of each personalized insight. Participants’ responses were mixed with respect to the novelty of the automatically generated personalized insights that presented correlations and aggregates within the data in natural language. When rated on a scale from ‘extremely novel’ to ‘not novel at all’, only 5 of the 15 personalized insights (marked with an asterisk ‘*’ in Table 3) were rated as ‘very novel’ or ‘extremely novel’ by more than half of the participants. This means that participants had gained knowledge about most insights either before or during the study. The five insights that were rated as ‘very novel’ or ‘extremely novel’ by more than half of the participants are all correlations between two distinct data categories, so-called multi-faceted correlations [ ], rather than simple aggregates, called uni-faceted correlations, which are easier to understand from simple visualizations [ ]. One participant also suggested integrating these novel personalized insights into the retrospection, since it was easier to draw connections between two distinct data categories using natural-language statements, similar to what Bentley et al. [ ] found. Research by Jones and Kelly [ ] has shown that multi-faceted correlations presented by self-monitoring tools are of higher interest to users than uni-faceted correlations. Paired with our findings above, this suggests using visualizations to present uni-faceted correlations and natural language sentences to present more complex multi-faceted correlations. Future work could further investigate the effectiveness of these personalized insights and their impact on behavior at work.
Design Recommendation C.1: Present multi-faceted correlations using natural language, as users often miss them when reflecting with visualizations.
6.6 Potential Impact on Behavior at Work
When we explicitly asked participants whether they thought they had actually changed their behavior during the field study based on the insights they received from using the application, 40.7% reported that they had changed some of their habits based on what they learned from reflecting about their work. Participants mentioned trying to better plan their work (6%), e.g. by taking advantage of their more productive afternoons, trying to optimize how they spend their time on emails (13%), or trying to focus better and avoid distractions (19%).
40.7% of the participants self-reported that they did not change their behavior, either because they
did not want to change something (6%) or they were not sure yet what to change (13%). The latter
ones mentioned that they needed more time to self-monitor their current behavior and learn more
Proc. ACM Hum.-Comput. Interact., Vol. 1, No. 2, Article 79. Publication date: November 2017.
Design Recommendations for Workplace Self-Monitoring 79:17
| Personalized Insight | Extremely novel | Very novel | … | Not novel at all | Yes | No | Don't know |
|---|---|---|---|---|---|---|---|
| 1. The program you spend most time is X, followed by Y. | 4.0% | 16.0% | 36.0% | 44.0% | 24.0% | 68.0% | 8.0% |
| 2. The program you switch to the most is X. | 11.5% | 30.8% | 34.6% | 23.1% | 23.1% | 65.4% | 11.5% |
| 3. You spend X% of the time on your computer in program X, Y, and Z. | 0.0% | 32.0% | 32.0% | 36.0% | 28.0% | 60.0% | 12.0% |
| 4. X is the program you focus on the longest. | 17.4% | 21.7% | 26.1% | 34.8% | 17.4% | 73.9% | 8.7% |
| 5.* You feel [more/less] productive when you are focused less. | 23.5% | 29.4% | 17.6% | 29.4% | 52.9% | 23.5% | 23.5% |
| 6. When you feel productive, you spend more time in program X than in Y. | 15.0% | 20.0% | 10.0% | 55.0% | 30.0% | 45.0% | 25.0% |
| 7.* When you feel unproductive, you spend more time in program X than in Y. | 27.8% | 22.2% | 22.2% | 27.8% | 38.9% | 38.9% | 22.2% |
| 8. You spend more time in Outlook in the [morning/afternoon] than [afternoon/morning]. | 4.8% | 28.6% | 33.3% | 33.3% | 23.8% | 66.7% | 9.5% |
| 9.* You usually work more focused in the [morning/afternoon]. | 26.1% | 30.4% | 34.8% | 8.7% | 52.2% | 43.5% | 4.3% |
| 10. On average, you spend X hours on your computer per work day. | 31.8% | 18.2% | 22.7% | 27.3% | 45.5% | 40.9% | 13.6% |
| 11.* You feel more productive on days you spend [more/less] time on your computer. | 23.5% | 35.3% | 11.8% | 29.4% | 35.3% | 64.7% | 0.0% |
| 12. You feel [more/less] productive when you send more emails. | 14.3% | 14.3% | 42.9% | 28.6% | 35.7% | 57.1% | 7.1% |
| 13. You feel [more/less] productive when you have more meetings. | 10.0% | 20.0% | 50.0% | 20.0% | 40.0% | 50.0% | 10.0% |
| 14. You usually feel more productive in the [morning/afternoon]. | 8.7% | 34.8% | 39.1% | 13.0% | 39.1% | 47.8% | 13.0% |
| 15.* You usually take X long breaks (15+ minutes) and Y short breaks (2-15 minutes) from your computer per day. | 21.7% | 52.2% | 17.4% | 8.7% | 43.5% | 47.8% | 8.7% |

Table 3. Participants’ Ratings on the Novelty (columns 2-5, from ‘extremely novel’ to ‘not novel at all’) and Potential for Behavior Change (columns 6-8: yes / no / don’t know) of Personalized Insights. Insights marked with an asterisk ‘*’ were rated ‘very novel’ or ‘extremely novel’ by more than half of the participants.
about their habits, and that WorkAnalytics does not offer much help yet in incentivizing or motivating
them to change their behavior. In particular, participants stated that the visualizations and correlations
were not concrete and actionable enough to know what or how to change:
“While having a retrospection on my time is a great first step, I gained [. . .] interesting insights and realized some bad assumptions. But ultimately, my behavior didn’t change much. Neither of them have much in way of a carrot or a stick.” -
“It would be nice if the tool could provide productivity tips - ideally tailored to my specific habits and based on insights about when I’m not productive.” - F15
Several participants went on to make specific recommendations for more concrete and actionable
support to motivate behavior change. These recommendations ranged from pop-ups encouraging more focused work, to recommending a break from work, all the way to intervening and blocking certain applications or web sites for a certain time:
“If [the tool] thinks I am having a productive day, it should just leave me alone and not ask any questions. If I am having an unproductive day and [it] can help me overcome it (e.g. go home and get some sleep) the tool should suggest that.” -
“Warnings if time on unproductive websites exceeds some amount, and perhaps provide a way for the user to block those sites (though not forced).” - F29
When we explicitly asked participants to rate whether the 15 personalized insights made them think about or plan their work differently, the results indicated that most of the 15 personalized insights are again not actionable enough to foster a behavior change (see results on the right side of Table 3). The five insights with the highest potential (between 40% and 52.9% of participants agreed)
are mostly related to work fragmentation and focus on work.
Design Recommendation C.2: Self-monitoring insights often need to be very concrete and actionable to foster behavior change at work.
7 Discussion
This section discusses implications that emerged from our study with respect to long-term user engagement, awareness about teamwork and collaboration and, ultimately, behavior change.
79:18 A. Meyer et al.
7.1 Design for Personalization
One of our goals was to find out whether the expectations of software developers for a self-monitoring
approach are similar or diverging. While existing commercial self-monitoring tools to quantify our lives, such as the Fitbit [ ], offer only a few options for personalization and are still successful at motivating users to live a healthier life [ ], our results on self-monitoring at work suggest that personalization is crucial.
In the pilot studies and the field study, participants uniformly expected different measurements to be visualized at different levels of granularity, similar to findings in other areas [ ]. These individual expectations might be explained by the very different types of tasks and work that software developers, even with very similar job profiles, have to accomplish [ ]. The ability to customize which measurements are captured and how they are visualized is one way to support this personalization. Such customizability could not only foster interest in long-term usage, as data relevant to the user is available, but could also reduce privacy concerns that software developers might have.
While many participants were initially skeptical about self-monitoring their work, we received no
privacy complaints and most participants (33 of 43) even shared their data with us for the analysis.
Almost all participants even went one step further: after a few days of using WorkAnalytics and becoming certain that their data was treated confidentially, they started to comment on possible extensions and additional measures for their self-monitoring at work. This includes more insights about their collaborative activities with other people, as discussed in more detail later in this section, but also adding even more measurements specific to their job as software developers, such as the commits they make to the version control system or insights into their patterns of testing and debugging.
While it might seem surprising that developers requested many development-unrelated measures for self-monitoring their work, this can be explained by the amount of time they spend on development-related activities, on average between 9% and 21%, versus other activities, such as collaborating (45%) or browsing the web (17%) [ ]. As most study participants (84.6%) were interested in continuing to use WorkAnalytics after the study had ended, we concluded that the initially identified design elements to support various individual needs, actively engage the user, and enable more multi-faceted insights are valuable for self-monitoring at work.
7.2 Increased Engagement through Experience Sampling
As noted in previous research, many self-monitoring approaches suffer from extremely low user engagement with the data [ ]. For example, RescueTime, which visualizes the captured data on a dashboard in the browser, was found to be used only a few seconds per day (mean=4.68, SD=12.03) [ ]. Similar to the reports in our field study, a reason for this low engagement might be that users forget about the availability of the data visualizations. A simple and periodic reminder, e.g., to let users know that there is a new summary of the work week, might increase the engagement with these visualizations and dashboards. Recently, researchers have explored how adding an ambient widget that presents a summary of the captured data, always visible on the user’s screen, can increase the engagement with the data (e.g., [ ]). For example, the ambient widget by Kim et al. [40] increased the use of RescueTime to about a minute a day.
In this paper, we assessed another approach, namely a periodic pop-up to self-report productivity. Our findings show that the self-report helped users quickly reflect on how efficiently they spent their time, which also resulted in increased engagement. Our results show that experience sampling is a feasible method to manually collect data that is difficult to capture automatically, and that it is (mostly) appreciated as long as users benefit from self-reporting, e.g. by getting additional
or more fine-grained insights. It is up to future work to determine how long the positive effects of self-reporting or ambient widgets last, whether users might at some point lose interest after having learnt ‘enough’ about their work, and whether it might be beneficial to only enable these features in certain time periods. More research is required to understand how this generalizes to other work domains.
7.3 Actionability for Behavior Change
Most health and sports tracking systems have been shown to foster positive behavior changes due to
increased self-awareness. In our study, 40.7% of the participants explicitly stated that the increased
self-awareness motivated them to adapt their behavior. While motivating changes in behavior was
not a primary goal, the study gave valuable insights into where and how self-monitoring tools at
work could support developers in the process. The very diverse set of insights that participants wished for in WorkAnalytics made it more difficult to observe a specific problem behavior and define a concrete, actionable goal for a behavior change, which is a basic requirement for starting a change according to the Transtheoretical Model (TTM) of behavior change [ ]. Rather than just enabling an increased
self-awareness, it might also be important to provide users with concrete recommendations for
active interventions and alerts when certain thresholds are reached. Participants suggested blocking distracting websites after the user has spent a certain amount of time on them, or suggesting a break after a long time without one, similar to what was recently proposed [ ]. At the same time, not all insights are actionable, as developers sometimes have little power to act on an insight, similar to what Mathur et al. found when visualizing noise and air quality at the workplace [ ]. As an example, most
developers can likely not just stop reading and responding to emails. Another extension to possibly
make insights more actionable is to let users formulate concrete behavior change goals based on the
insights they gain from using the retrospection and experience sampling components. For example, a user could set a goal to regularly take a break to relax or to have an empty email inbox at the end of the day. This goal-setting component could leverage experience sampling further and learn when and how users are interested in and open to receiving recommendations on how to better reach their goal.
Approaches aiming to foster long-term behavior change need to offer means to actively monitor and maintain the change [ ] and to help avoid lapses, a frequent reason for abandoning behavior change goals [ ]. In the future, we plan to experiment with and evaluate these different forms
of how insights could be improved to make them more actionable, and then evaluate the longer-term
impact of WorkAnalytics on software developers’ actual behavior at work.
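The threshold-based interventions participants asked for (warnings about distracting sites, break suggestions) could take the shape of simple rules over the captured data. The thresholds and wording below are illustrative assumptions, not tested WorkAnalytics features.

```python
# Sketch: making insights actionable via simple threshold rules, along
# the lines participants suggested (warn about time on distracting
# websites, suggest a break). Thresholds are illustrative assumptions.
def recommendations(minutes_on_distracting_sites: float,
                    minutes_since_last_break: float,
                    site_limit: float = 30,
                    break_limit: float = 90) -> list[str]:
    """Return concrete, actionable suggestions once limits are exceeded."""
    recs = []
    if minutes_on_distracting_sites > site_limit:
        recs.append(f"You spent over {site_limit:.0f} minutes on "
                    "distracting websites today - consider blocking "
                    "them for a while.")
    if minutes_since_last_break > break_limit:
        recs.append(f"You have worked {minutes_since_last_break:.0f} "
                    "minutes without a break - time to step away from "
                    "the computer.")
    return recs

# 45 minutes on distracting sites, 120 minutes without a break:
for rec in recommendations(45, 120):
    print(rec)
```

Following the participants' caveat, such rules should suggest rather than force an action, and thresholds should be user-configurable.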
7.4 Benchmarking
A recurring wish of participants was a way to benchmark their work behavior and achievements against their team or other developers with similar job profiles, and to improve their work habits based on comparisons with others, similar to what was previously described by Wood [ ]. Given the privacy concerns at work, adding such a component to workplace self-monitoring could, however, severely increase pressure and stress for users who are performing below average. Also, our participants’ interest in a high variety and large set of work-related measures indicates that even within one domain—software developers in our case—users might work on fairly different tasks and that it might be impossible to find a ‘fair’ set of measures for comparing and
benchmarking individuals. More research is needed to examine how and in which contexts such a
social feature might be beneficial as well as which aggregated measures might be used for some
sort of comparison without privacy concerns. For example, one could anonymously collect the data
related to developers’ work habits, such as fragmentation, time spent on activities, and achievements,
combine them with job profile details and then present personalized insights and comparisons to
other developers with a similar job profile. One such insight could be to let the developer know that
others spend more time reading development blogs to educate themselves or that they usually have fewer meetings in a work week. Besides having anonymous comparisons between developers, it could
further be beneficial to let users compare their work habits with their previous self, e.g. from one
month ago, and enable them to reflect on how their behaviors change over time. Although research has shown that benchmarking features in physical activity trackers foster competition with peers to be more active [ ], additional research is needed to determine whether they also lead to positive behavior change at the workplace.
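The anonymous comparison envisioned here could report only an aggregate of peers with a similar job profile, never individual values. The sketch below uses the peer median; the metric, profile grouping, and wording are hypothetical.

```python
# Sketch of anonymous benchmarking: compare one developer's metric
# against the median of peers with the same job profile, exposing only
# the aggregate. Metric name and peer values are made up.
from statistics import median

def benchmark(own_value: float, peer_values: list[float],
              metric: str) -> str:
    """Natural-language comparison against the anonymized peer median."""
    peer_median = median(peer_values)
    if own_value > peer_median:
        return (f"You have more {metric} than the median developer "
                f"with a similar job profile ({peer_median:g}).")
    if own_value < peer_median:
        return (f"You have fewer {metric} than the median developer "
                f"with a similar job profile ({peer_median:g}).")
    return f"Your {metric} match the median of similar developers."

# e.g. own weekly meetings vs. anonymized peers with the same profile
print(benchmark(9, [4, 5, 6, 7, 8], "meetings per week"))
```

Using a median rather than a mean keeps a single extreme peer from skewing the comparison, and reporting only the aggregate addresses part of the privacy concern raised above.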
7.5 Team-Awareness
Even though most insights available within WorkAnalytics appear to be about each user’s own work
habits, some insights also reveal details about the individuals’ collaboration and communication
patterns with their team and other stakeholders. These are, for example, insights about their meeting,
email, instant messaging, social networking, and code review behavior. Nonetheless, participants
were interested in even more measures, especially with respect to revealing (hidden) collaboration and
communication patterns within their teams. Having detailed insights into how the team coordinates
and communicates at work could help developers make more balanced adjustments with respect to
the impact their behavior change might have on their team. For example, being aware of co-workers’
most and least productive times could help to schedule meetings at more optimal times, similar to
what Begole suggested for teams distributed across time zones [ ]. Related to an approach suggested by Anvik et al., where work items and bug reports were automatically assigned to developers based on previously assigned and resolved work items [ ], it could be beneficial for the coordination and planning of task assignments to also take into account each developer’s current capacity and workload. Being more aware of the tasks each member of the team is currently working on and how much progress they are making could also be useful for managers or team leads to identify problems early, e.g. a developer who is blocked on a task [ ] or uses communication tools inefficiently [ ], and take appropriate action. A similar approach, WIPDash, has been shown to improve daily stand-up
meetings by providing teams with shared dashboard summaries of work items each developer was
assigned to and has completed, as these dashboards increase the awareness about each developer’s
progress on tasks [
]. Visualizing the current productivity and focus to co-workers could prevent
interruptions at inopportune moments, where resuming the interrupted task might be more costly than
at a moment of low focus. To reduce inopportune interruptions at work, Züger et al. suggested visualizing the current focus to the team by using a “traffic light like lamp” [73].
While the envisioned additions and extensions to WorkAnalytics might increase an individual’s productivity, they might also negatively affect the overall team productivity or the collaboration within teams. For example, a developer who is stuck on a task cannot ask a co-worker for help if that co-worker blocks out interruptions. This is why self-monitoring tools for teams at work could not only motivate a
collective improvement of the team-productivity, but also help to monitor the success and impact of
these changes on other stakeholders. Future work could explore how self-monitoring at work supports
team collaboration, by analyzing collaboration practices within development teams and comparing
them to other teams. This work could be based on the Model of Regulation, recently introduced by Arciniegas-Mendez et al. [ ], as it helps to systematically evaluate and understand how teams regulate their own tasks and activities and those of other team members, and how they create a shared understanding of their project goals.
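One concrete use of shared team data mentioned above is scheduling meetings at times that disrupt focused work the least. The sketch below picks, from a set of candidate hours, the one with the lowest combined focus across the team; the 0-1 hourly focus scores and team members are hypothetical.

```python
# Sketch: use co-workers' typical hourly focus profiles to propose a
# meeting slot that interrupts focused work the least (lowest combined
# focus score). Focus scores per hour (0-1) are hypothetical.
def best_meeting_hour(team_focus: dict[str, dict[int, float]],
                      candidate_hours: list[int]) -> int:
    """Return the candidate hour with the lowest summed team focus."""
    return min(candidate_hours,
               key=lambda profile_hour: sum(
                   p[profile_hour] for p in team_focus.values()))

team = {
    "alice": {9: 0.9, 10: 0.8, 11: 0.4, 14: 0.3},
    "bob":   {9: 0.5, 10: 0.9, 11: 0.6, 14: 0.2},
}
print(best_meeting_hour(team, [9, 10, 11, 14]))  # → 14
```

In practice, only aggregated focus profiles should be shared with the scheduler, for the same privacy reasons discussed for benchmarking.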
We focused our work on one type of knowledge worker, software developers, to gather insights
into one work domain before generalizing to a broader range of knowledge workers in the future.
Software developers have been referred to as the knowledge worker prototype as they are often not
only the first ones to use and tweak tools, but also have lower barriers for building and improving
tools themselves [
]. While software developers experience extensive interaction and collaboration
with co-workers through their computer use, we believe that many of the observations made from
building and evaluating WorkAnalytics with developers are also helpful for self-monitoring tools in
other work domains, especially since the studied features and most tracked measures can be re-used
in or easily ported to other domains.
The main threat to the validity and generalizability of our results is external validity, since the field study participants were all from the same company and had limited gender diversity. We tried to mitigate this threat by advertising the study and selecting participants from
different teams in the company, at different stages of their project, and with varying amounts of
experience. Participants tested WorkAnalytics over a duration of several weeks and were studied in
their everyday, real-world work environment and not in an experimental exercise. Moreover, the
development of the application was designed together with participants from three other companies
of varying size, reducing the chance that we built an application that is just useful for software
developers at one company. Although our findings shed light on how awareness and engagement
can be increased, it is not clear how WorkAnalytics affects software developers using it over longer
than the three-week period studied. We are aware that there is a certain self-selection bias towards participants who are in general more willing to quantify various aspects of their life and to use the collected data to increase their awareness.
One way to improve the productivity and well-being of knowledge workers is to increase their
self-awareness about productivity at work through self-monitoring. Yet, little is known about the
expectations of and experience with self-monitoring at the workplace and how it impacts software
developers, one community of knowledge workers on which we focused. Based on previous work,
an iterative development process with 5 pilot studies, and a survey with 413 developers, we distilled design elements that we implemented and refined in WorkAnalytics, a technology probe for
self-monitoring at work. We then evaluated the effect of these design elements on self-awareness of
patterns of work and productivity and their potential impact on behavior change with 43 participants
in a field study, resulting in design recommendations.
We found that experience sampling, using minimally intrusive self-reporting, and the retrospective summary of the data enhance users’ engagement and increase their awareness about work and productivity. Participants reported that by using our self-monitoring approach, they made detailed observations about how they spend their time at work collaborating or working on tasks and when they usually feel more or less productive, and sorted out misconceptions they had about their activities
pursued at work, such as spending a surprisingly high amount of time collaborating with others via
email. Our work provides a set of design recommendations for building self-monitoring tools for
developers’ work and possibly other types of knowledge workers. We discuss potential future work
to further increase engagement with the data and to enhance the insights’ actionability by providing
users with recommendations to improve their work, by adding social features to motivate users to
compete with their peers, and by increasing the team awareness to help teams reduce interruptions,
improve the scheduling of meetings, and the coordination of task assignments.
Acknowledgments
The authors would like to thank the study participants and the anonymous reviewers for their valuable feedback.
REFERENCES
Elena Agapie, Daniel Avrahami, and Jennifer Marlow. 2016. Staying the Course: System-Driven Lapse Management for
Supporting Behavior Change. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems.
Jessica S. Ancker and David Kaufman. 2007. Rethinking health numeracy: A multidisciplinary literature review. Journal
of the American Medical Informatics Association 14, 6 (2007), 713–721.
John Anvik, Lyndon Hiew, and Gail C. Murphy. 2006. Who Should Fix This Bug?. In Proceedings of the 28th
International Conference on Software Engineering (ICSE ’06). ACM, 361–370.
[4] Microsoft Graph API. 2017. (2017). Retrieved July 9, 2017.
Maryi Arciniegas-Mendez, Alexey Zagalsky, Margaret-Anne Storey, and Allyson Fiona Hadwin. 2017. Using the
Model of Regulation to Understand Software Development Collaboration Practices and Tool Support. In Proceedings
of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW ’17). ACM,
Alberto Bacchelli and Christian Bird. 2013. Expectations, Outcomes, and Challenges of Modern Code Review. In
Proceedings of the 2013 International Conference on Software Engineering. 712–721.
Lyn Bartram. 2015. Design Challenges and Opportunities for Eco-Feedback in the Home. IEEE Computer Graphics
and Applications 35, 4 (2015).
James B. Begole, John C. Tang, Randall B. Smith, and Nicole Yankelovich. 2002. Work Rhythms: Analyzing Visualiza-
tions of Awareness Histories of Distributed Groups. 230 (2002).
M. Beller, I. Levaja, A. Panichella, G. Gousios, and A. Zaidman. 2016. How to Catch ’Em All: WatchDog, a Family of
IDE Plug-Ins to Assess Testing. In 2016 IEEE/ACM 3rd International Workshop on Software Engineering Research and
Industrial Practice (SER IP). 53–56.
Frank Bentley, Konrad Tollmar, Peter Stephenson, and Levy Laura. 2013. Health Mashups: Presenting Statistical Patterns
between Wellbeing Data and Context in Natural Language to Promote Behavior Change. 20, 5 (2013), 1–27.
Ann Brown. 1992. Design Experiments: Theoretical and Methodological Challenges in Creating Complex Interventions
in Classroom Settings. Journal of the Learning Sciences 2, 2 (1992), 141–178.
Chloë Brown, Christos Efstratiou, Ilias Leontiadis, Daniele Quercia, and Cecilia Mascolo. 2013. Tracking Serendipitous
Interactions: How Individual Cultures Shape the Office. CoRR (2013).
[13] Rafael A. Calvo and Dorian Peters. 2014. Self-Awareness and Self-Compassion. MIT Press, 304.
Eun Kyoung Choe, Nicole B. Lee, Bongshin Lee, Wanda Pratt, and Julie a. Kientz. 2014. Understanding quantified-
selfers’ practices in collecting and exploring personal data. Proceedings of the 32nd annual ACM conference on Human
factors in computing systems (CHI ’14) (2014), 1143–1152.
Jan Chong and Rosanne Siino. 2006. Interruptions on software teams: a comparison of paired and solo programmers. In
Proceedings of the 2006 20th anniversary conference on Computer supported cooperative work. ACM, 29–38.
[16] Codealike. 2017. (2017). Retrieved July 9, 2017.
Emily I. M. Collins, Anna L. Cox, Jon Bird, and Daniel Harrison. 2014. Social Networking Use and RescueTime: The
Issue of Engagement. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous
Computing: Adjunct Publication (UbiComp ’14 Adjunct). ACM, 687–690.
Sunny Consolvo, Predrag Klasnja, David W. McDonald, Daniel Avrahami, Jon Froehlich, Louis LeGrand, Ryan Libby,
Keith Mosher, and James A. Landay. 2008. Flowers or a Robot Army?: Encouraging Awareness & Activity with
Personal, Mobile Displays. In Proceedings of the 10th International Conference on Ubiquitous Computing (UbiComp
’08). ACM, 54–63.
Sunny Consolvo, David W. McDonald, Tammy Toscos, Mike Y Chen, Jon Froehlich, Beverly Harrison, Predrag Klasnja,
Anthony LaMarca, Louis LeGrand, Ryan Libby, Ian Smith, and James A. Landay. 2008. Activity sensing in the wild: a
field trial of ubifit garden. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI
’08). ACM, 1797–1806.
Kate Crawford, Jessa Lingel, and Tero Karppi. 2015. Our metrics, ourselves: A hundred years of self-tracking from the
weight scale to the wrist wearable device. European Journal of Cultural Studies 18 (2015), 479–496.
Mary Czerwinski, Eric Horvitz, and Susan Wilhite. 2004. A diary study of task switching and interruptions. In
Proceedings of the SIGCHI conference on Human factors in computing systems. ACM, 175–182.
Tom DeMarco and Tim Lister. 1985. Programmer performance and the effects of the workplace. In Proceedings of the
8th international conference on Software engineering. IEEE Computer Society Press, 268–272.
Daniel A Epstein, Daniel Avrahami, and Jacob T Biehl. 2016. Taking 5: Work-Breaks, Productivity, and Opportunities
for Personal Informatics for Knowledge Workers. In Proceedings of the 2016 CHI Conference on Human Factors in
Computing Systems.
[24] Fitbit. 2017. (2017). Retrieved July 9, 2017.
[25] B. J. Fogg. 2003. Persuasive Technology: Using Computers to Change What We Think and Do. Elsevier Science.
Thomas Fritz, Elaine M. Huang, Gail C. Murphy, and Thomas Zimmermann. 2014. Persuasive Technology in the Real
World: A Study of Long-term Use of Activity Sensing Devices for Fitness. In Proceedings of the SIGCHI Conference on
Human Factors in Computing Systems (CHI ’14). ACM, 487–496.
Mirta Galesic and Rocio Garcia-Retamero. 2011. Graph literacy a cross-cultural comparison. Medical Decision Making
31, 3 (2011), 444–457.
Roland Gasser, Dominique Brodbeck, Markus Degen, Jürg Luthiger, Remo Wyss, and Serge Reichlin. 2006. Persuasiveness of a mobile lifestyle coaching application using social facilitation. In International Conference on Persuasive Technology. Springer, 27–38.
Márcio Kuroki Gonçalves, Cleidson de Souza, and Víctor M. González. 2011. Collaboration, Information Seeking and
Communication: An Observational Study of Software Developers’ Work Practices. Journal of Universal Computer
Science 17, 14 (2011), 1913–1930.
Víctor M. González and Gloria Mark. 2004. Constant, Constant, Multi-tasking Craziness: Managing Multiple Working
Spheres. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’04). ACM, 113–120.
G. Hofstede. 1994. Cultures and Organizations: Software of the Mind : Intercultural Cooperation and Its Importance
for Survival. HarperCollins.
Victoria Hollis, Artie Konrad, and Steve Whittaker. 2015. Change of Heart: Emotion Tracking to Promote Behavior
Change. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15) (2015),
Dandan Huang, Melanie Tory, and Lyn Bartram. 2016. A Field Study of On-Calendar Visualizations. In Proceedings of
Graphics Interface 2016. 13–20.
[34] Hubstaff. 2017. (2017). Retrieved July 9, 2017.
[35] Watts S Humphrey. 2000. The Personal Software Process SM (PSP SM). November (2000).
Mikkel R Jakobsen, Roland Fernandez, Mary Czerwinski, Kori Inkpen, Olga Kulyk, and George G. Robertson. 2009.
WIPDash: Work Item and People Dashboard for Software Development Teams. In Proceedings of the 12th IFIP TC 13
International Conference on Human-Computer Interaction: Part II. Springer-Verlag, 791–804.
Simon L. Jones and Ryan Kelly. 2017. Dealing With Information Overload in Multifaceted Personal Informatics Systems.
Human Computer Interaction (2017), 1–48.
Matthew Kay, Eun Kyoung Choe, Jesse Shepherd, Benjamin Greenstein, Nathaniel Watson, Sunny Consolvo, and
Julie A. Kientz. 2012. Lullaby: A Capture and Access System for Understanding the Sleep Environment. In Proceedings
of the 2012 ACM Conference on Ubiquitous Computing (UbiComp ’12). ACM, 226–234.
[39] Allan Kelly. 2008. Changing Software Development: Learning to Become Agile. Wiley.
Young-Ho Kim, Jae Ho Jeon, Eun Kyoung Choe, Bongshin Lee, Kwonhyun Kim, and Jinwook Seo. 2016. TimeAware:
Leveraging Framing Effects to Enhance Personal Productivity. In Proceedings of the 2016 CHI Conference on Human
Factors in Computing Systems (CHI ’16). 272–283.
Predrag Klasnja, Sunny Consolvo, and Wanda Pratt. 2011. How to Evaluate Technologies for Health Behavior Change
in HCI Research. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 3063–3072.
Saskia Koldijk, Mark Van Staalduinen, Stephan Raaijmakers, and Wessel Kraaij. 2011. Activity-Logging for Self-
Coaching of Knowledge Workers. 0–3.
Ian Li, Anind Dey, and Jodi Forlizzi. 2010. A stage-based model of personal informatics systems. Proceedings of the
28th international conference on Human factors in computing systems (CHI ’10) (2010), 557.
Ian Li, Anind Dey, and Jodi Forlizzi. 2011. Understanding my data, myself: supporting self-reflection with Ubicomp
technologies. In Proceedings of the 13th international conference on Ubiquitous computing (UbiComp ’11). 405.
Paul Luo Li, Andrew J. Ko, and Jiamin Zhu. 2015. What Makes a Great Software Engineer?. In Proceedings of the 37th
International Conference on Software Engineering - Volume 1 (ICSE ’15). IEEE Press, 700–710.
James Lin, Lena Mamykina, Silvia Lindtner, Gregory Delajoux, and Henry Strub. 2006. Fish’n’Steps: Encouraging
Physical Activity with an Interactive Computer Game. In UbiComp 2006: Ubiquitous Computing. Lecture Notes in
Computer Science, Vol. 4206. Chapter 16, 261–278.
Lena Mamykina, Elizabeth Mynatt, Patricia Davidson, and Daniel Greenblatt. 2008. MAHI: Investigation of Social
Scaffolding for Reflective Thinking in Diabetes Management. In Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems (CHI ’08). ACM, 477–486.
[48] Manictime. 2017. (2017). Retrieved July 9, 2017.
Gloria Mark, Shamsi T. Iqbal, Mary Czerwinski, Paul Johns, and Akane Sano. 2016. Email Duration, Batching and
Self-interruption: Patterns of Email Use on Productivity and Stress. In Proceedings of the 2016 CHI Conference on
Human Factors in Computing Systems (CHI ’16), Vol. 21. 98–109.
Gloria Mark, Shamsi T. Iqbal, Mary Czerwinski, Paul Johns, and Akane Sano. 2016. Neurotics Can’t Focus: An in
situ Study of Online Multitasking in the Workplace. In Proceedings of the SIGCHI Conference on Human Factors in
Computing Systems. 1739–1744.
Proc. ACM Hum.-Comput. Interact., Vol. 1, No. 2, Article 79. Publication date: November 2017.
79:24 A. Meyer et al.
Akhil Mathur, Marc Van Den Broeck, Geert Vanderhulst, Afra Mashhadi, and Fahim Kawsar. 2015. Tiny Habits in
the Giant Enterprise: Understanding the Dynamics of a Quantified Workplace. In Proceedings of the Joint Interna-
tional Conference on Pervasive and Ubiquitous Computing and the International Symposium on Wearable Computers
(Ubicomp/ISWC’15). 577–588.
Daniel McDuff, Amy Karlson, and Ashish Kapoor. 2012. AffectAura: An Intelligent System for Emotional Memory. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '12).
André N. Meyer, Laura E. Barton, Gail C. Murphy, Thomas Zimmermann, and Thomas Fritz. 2017. The Work Life of
Developers: Activities, Switches and Perceived Productivity. IEEE Transactions on Software Engineering (2017), 1–15.
André N. Meyer, Thomas Fritz, Gail C. Murphy, and Thomas Zimmermann. 2014. Software Developers' Perceptions
of Productivity. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software
Engineering (FSE 2014). ACM, 19–29.
Rosemery O. Nelson and Steven C. Hayes. 1981. Theoretical explanations for reactivity in self-monitoring. Behavior
Modification 5, 1 (1981), 3–14.
Dewayne E. Perry, Nancy A. Staudenmayer, and Lawrence G. Votta. 1994. People, Organizations, and Process
Improvement. IEEE Software 11, 4 (1994), 36–45.
James O. Prochaska and Wayne F. Velicer. 1997. The Transtheoretical Change Model of Health Behavior. American
Journal of Health Promotion 12, 1 (1997), 38–48.
RescueTime. 2017. (2017). Retrieved July 9, 2017.
John Rooksby, Parvin Asadzadeh, Mattias Rost, Alistair Morrison, and Matthew Chalmers. 2016. Personal Tracking of
Screen Time on Digital Devices. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems.
John Rooksby, Mattias Rost, Alistair Morrison, and Matthew Chalmers. 2014. Personal Tracking as Lived Informatics. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14).
Slife. 2017. (2017). Retrieved July 9, 2017.
Margaret-Anne Storey, Leif Singer, Brendan Cleary, Fernando Figueira Filho, and Alexey Zagalsky. 2014. The
(R)Evolution of Social Media in Software Engineering. In FOSE 2014 Proceedings of the on Future of Software
Engineering. 100–116.
Margaret Anne Storey, Alexey Zagalsky, Fernando Figueira Filho, Leif Singer, and Daniel M. German. 2017. How
Social and Communication Channels Shape and Challenge a Participatory Culture in Software Development. IEEE
Transactions on Software Engineering 43, 2 (2017), 185–204.
Anselm Strauss and Juliet Corbin. 1998. Basics of Qualitative Research: Techniques and Procedures for Developing
Grounded Theory.
Link to Supplementary Material. 2017. (2017).
Tammy Toscos, Anne Faber, Shunying An, and Mona P. Gandhi. 2006. Chick Clique: Persuasive Technology to Motivate
Teenage Girls to Exercise. In CHI ’06: CHI ’06 extended abstracts on Human factors in computing systems. 1873–1878.
Christoph Treude, Fernando Figueira Filho, and Uirá Kulesza. 2015. Summarizing and Measuring Development Activity.
In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering. 625–636.
Christoph Treude and Margaret-Anne Storey. 2010. Awareness 2.0: Staying Aware of Projects , Developers and Tasks
using Dashboards and Feeds. In 2010 ACM/IEEE 32nd International Conference on Software Engineering. 365–374.
Bogdan Vasilescu, Kelly Blincoe, Qi Xuan, Casey Casalnuovo, Daniela Damian, Premkumar Devanbu, and Vladimir
Filkov. 2016. The Sky is Not the Limit: Multitasking Across GitHub Projects. In Proceedings of the 38th International
Conference on Software Engineering (ICSE '16). 994–1005.
WakaTime. 2017. (2017). Retrieved July 9, 2017.
Steve Whittaker, Victoria Hollis, and Andrew Guydish. 2016. Don’t Waste My Time: Use of Time Information Improves
Focus. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16).
Joanne V. Wood. 1989. Theory and research concerning social comparisons of personal attributes. Psychological
Bulletin 106, 2 (1989), 231–248.
Manuela Züger, Christopher Corley, André N. Meyer, Boyang Li, Thomas Fritz, David Shepherd, Vinay Augustine,
Patrick Francis, Nicholas Kraft, and Will Snipes. 2017. Reducing Interruptions at Work: A Large-Scale Field Study of
FlowLight. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). 61–72.
Received May 2017; revised July 2017; accepted November 2017
... Meyer et al. [22,23] characterized the perceptions of productivity at Microsoft. In surveys of 413 developers at Microsoft, they identified six groups of developers with similar perceptions of productivity: social, lone, focused, balanced, leading, and goal-oriented developers. ...
... 2,17 wrote this "Arrange seminar, talk through which current students can get idea of software industry life from alumni.". (7) 22,15 . Respondents expect an overall learning environment from the universities. ...
... 2,42 expects "Future proof vision, good leading qualities". (3)26,22 . 20.95% of participants want their manager to be helpful in their career opportunities by providing recognition, training, etc. ...
... Figure 4 summarizes the three expectation types we observed from the new hires. (1) 22,18 . We have found that the recruits are expected to focus mostly on their learning as per most of the respondents (52.94%). ...
Background: Studies on developer productivity and well-being find that the perceptions of productivity in a software team can be a socio-technical problem. Intuitively, problems and challenges can be better handled by managing expectations in software teams. Aim: Our goal is to understand whether the expectations of software developers vary towards diverse stakeholders in software teams. Method: We surveyed 181 professional software developers to understand their expectations from five different stakeholders: (1) organizations, (2) managers, (3) peers, (4) new hires, and (5) government and educational institutions. The five stakeholders are determined by conducting semi-formal interviews of software developers. We ask open-ended survey questions and analyze the responses using open coding. Results: We observed 18 multi-faceted expectation types. While some expectations are more specific to a stakeholder, other expectations are cross-cutting. For example, developers expect work benefits from their organizations, but expect the adoption of standard software engineering (SE) practices from their organizations, peers, and new hires. Conclusion: Out of the 18 categories, three are related to career growth. This observation supports previous research that happiness cannot be assured by simply offering more money or a promotion. Among the most frequent responses, we find expectations from educational institutions to offer relevant teaching and from governments to improve job stability, which indicate the increasingly important roles of these organizations in helping software developers. This observation can be especially true during the COVID-19 pandemic.
... Research has been done to understand as well as modify computer and phone usage, e.g., encourage physical activity [3], and enable self-tracking [29] and better focus [2,23]. There has also been research on identifying user needs, e.g., self-monitoring [25] and productivity needs [10,17], and what people consider to be work-breaks [9] and what breaks are helpful for productivity [9]. ...
... There are different scales for accessing computer usage, e.g., computer use scale [27], attitudes toward computer usage scale [28], and compulsive internet use scale [24]. Researchers have also used in situ studies to investigate computerusage behavior change needs, e.g., self-monitoring [25] and break prompts that discourage sedentary behavior [20]. ...
Technology-based screentime, the time an individual spends engaging with their computer or cell phone, has increased exponentially over the past decade, but perhaps most alarmingly amidst the COVID-19 pandemic. Although many software-based interventions exist to reduce screentime, users report a variety of issues relating to the timing of the intervention, the strictness of the tool, and its ability to encourage organic, long-term habit formation. We develop guidelines for the design of behaviour intervention software by conducting a survey to investigate three research questions and further inform the mechanisms of computer-related behaviour change applications. RQ1: What do people want to change and why/how? RQ2: What applications do people use or have used, why do they work or not, and what additional support is desired? RQ3: What are helpful/unhelpful computer breaks and why? Our survey had 68 participants and three key findings. First, time management is a primary concern, but emotional and physical side-effects are equally important. Second, site blockers, self-trackers, and timers are commonly used, but they are ineffective as they are easy to ignore and not personalized. Third, away-from-computer breaks, especially involving physical activity, are helpful, whereas on-screen breaks are unhelpful, especially when they are long, because they are not refreshing. We recommend personalized and closed-loop computer-usage behaviour change support and especially encouraging off-the-computer screentime breaks.
... By default the popup appears on the participant's monitor once per hour. We define this interval inspired by the study design of Meyer et al. [37], who studied the developers' productivity using an analogous pop-up. Specifically, they report that 60 minutes was a good balance between the intrusiveness and the necessity of collecting as much data as possible, as also emerged during our pilot study. ...
... When the developers do not want to be interrupted, they can postpone answering the pop-up by specifying the delay in minutes. To reduce the intrusiveness of the pop-up we follow the recommendations of Meyer et al. [37] and allow the participants to dismiss the pop-up for the entire day. Conversely, the participants can invoke the pop-up manually when experiencing strong emotions that they believe are important to report. ...
Emotions are known to impact cognitive skills, thus influencing job performance. This is also true for software development, which requires creativity and problem-solving abilities. In this paper, we report the results of a field study involving professional developers from five different companies. We provide empirical evidence that a link exists between emotions and perceived productivity at the workplace. Furthermore, we present a taxonomy of triggers for developers' positive and negative emotions, based on the qualitative analysis of participants' self-reported answers collected through daily experience sampling. Finally, we experiment with a minimal set of non-invasive biometric sensors that we use as input for emotion detection. We found that positive emotional valence, neutral arousal, and high dominance are prevalent. We also found a positive correlation between emotional valence and perceived productivity, with a stronger correlation in the afternoon. Both social and individual breaks emerge as useful for restoring a positive mood. Furthermore, we found that a minimum set of non-invasive biometric sensors can be used as a predictor for emotions, provided that training is performed on an individual basis. While promising, our classifier performance is not yet robust enough for practical usage. Further data collection is required to strengthen the classifier, by also implementing individual fine-tuning of emotion models.
... In surveys of 413 developers at Microsoft, Meyer et al. (2017a) identified six groups of developers with similar perceptions of productivity: social, lone, focused, balanced, leading, and goal-oriented developers. At the same time, Meyer et al. (2017b) analyzed the impact of self-monitoring to improve the productivity of knowledge workers. They studied 20 software developers through a user-feedback driven development approach and surveyed 413 developers to infer design elements for workplace self-monitoring. ...
Many software developers started to work from home on short notice during the early periods of COVID-19. A number of previous papers have studied the wellbeing and productivity of software developers during COVID-19. The studies mainly use surveys based on predefined questionnaires. In this paper, we investigate the problems and joys that software developers experienced during the early months of COVID-19 by analyzing their discussions in the online forum devRant, where discussions can be open and not bound by predefined survey questionnaires. The devRant platform is designed for developers to share their joys and frustrations of life. We manually analyze 825 devRant posts between January and April 12, 2020 that developers created to discuss their situation during COVID-19. WHO declared COVID-19 a pandemic on March 11, 2020. As such, our data offers us insights into the early months of COVID-19. We manually label each post along two dimensions: the topics of the discussion and the expressed sentiment polarity (positive, negative, neutral). We observed 19 topics that we group into six categories: Workplace & Professional aspects, Personal & Family well-being, Technical Aspects, Lockdown preparedness, Financial concerns, and Societal and Educational concerns. Around 49% of the discussions are negative and 26% are positive. We find evidence of developers' struggles with lack of documentation to work remotely and with their loneliness while working from home. We find stories of their job loss with little or no savings to fall back on. The analysis of developer discussions in the early months of a pandemic will help various stakeholders (e.g., software companies) make important decisions early to alleviate developer problems if such a pandemic or similar emergency situation occurs in the near future. Software engineering research can make further efforts to develop automated tools for remote work (e.g., automated documentation).
... As Software developers, being a special type of knowledge worker [43], often find themselves on the cutting edge of technology systems, they often build tools to support their work that can later support other types of knowledge workers in technology (e.g., Emails, Internet Search, and Wiki Pages) [31]. For this reason, many have studied them as the prototype of knowledge workers-often pushing the boundaries of knowledge work [30]. ...
Developers are more than "nerds behind computers all day", they lead a normal life, and not all take the traditional path to learn programming. However, the public still sees software development as a profession for "math wizards". To learn more about this special type of knowledge worker from their first-person perspective, we conducted three studies to learn how developers describe a day in their life through vlogs on YouTube and how these vlogs were received by the broader community. We first interviewed 16 developers who vlogged to identify their motivations for creating this content and their intention behind what they chose to portray. Second, we analyzed 130 vlogs (video blogs) to understand the range of the content conveyed through videos. Third, we analyzed 1176 comments from the 130 vlogs to understand the impact the vlogs have on the audience. We found that developers were motivated to promote and build a diverse community, by sharing different aspects of life that define their identity, and by creating awareness about learning and career opportunities in computing. They used vlogs to share a variety of how software developers work and live -- showcasing often unseen experiences, including intimate moments from their personal life. From our comment analysis, we found that the vlogs were valuable to the audience to find information and seek advice. Commenters sought opportunities to connect with others over shared triumphs and trials they faced that were also shown in the vlogs. As a central theme, we found that developers use vlogs to challenge the misconceptions and stereotypes around their identity, work-life, and well-being. These social stigmas are obstacles to an inclusive and accepting community and can deter people from choosing software development as a career. We also discuss the implications of using vlogs to support developers, researchers, and beyond.
... The importance of feedback on wellbeing at the workplace has been addressed by several studies, looking at deploying persuasive technologies for behavior change at the office (Rogers et al., 2010;Ludden and Meekhof, 2016). Finally, a few studies explored the impact of mood self-tracking and sharing on the individual and collective wellbeing in the lived workspace (Church et al., 2010;Mora et al., 2011;Meyer et al., 2017), albeit without addressing ambient feedback in that context. ...
As the COVID-19 pandemic has forced many to work remotely from home, collaborating solely through digital technologies, a growing population of remote home workers are faced with profound wellbeing challenges. Passive sensing devices and ambient feedback have great potential to support the wellbeing of the remote workers, but there is a lack of background and understanding of the domestic workplace in terms of physical and affective dimensions and challenges to wellbeing. There are profound research gaps on wellbeing in the domestic workplace, with the current push for remote home and hybrid working making this topic timely. To address these gaps and shape a starting point for an “ambient workspaces” agenda, we conducted an exploratory study to map physical and affective aspects of working from home. The study involved both qualitative and quantitative measures of occupant experience, including sensor wristbands, and a custom web application for self-reporting mood and aspects of the environment. It included 13 participants for a period of 4 weeks, during a period of exclusive home working. Based on quantitative and qualitative analysis, our study addresses wellbeing challenges of the domestic workplace, establishes correlations between mood and physical aspects, and discusses the impact of feedback mechanisms in the domestic workplace on the behavior of remote workers. Insights from these observations are then used to inform a future design agenda for ambient technologies that supports the wellbeing of remote workers; addressing the design opportunities for ambient interventions in domestic workspaces. This work offers three contributions: 1) qualitatively and quantitatively informed understandings of the experiences of home-workers; 2) a future design agenda for “ambient home workspaces”; and 3) we propose three design concepts for ambient feedback and human–AI interactions in the built environment, to illustrate the utility of the design agenda.
The COVID-19 pandemic fundamentally changed the nature of work by shifting most in-person work to a predominantly remote modality as a way to limit the spread of the coronavirus. In the process, the shift to working-from-home rapidly forced the large-scale adoption of groupware technologies. Although prior empirical research examined the experience of working-from-home within small-scale groups and for targeted kinds of work, the pandemic provides HCI and CSCW researchers with an unprecedented opportunity to understand the psycho-social impacts of a universally mandated work-from-home experience rather than an autonomously chosen one. Drawing on boundary theory and a methodological approach grounded in humanistic geography, we conducted a qualitative analysis of Reddit data drawn from two work-from-home-related subreddits between March 2020 and January 2021. In this paper, we present a characterization of the challenges and solutions discussed within these online communities for adapting work to a hybrid or fully remote modality, managing reconfigured work-life boundaries, and reconstructing the home's sense of place to serve multiple, sometimes conflicting roles. We discuss how these findings suggest an emergent interplay among adapted work practice, reimagined physical (and virtual) spaces, and the establishment and continual re-negotiation of boundaries as a means for anticipating the long-term impact of COVID on future conceptualizations of productivity and work.
Behavior change research usually takes a theory-driven or application-specific approach. We took a user-centered view of real-world user needs and conducted a survey with 53 participants to investigate the overall behavior change goals and support preferences of everyday users. Our survey revealed three key themes. First, individual users have multiple behavior change goals, desired context types for behavior change reminders, and desired activities for self-tracking. Second, users have diverse and personalized desired actions, implementations, contexts, and reminders for their behavior change goals, as well as diverse preferences for behavior change support features and sensors. Third, users want to set custom personalized goals, reminder contexts, reminder messages, and even train custom machine learning models. Thus, users want multiple, diverse, and personalized behavior change support in the real world. We suggest a ‘convergence with connection and customization’ approach to meet the diverse, multiple, and personalized behavior change needs of everyday users.
Understanding developer productivity is important to deliver software on time and at reasonable cost. Yet, there are numerous definitions of productivity and, as previous research found, productivity means different things to different developers. In this paper, we analyze the variation in productivity perceptions based on an online survey with 413 professional software developers at Microsoft. Through a cluster analysis, we identify and describe six groups of developers with similar perceptions of productivity: social, lone, focused, balanced, leading, and goal-oriented developers. We argue why personalized recommendations for improving software developers’ work is important and discuss design implications of these clusters for tools to support developers’ productivity.
Due to the high number and cost of interruptions at work, several approaches have been suggested to reduce this cost for knowledge workers. These approaches predominantly focus either on a manual and physical indicator, such as headphones or a closed office door, or on the automatic measure of a worker's interruptibilty in combination with a computer-based indicator. Little is known about the combination of a physical indicator with an automatic interruptibility measure and its long-term impact in the workplace. In our research, we developed the FlowLight, that combines a physical traffic-light like LED with an automatic interruptibility measure based on computer interaction data. In a large-scale and long-term field study with 449 participants from 12 countries, we found, amongst other results, that the FlowLight reduced the interruptions of participants by 46%, increased their awareness on the potential disruptiveness of interruptions and most participants never stopped using it.
Many software development organizations strive to enhance the productivity of their developers. All too often, efforts aimed at improving developer productivity are undertaken without knowledge about how developers spend their time at work and how it influences their own perception of productivity. To fill in this gap, we deployed a monitoring application at 20 computers of professional software developers from four companies for an average of 11 full workdays in situ. Corroborating earlier findings, we found that developers spend their time on a wide variety of activities and switch regularly between them, resulting in highly fragmented work. Our findings extend beyond existing research in that we correlate developers’ work habits with perceived productivity and also show productivity is a personal matter. Although productivity is personal, developers can be roughly grouped into morning, low-at-lunch and afternoon people. A stepwise linear regression per participant revealed that more user input is most often associated with a positive, and emails, planned meetings and work unrelated websites with a negative perception of productivity. We discuss opportunities of our findings, the potential to predict high and low productivity and suggest design approaches to create better tool support for planning developers’ workdays and improving their personal productivity.
Feedback tools help people to monitor information about themselves to improve their health, sustainability practices, or personal well-being. Yet reasoning about personal data (e.g., pedometer counts, blood pressure readings, or home electricity consumption) to gain a deep understanding of your current practices and how to change can be challenging with the data alone. We integrate quantitative feedback data within a personal digital calendar; this approach aims to make the feedback data readily accessible and more comprehensible. We report on an eight-week field study of an on-calendar visualization tool. Results showed that a personal calendar can provide rich context for people to reason about their feedback data. The on-calendar visualization enabled people to quickly identify and reason about regular patterns and anomalies. Based on our results, we also derived a model of the behavior feedback process that extends existing technology adoption models. With that, we reflected on potential barriers for the ongoing use of feedback tools.
Personal informatics systems are tools that capture, aggregate and analyse data from distinct facets of their users' lives. This paper adopts a mixed methods approach to understand the problem of information overload in personal informatics systems. We report findings from a three-month study in which twenty participants collected multifaceted personal tracking data and used a system called 'Exist' to reveal statistical correlations within their data. We explore the challenges that participants faced in reviewing the information presented by Exist, and we identify characteristics that exemplify " interesting " correlations. Based on these findings, we develop automated filtering mechanisms that aim to prevent information overload and support users in extracting interesting insights. Our approach deals with information overload by reducing the number of correlations shown to users by ~55% on average, and increases the percentage of displayed correlations rated as interesting to ~81%, representing a 34 percentage point improvement over filters that only consider statistical significance at p<0.05. We demonstrate how this curation can be achieved using objective data harvested by the system, including the use of Google Trends data as a proxy for subjective user interest.
We developed the Model of Regulation to provide a vocabulary for comparing and analyzing collaboration practices and tools in software engineering. This paper discusses the model's ability to capture how individuals self-regulate their own tasks and activities, how they regulate one another, and how they achieve a shared understanding of project goals and tasks. Using the model, we created an "action-oriented" instrument that individuals, teams, and organizations can use to reflect on how they regulate their work and on the various tools they use as part of regulation. We applied this instrument to two industrial software projects, interviewing one or two stakeholders from each project. The model allowed us to identify where certain processes and communication channels worked well, while recognizing friction points, communication breakdowns, and regulation gaps. We believe this model also shows potential for application in other domains.
We offer a reflection on the technology usage for workplace quantification through an in the wild study. Using a prototype Quantified Workplace system equipped with passive and participatory sensing modalities, we collected and visualized different workplace metrics (noise, color, air quality, self reported mood, and self reported activity) in two European offices of a research organization for a period of 4 months. Next we surveyed 70 employees to understand their engagement experience with the system. We then conducted semi-structured interviews with 20 employees in which they explained which workplace metrics are useful and why, how they engage with the system and what privacy concerns they have. Our findings suggest that sense of inclusion acts as the initial incentive for engagement which gradually translates into a habitual routine. We found that incorporation of an anonymous participatory sensing aspect into the system could lead to sustained user engagement. Compared to past studies we observed a shift in the privacy concerns, due to the trust and transparency of our prototype system. We conclude by providing a set of design principles for building future Quantified Workplace systems.
Software developers use many different communication tools and channels in their work. The diversity of these tools has dramatically increased over the past decade and developers now have access to a wide range of socially enabled communication channels and social media to support their activities. The availability of such social tools is leading to a participatory culture of software development, where developers want to engage with, learn from, and co-create software with other developers. However, the interplay of these social channels, as well as the opportunities and challenges they may create when used together within this participatory development culture are not yet well understood. In this paper, we report on a large-scale survey conducted with 1,449 GitHub users. We discuss the channels these developers find essential to their work and gain an understanding of the challenges they face using them. Our findings lay the empirical foundation for providing recommendations to developers and tool designers on how to use and improve tools for software developers.