Todd Kulesza

Google Inc. · Engineering Department

PhD, Computer Science

About

23
Publications
8,808
Reads
1,848
Citations
Introduction
I'm a PhD Candidate at Oregon State University, advised by Margaret Burnett. My current research focuses on helping end users understand and control interactive machine learning systems.
Additional affiliations
September 2013 - December 2014
Oregon State University
Position
  • Research Assistant
Description
  • Teaching assistant for Data Structures (CS 261) and Usability Engineering (CS 352).
June 2013 - September 2013
Microsoft Research, Washington, USA
Position
  • Research Intern
September 2007 - September 2013
Oregon State University
Position
  • Research Assistant
Education
January 2010 - December 2014
Oregon State University
Field of study
  • Computer Science
September 2007 - December 2009
Oregon State University
Field of study
  • Computer Science
September 2001 - December 2005
Oakland University
Field of study
  • Computer Science and Engineering; History

Publications

Article
Systems that can learn interactively from their end-users are quickly becoming widespread. Until recently, this progress has been fueled mostly by advances in machine learning; however, more and more researchers are realizing the importance of studying users of these systems. In this article we promote this approach and demonstrate how it can resul...
Conference Paper
Full-text available
Labeling data is a seemingly simple task required for training many machine learning systems, but is actually fraught with problems. This paper introduces the notion of concept evolution, the changing nature of a person's underlying concept (the abstract notion of the target class a person is labeling for, e.g., spam email, travel related web pages...
Article
Full-text available
How do you test a program when only a single user, with no expertise in software testing, is able to determine if the program is performing correctly? Such programs are common today in the form of machine-learned classifiers. We consider the problem of testing this common kind of machine-generated program when the only oracle is an end user: e.g.,...
Conference Paper
Full-text available
What does a user need to know to productively work with an intelligent agent? Intelligent agents and recommender systems are gaining widespread use, potentially creating a need for end users to understand how these systems operate in order to fix their agent's personalized behavior. This paper explores the effects of mental model soundness on such...
Article
Full-text available
Machine learning techniques are increasingly used in intelligent assistants, that is, software targeted at and continuously adapting to assist end users with email, shopping, and other tasks. Examples include desktop spam filters, recommender systems, and handwriting recognition. Fixing such intelligent assistants when they learn incorrect behavior...
Chapter
One area of research in the end-user development area is known as end-user software engineering (EUSE). Research in EUSE aims to invent new kinds of technologies that collaborate with end users to improve the quality of their software. EUSE has become an active research area since its birth in the early 2000s, with a large body of literature upon w...
Conference Paper
Full-text available
Machine learning is one of the most important and successful techniques in contemporary computer science. It involves the statistical inference of models (such as classifiers) from data. It is often conceived in a very impersonal way, with algorithms working autonomously on passively collected data. However, this viewpoint hides considerable human...
Conference Paper
Full-text available
How can end users efficiently influence the predictions that machine learning systems make on their behalf? This paper presents Explanatory Debugging, an approach in which the system explains to users how it made each of its predictions, and the user then explains any necessary corrections back to the learning system. We present the principles unde...
Conference Paper
Full-text available
Research is emerging on how end users can correct mistakes their intelligent agents make, but before users can correctly "debug" an intelligent agent, they need some degree of understanding of how it works. In this paper we consider ways intelligent agents should explain themselves to end users, especially focusing on how the soundness and complete...
Conference Paper
Many applications of Machine Learning (ML) involve interactions with humans. Humans may provide input to a learning algorithm (in the form of labels, demonstrations, corrections, rankings or evaluations) while observing its outputs (in the form of feedback, predictions or executions). Although humans are an integral part of the learning process, tr...
Article
Intelligent agents are becoming ubiquitous in the lives of users, but the research community has only recently begun to study how people establish trust in and communicate with such agents. I plan to design an explanation-centric approach to support end users in personalizing their intelligent agents and in assessing their strengths and weaknesses....
Article
Full-text available
Recent computer vision approaches are aimed at richer image interpretations that extend the standard recognition of objects in images (e.g., cars) to also recognize object attributes (e.g., cylindrical, has-stripes, wet). However, the more idiosyncratic and abstract the notion of an object attribute (e.g., cool car), the more challenging the task o...
Conference Paper
Full-text available
Intelligent assistants sometimes handle tasks too important to be trusted implicitly. End users can establish trust via systematic assessment, but such assessment is costly. This paper investigates whether, when, and how bringing a small crowd of end users to bear on the assessment of an intelligent assistant is useful from a cost/benefit perspecti...
Conference Paper
Full-text available
Intelligent assistants are handling increasingly critical tasks, but until now, end users have had no way to systematically assess where their assistants make mistakes. For some intelligent assistants, this is a serious problem: if the assistant is doing work that is important, such as assisting with qualitative research or monitoring an elderly pa...
Conference Paper
Full-text available
Many machine-learning algorithms learn rules of behavior from individual end users, such as task-oriented desktop organizers and handwriting recognizers. These rules form a generated “program” tailored specifically to the behaviors of that end user, telling the computer what to do when future inputs arrive. Researchers, however, have only recently...
Conference Paper
Full-text available
Many machine-learning algorithms learn rules of behavior from individual end users, such as task-oriented desktop organizers and handwriting recognizers. These rules form a “program” that tells the computer what to do when future inputs arrive. Little research has explored how an end user can debug these programs when they make mistakes. We present...
Article
Full-text available
End-user programmers may not be aware of many software engineering practices that would add greater discipline to their efforts, and even if they are aware of them, these practices may seem too costly (in terms of time) to use. Without taking advantage of at least some of these practices, the software these end users create seems likely to continue...
Conference Paper
Full-text available
The results of machine learning from user behavior can be thought of as a program, and like all programs, it may need to be debugged. Providing ways for the user to debug it matters, because without the ability to fix errors, users may find that the learned program's errors are too damaging for them to be able to trust such programs. We present a...
Conference Paper
Full-text available
Recent research has begun to report that female end-user programmers are often more reluctant than males to employ features that are useful for testing and debugging. These earlier findings suggest that, unless such features can be changed in some appropriate way, there are likely to be important gender differences in end-user programmers' bene...
Article
Full-text available
Intelligent user interfaces, such as recommender systems and email classifiers, use machine learning algorithms to customize their behavior to the preferences of an end user. Although these learning systems are somewhat reliable, they are not perfectly accurate. Traditionally, end users who need to correct these learning systems can only provide mo...
Article
Full-text available
Pervasive systems for end users are becoming mainstream, yet ways to make them transparent and controllable by users are still in their infancy. In this position paper we describe our work with other kinds of intelligent systems to make them intelligible and adaptable by end users. Our results could hold useful lessons for pervasive systems to bette...
Article
Graduation date: 2010. The results of machine learning from user behavior can be thought of as a program, and like all programs, it may need to be debugged. Providing ways for the user to debug it matters because without the ability to fix errors, users may find that the learned program's errors are too damaging for them to be able to trust such p...
