Siddharth Suri’s research while affiliated with Microsoft and other places


Publications (5)


Learning in the Repeated Secretary Problem
  • Article

August 2017 · 38 Reads · 1 Citation

R. Preston McAfee · Siddharth Suri · James R. Wright

In the classical secretary problem, one attempts to find the maximum of an unknown and unlearnable distribution through sequential search. In many real-world searches, however, distributions are not entirely unknown and can be learned through experience. To investigate learning in such a repeated secretary problem we conduct a large-scale behavioral experiment in which people search repeatedly from fixed distributions. In contrast to prior investigations that find no evidence for learning in the classical scenario, in the repeated setting we observe substantial learning resulting in near-optimal stopping behavior. We conduct a Bayesian comparison of multiple behavioral models which shows that participants' behavior is best described by a class of threshold-based models that contains the theoretically optimal strategy. Fitting such a threshold-based model to data reveals players' estimated thresholds to be surprisingly close to the optimal thresholds after only a small number of games.
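As background for the threshold-based models the abstract describes, the classical cutoff strategy (reject the first ~n/e candidates, then take the first one that beats them) can be sketched in a short simulation. The function names and the uniform-draw assumption below are illustrative, not taken from the paper:

```python
import random

def secretary(values, cutoff):
    """Classical cutoff rule: observe the first `cutoff` values,
    then accept the first value exceeding the best seen so far."""
    best_seen = max(values[:cutoff], default=float("-inf"))
    for v in values[cutoff:]:
        if v > best_seen:
            return v
    return values[-1]  # forced to accept the last candidate

def success_rate(n=20, trials=20000, seed=0):
    """Estimate how often the n/e cutoff rule picks the true maximum."""
    rng = random.Random(seed)
    cutoff = round(n / 2.718281828)  # ~n/e, optimal when the distribution is unknown
    wins = 0
    for _ in range(trials):
        values = [rng.random() for _ in range(n)]
        if secretary(values, cutoff) == max(values):
            wins += 1
    return wins / trials
```

For unknown distributions this rule succeeds with probability about 1/e; the paper's point is that with a *known* (learned) distribution, value-based thresholds can do substantially better.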


VoxPL: Programming with the Wisdom of the Crowd
  • Conference Paper
  • Full-text available

May 2017 · 132 Reads · 11 Citations

Having a crowd estimate a numeric value is the original inspiration for the notion of "the wisdom of the crowd." Quality control for such estimated values is challenging because prior, consensus-based approaches for quality control in labeling tasks are not applicable in estimation tasks. We present VoxPL, a high-level programming framework that automatically obtains high-quality crowdsourced estimates of values. The VoxPL domain-specific language lets programmers concisely specify complex estimation tasks with a desired level of confidence and budget. VoxPL's runtime system implements a novel quality control algorithm that automatically computes sample sizes and obtains high quality estimates from the crowd at low cost. To evaluate VoxPL, we implement four estimation applications, ranging from facial feature recognition to calorie counting. The resulting programs are concise (under 200 lines of code) and obtain high quality estimates from the crowd quickly and inexpensively.



The Crowd is a Collaborative Network

Figure 1. How users learned about each of the platforms studied.
Figure 2. How workers found our mapping HIT (n=4,856), which ran from April 23 to May 28, 2014.
Figure 3. How workers respond to instructions they don't understand across 4 platforms.

February 2016 · 805 Reads · 238 Citations

The main goal of this paper is to show that crowdworkers collaborate to fulfill technical and social needs left by the platform they work on. That is, crowdworkers are not the independent, autonomous workers they are often assumed to be, but instead work within a social network of other crowdworkers. Crowdworkers collaborate with members of their networks to 1) manage the administrative overhead associated with crowdwork, 2) find lucrative tasks and reputable employers, and 3) recreate the social connections and support often associated with brick-and-mortar work environments. Our evidence combines ethnography, interviews, survey data, and larger-scale data analysis from four crowdsourcing platforms, emphasizing the qualitative data from the Amazon Mechanical Turk (MTurk) platform and Microsoft's proprietary crowdsourcing platform, the Universal Human Relevance System (UHRS). This paper draws from an ongoing, longitudinal study of crowdwork that uses a mixed-methods approach to understand the cultural meaning, political implications, and ethical demands of crowdsourcing.


Accounting for Market Frictions and Power Asymmetries in Online Labor Markets

November 2015 · 215 Reads · 74 Citations · Policy & Internet

Amazon Mechanical Turk (AMT) is an online labor market that defines itself as "a marketplace for work that requires human intelligence." Early advocates and developers of crowdsourcing platforms argued that crowdsourcing tasks are designed so people of any skill level can do this labor online. However, as the popularity of crowdsourcing work has grown, the crowdsourcing literature has identified a peculiar issue: workers' output quality is not responsive to changes in price. This means that, contrary to what economic theory would predict, paying crowdworkers higher wages does not lead to higher-quality work. This has led some to believe that platforms like AMT attract poor-quality workers. This article examines different market dynamics that might, unwittingly, contribute to the inefficiencies in the market that generate poor work quality. We argue that the cultural logics and socioeconomic values embedded in AMT's platform design generate a greater amount of market power for requesters (those posting tasks) than for individuals doing tasks for pay (crowdworkers). We attribute the uneven distribution of market power among participants to labor market frictions, primarily characterized by uncompetitive wage posting and incomplete information. Finally, recommendations are made for how to tackle these frictions when contemplating the design of an online labor market.

Citations (4)


... In addition, a few papers study the human capacity to learn the right cutoff after reviewing multiple independent candidate sets [18,6]. However, [30] develops a non-cutoff-based strategy which is implemented regarding two distinct aims: to maximize the probability of selecting the ...

Reference:

The Warm-starting Sequential Selection Problem and its Multi-round Extension
Learning in the Repeated Secretary Problem
  • Citing Article
  • August 2017

... According to this concept, a group of people may be smarter than each of its individuals, and when collaborating, a group of people can achieve better results (both quantitative and qualitative) than several individuals working alone. This concept is the keystone of many websites, such as Wikipedia, Stack Exchange, and Yahoo! Answers, as well as systems such as eXo Platform [20], ShareLaTeX, and VoxPL [21]. ...

VoxPL: Programming with the Wisdom of the Crowd

... Previous research in the context of the gig economy, microwork, and crowdsourcing [38,61,71,77,78] has demonstrated that digital labor communities frequently engage in mutual support and collective action to confront challenges and exploitation from online platforms. Our findings reaffirm this collective labor, showing how creators, in attempts to mitigate the impact of shadowbanning, often rely on communal efforts to understand how to circumvent it. ...

The Crowd is a Collaborative Network

... OLMs are characterized by a monopsonistic power structure quantified through the measurement of labor supply elasticity, wherein requesters (i.e., employers) exert significant influence on market dynamics (Dube et al. 2020). In this context, Kingsley, Gray, and Suri (2015) argue that the market power of requesters may result from the design of OLM platforms, leading to information asymmetries during the labor process. In contrast, prior empirical findings indicate that crowd workers encounter pronounced wage disparities, potentially stemming from perceived uncertainty during task searches (Cantarella and Strozzi 2022) or attributable to their targeted-earning work patterns (Horton and Chilton 2010). ...

Accounting for Market Frictions and Power Asymmetries in Online Labor Markets
  • Citing Article
  • November 2015

Policy & Internet