Project

Moocita and the Work-Learn Balance

Goal: Over the past decades, classic work environments have changed rapidly from manual labor to cognitively demanding jobs. We see the societal impact of this development in the socio-economic changes in cities with declining heavy industry, such as Chicago (USA) or Manchester (UK). In recent years, artificial intelligence has also begun to take over tasks such as driving cabs, grading essays, or tracking medication. This illustrates that we need an educational system that allows workers to transition from less knowledge-intensive labor to high-skill jobs.
Online courses offer a scalable model for education in which students from all over the world can gain access to new skills. While online courses cover topics relevant to job growth in sectors such as data science and software engineering, courses alone do not adequately prepare students to find gainful employment, nor do they attract the students who could benefit most from affordable higher education: 60% of students in online education already hold at least a bachelor’s degree.
The Moocita project investigates a new model for the work-learn relationship that ultimately enables students to earn money while taking courses. The model also helps students find jobs that match their skill sets and personalities. The project explores three aspects of this new work-learn model: 1) increasing the authenticity of assignments, 2) improving the learning experience and outcomes, and 3) effectively matching students to real-world teams and jobs. You can find more information on our project and the results of our research at www.moocita.com.

Date: 1 June 2016

Updates: 2
Recommendations: 1
Followers: 13
Reads: 204

Project log

Margeret Hall
added a research item
Crowdsourcing and human computation have slowly become a mainstay for many application areas that seek to leverage the crowd in the development of high-quality datasets, annotations, and problem solving beyond the reach of current AI solutions. One of the major challenges in the domain is ensuring high-quality and diligent work. In response, the literature has seen a large number of quality control mechanisms, each voicing (sometimes domain-specific) benefits and advantages when deployed in large-scale human computation projects. This creates a complex design space for practitioners: it is not always clear which mechanism(s) to use for maximal quality control. In this article, we argue that this decision is perhaps overinflated and that, provided there is “some kind” of quality control that is clearly visible to crowd workers, this is sufficient for “high-quality” solutions. To evidence this, and to provide a basis for discussion, we undertake two experiments in which we explore the relationship between task design, task complexity, quality control, and solution quality. We do this with tasks from natural language processing and image recognition of varying complexity. We illustrate that minimal quality control is enough to repel constantly underperforming contributors and that this effect is constant across tasks of varying complexity and formats. Our key takeaway: quality control is necessary, but how it is implemented seemingly is not.
Margeret Hall
added a research item
Massive Open Online Courses (MOOCs) are a scalable technology for upskilling but have primarily been successful with highly resourced and well-educated populations. In spite of their low barriers, people experiencing homelessness have not generally benefited from MOOCs. Work-Learn is a new model which uses action design research to provide individuals experiencing homelessness with the skills and scaffolding that will enable them to benefit from the digital economy. This Work in Progress reports the results of interviews and focus groups focusing on minimum hiring requirements, conducted with eight individuals who are in senior leadership, hiring managers, or training for web development or enterprise computing functions. Preliminary results suggest that, once a baseline aptitude has been established, person-position fit is the critical aspect to consider in instances of homelessness. The mainframe community may represent a unique opportunity, due in part to its need for new talent.
Margeret Hall
added 2 research items
Intrinsic to the transition towards, and necessary for the success of, digital platforms as a service (at scale) is the notion of human computation. Going beyond ‘the wisdom of the crowd’, human computation is the engine that powers now-ubiquitous platforms and services like Duolingo and Wikipedia. In spite of increasing research and popular interest, several issues remain open and in debate on large-scale human computation projects. Quality control is first among these discussions. We conducted an experiment with three different tasks of varying complexity and five different methods to distinguish and protect against constantly underperforming contributors. We illustrate that minimal quality control is enough to repel constantly underperforming contributors and that this effect is constant across tasks of varying complexity.
The Work-Learn project addresses three novel gaps in IS research: generalizability of online learning tools; tech workforce development; and how IS research can contribute to the emergent trans-disciplinary focus on empirical validation in social justice programming. Disparities in accessing online education are an example of how adults experiencing homelessness do not benefit from the opportunities of the digital age to the same degree as better-resourced communities. Economic, social, and educational disparities reinforce, and are reinforced by, a lack of access to and knowledge about new information technologies. Simply connecting underserved communities to the Internet is insufficient; to realize equitable opportunities, learners need additional infrastructure and incentives to support their ability to leverage digital opportunities. Work-Learn provides these supports and interlocks the learning-by-doing aspects of apprenticeships as part of a public-private partnership. Learners benefit from targeted financial remuneration, akin to paid crowdwork, to support extrinsic motivation. The foci are on theoretical and empirical connections between online education (MOOCs) and job opportunities, and on reducing the financial gap between lesser-resourced communities and traditional learners by providing students with an income during their studies. We hypothesize that the extrinsic value-add of paid learning will buttress learners' intrinsic motivation of gaining financially stable careers. The research effort behind this hypothesis has value to the research community regardless of a positive or negative finding; if our hypothesis is supported, the findings could inform upskilling programs broadly speaking.
If our data are consistent with the global research findings that MOOCs are a beneficial tool for <10% of the population, more fundamental research into the design and delivery of these platforms is required, as it would suggest that the specific affordances of MOOCs are likely driving their low completion rates. Work-Learn is designed to generate rich formative and summative data on how the model can support underserved learners, and on how public-private partnerships can help marginalized individuals retrain for jobs in IT. Participants will be interviewed at the beginning and end of the modules to obtain demographic information and clarification about prior workforce, housing, and computing experiences. Interviews ensure that learners understand the assignments, with interviewers able to ask probing and clarifying questions to fully explore participants' expectations and experiences with the project. The MOOC will be outfitted with learning analytics capacities to collect data on participants as they interact with learning materials, assignments, and assessments. This will help identify which material may not be challenging enough and where participants are struggling to succeed, which will inform curricular revisions. Interviews with hiring managers at industry partners will elucidate successes and barriers in the hiring and retention of these newly-minted apprentices, whose education and job history would typically otherwise have precluded inclusion.
Markus Krause
added a research item
We present a case study of Mooqita, a platform to support learners in online courses by enabling them to earn money, gather real job task experience, and build a meaningful portfolio. This includes placing optional additional assignments in online courses. Learners solve these individual assignments, provide peer reviews for other learners, and give feedback on each review they receive. Based on these data points, teams are selected to work on a final paid assignment. Companies offer these assignments and in return receive interview recommendations from the pool of learners, together with solutions for their challenges. We report the results of a pilot deployment in an online programming course offered by UC BerkeleyX. Six learners out of 158 participants were selected for the paid group assignment, which paid $600 per person. Four of these six were invited for interviews at the participating companies Crowdbotics (2) and Telefonica Innovation Alpha (2).
Margeret Hall
added a research item
Mooqita creates, delivers, and proctors a massive open online course (MOOC) in conjunction with a regional homeless shelter and local industry for people currently experiencing homelessness. It is estimated that 25% of the homeless are working or have worked within the past quarter. Additionally, around 30% of the homeless in the metro area have previously served in the armed forces. Some of the homeless are thus employed or employable; still, these numbers indicate that the local homeless population suffers from a basic mismatch between its skills and the local labor market. Mooqita provides an ideal match because local companies face a shortage of workers, while potential employees struggle with a variety of challenges that may interfere with gaining the appropriate training and experience required by those employers. By designing the program specifically for this population, we meet their needs for training and valuable job skills in a uniquely challenging situation. The self-paced course will concentrate on three locally in-demand technology skills. The value of this approach to learners is three-fold. Learners will learn by doing, with practical, near-to-real-world challenges. They will graduate from the MOOC with in-demand skillsets. Finally, tying learning outcomes to small payments incentivizes course completion while learners gain financial stability. Local industry will also benefit from a stronger workforce pipeline trained to its specific needs.
Markus Krause
added an update
As our first challenge is now complete, I think it is time for an update. We had a total of 158 students on our website for the experiment. 88 (~55%) of these students started the challenge, 14 (9%) submitted a solution, and 12 (8%) of the solutions were correct (solved the challenge).
The average peer review rating for solutions is 3.2 (SD=1.8) out of 5. We saw the full spectrum of ratings (1 to 5). The average solution length is 270 words plus source code. More important for us, however, are the reviews. The average rating of reviews is 4.3 out of 5! Only one review was rated below 3. The average review length is 180 words. Reading the reviews, I was amazed by the quality. Almost all were extremely well written, especially taking into account that we only had two native speakers. The reviews were polite, yet candid, and helpful. This was also acknowledged in the feedback we collected on the reviews. The average feedback length is 80 words.
The second challenge has already started, and we see the same trend. If it continues, we think we will have a spot for each student in the paid group assignment. If the numbers stabilize, the payment per student will be ~$300, with 12 participating students.
 
Markus Krause
added 2 research items
Massive Open Online Courses (MOOCs) aim to educate the world, especially learners from developing countries. While MOOCs are certainly available to the masses, they are not yet fully accessible. Although all course content is just clicks away, deeply engaging with a MOOC requires a substantial time commitment, which frequently becomes a barrier to success. To mitigate the time required to learn from a MOOC, we here introduce a design that enables learners to earn money by applying what they learn in the course to real-world marketplace tasks. We present a Paid Task Recommender System (Rec-$ys), which automatically recommends course-relevant tasks to learners as drawn from online freelance platforms. Rec-$ys has been deployed into a data analysis MOOC and is currently under evaluation.
Markus Krause
added a research item
Workers' level of skill on online labor platforms varies greatly. To manage quality, requesters often decompose complex jobs into simple, repetitive, low-pay micro-tasks. Requesters get the jobs done, but workers have little opportunity to learn and prepare for more demanding and better-paid jobs. This paper explores how we can apply learning science to crowd work through different scaffolding approaches (examples, rubrics, task rationale, and step-by-step instructions) so that workers can learn on the job. In a between-subjects study, novice workers from a low-paying task market performed tasks (writing product reviews and designing a slide deck) selected from a high-pay contractor market. Participants received either the original or a scaffolded task description. Blind-to-condition experts judged the performance of submissions. We found that scaffolding the crowd led low-pay micro-task workers to perform significantly better than workers without scaffolding. Moreover, a follow-up analysis shows that scaffolding low-pay micro-task workers results in work that is on par with that of workers from the high-pay contractor platform. We discuss how crowdsourcing infrastructures can implement such on-the-job learning methods and thereby enable workers to transition from novice to more expert roles.
Markus Krause
added 2 research items
Ensuring authorship in exams taken online is a major challenge for e-learning in general and MOOCs in particular. In this paper, we introduce and evaluate a method to verify student identities using stylometry. We present a carefully composed feature set and use it with a K-Nearest Neighbor algorithm. We demonstrate that our method can effectively authenticate authors and is robust against imitation attacks.
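The abstract above names the ingredients (a stylometric feature set plus K-Nearest Neighbor) but not the concrete features. As a rough, hedged illustration of the general approach only, a minimal sketch might use character trigram counts and cosine similarity with a single nearest neighbor; the `nearest_author` helper and the trigram choice are assumptions for this sketch, not the paper's actual method.

```python
from collections import Counter
import math

def char_ngrams(text, n=3):
    """Character n-gram counts, a standard stylometric feature."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(c * b.get(g, 0) for g, c in a.items())
    na = math.sqrt(sum(c * c for c in a.values()))
    nb = math.sqrt(sum(c * c for c in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def nearest_author(submission, known_samples):
    """1-NN over style vectors: return the author of the most similar
    known sample. Verification then means checking that this matches
    the claimed student identity."""
    sub = char_ngrams(submission)
    best = max(known_samples,
               key=lambda s: cosine(sub, char_ngrams(s["text"])))
    return best["author"]
```

A verifier built on this sketch would accept a submission only when `nearest_author` agrees with the claimed identity; the paper presumably uses K > 1 neighbors and a richer, carefully composed feature set.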
Markus Krause
added 2 research items
Fraud detection in free and natural text submissions is a major challenge for educators in general. It is even more challenging to detect plagiarism at scale and in online classes such as Massive Open Online Courses. In this paper, we introduce a novel method that analyses the writing style of an author (stylometry) to identify plagiarism. We will show that our system scales to thousands of submissions and students. For a given test set of ~4000 users our algorithm shows F-scores of over 90%.
Many MOOCs report high drop-off rates for their students. Among the factors reportedly contributing to this picture are a lack of motivation, feelings of isolation, and a lack of interactivity in MOOCs. This paper investigates the potential of gamification with social game elements for increasing retention and learning success. Students in our experiment showed a significant increase of 25% in retention period (videos watched) and 23% higher average scores when the course interface was gamified. Social game elements amplify this effect significantly: students in this condition showed an increase of 50% in retention period and 40% higher average test scores.
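The plagiarism-detection abstract above reports F-scores of over 90%. For readers unfamiliar with the metric, a minimal computation under the standard definition follows; the counts used in the test are hypothetical, not figures from the paper.

```python
def f_score(tp, fp, fn, beta=1.0):
    """F_beta score from raw counts; beta=1 gives the familiar F1,
    the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

For example, a detector with 90 true plagiarism flags, 8 false alarms, and 10 misses lands in the "over 90%" range the abstract reports.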
Markus Krause
added 2 research items
Email is a standard and popular means of establishing potential business relationships between salespeople and future customers. Therefore, predicting whether an email will capture a prospect's attention and elicit a response is an important problem for marketers and salespeople alike. In this paper, we propose a natural language model that predicts whether a sales email will get a response from an unsolicited audience. We test our algorithm with a set of 116 outbound sales emails used in practice. Our algorithm successfully predicts whether an email in this set received a response, with an F1 score of 0.81.
Peer reviews are a viable method for handling grading in large online classes such as MOOCs, and for ensuring response quality in crowdsourcing. On the other hand, poorly written reviews can significantly harm student and contributor satisfaction. We present a method based on our Natural Language Model to provide feedback for reviewers. We report the results of a user study that illustrates the positive effects of our method on perceived review quality. In our experiment, our language-based method increases perceived quality by almost 20%.
Markus Krause
added a research item
Designers are increasingly leveraging online crowds; yet, online contributors may lack the expertise, context, and sensitivity to provide effective critique. Rubrics help feedback providers but require domain experts to write them and may not generalize across design domains. This paper introduces and tests a novel semi-automated method to support feedback providers by analysing feedback language. In our first study, 52 students from two design courses created design solutions and received feedback from 176 online providers. Instructors, students, and crowd contributors rated the helpfulness of each feedback response. From these data, an algorithm extracted a set of natural language features (e.g., specificity, sentiment) that correlated with the ratings. The features accurately predicted the ratings and remained stable across different raters and design solutions. Based on these features, we produced a critique style guide with feedback examples automatically selected for each feature to help providers revise their feedback through self-assessment. In a second study, we tested the validity of the guide through a between-subjects experiment (n=50). Providers wrote feedback on design solutions with or without the guide. Providers generated feedback with higher perceived helpfulness when using our style-based guidance.
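The study's actual features were derived from rated feedback, so the following is purely an illustrative sketch of what extracting simple language features from a critique could look like; the lexicons and feature names here are invented, not the paper's.

```python
import re

# Invented, illustrative word lists; the study derived its features
# empirically from rated feedback rather than hand-picking lexicons.
POSITIVE = {"great", "good", "love", "nice", "clear", "works"}
NEGATIVE = {"bad", "confusing", "cluttered", "unclear", "broken"}
SPECIFICITY_CUES = {"because", "instead", "specifically", "example", "consider"}

def feedback_features(text):
    """Map a feedback response to a few crude language features."""
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)
    return {
        "length": len(words),
        "sentiment": (sum(w in POSITIVE for w in words)
                      - sum(w in NEGATIVE for w in words)) / n,
        "specificity": sum(w in SPECIFICITY_CUES for w in words) / n,
        "questions": text.count("?"),
    }
```

Feature vectors like these could then be correlated with helpfulness ratings, which is the general shape of the analysis the abstract describes.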
Markus Krause
added an update
We are happy to announce that we now have a website about our research: www.moocita.org
 
Markus Krause
added 5 research items
Massive Open Online Courses (MOOCs) aim to educate the world. More often than not, however, MOOCs fall short of this goal: a majority of learners are already highly educated (holding a bachelor's degree or more) and come from specific parts of the (developed) world. Learners from developing countries without a higher degree are underrepresented, though desired, in MOOCs. One reason for those learners to drop out of a course can be found in their financial realities and the subsequently limited amount of time they can dedicate to a course besides earning a living. If we could pay learners to take a MOOC, this hurdle would largely disappear. With MOOCs, this leads to the following fundamental challenge: how can learners be paid at scale? Ultimately, we envision a recommendation engine that recommends course-relevant tasks from online marketplaces such as Upwork or witmart to learners. In this manner, learners learn and earn money. To investigate the feasibility of this vision, in this paper we explored to what extent (1) online marketplaces contain tasks relevant to a specific MOOC, and (2) learners are able to solve real-world tasks correctly and with sufficient quality. Finally, based on our experimental design, we were also able to investigate the impact of real-world bonus tasks in a MOOC on the general learner population.
The first AAAI Conference on Human Computation and Crowdsourcing (HCOMP-2013) was held November 6-9, 2013, in Palm Springs, California. Three workshops took place on Saturday, November 9: Crowdsourcing at Scale (full day), Disco: Human and Machine Learning in Games (full day), and Scaling Speech, Language Understanding, and Dialogue through Crowdsourcing (half day). This report summarizes the activities of those three events.
Human behavior increasingly involves digital online software, where the activities and resources that support (1) learning, (2) work, and (3) collaboration overlap and are placed in far greater proximity than the physical world -- often just a browser-tab or window away. What scientific and practical gains in 21st century learning, work, and collaboration can be achieved by integrating and contrasting these three areas' relevant technologies, scientific communities, and industry practitioners? For example: How can software for collaborative work incorporate learning? Which methods are effective for coordinating diverse experts to iteratively improve online educational resources? How can online learning improve the skill set and labor force for crowd work? What kinds of computational frameworks exist to jointly optimize the learning of skills and the use of these skills to achieve practical goals? This workshop tackles such questions by bringing together participants from industry (e.g., platforms similar to Odesk, Amazon Mechanical Turk); education, psychology, and MOOCs (e.g., attendees of AERA, EDM, AIED, Learning at Scale); crowdsourcing and collaborative work (e.g., attendees of CHI, CSCW, NIPS, AAAI's HCOMP).
Margeret Hall
added a project reference
Markus Krause
added a project goal