Conference Paper

Fifth Annual Workshop on A/B Testing and Platform-Enabled Learning Research

Article
In this issue, Cantor and colleagues synthesize a broad representation of the literature on the science of learning, and how learning changes over the course of development. Their perspective highlights three important factors about the emerging field of science of learning and development: (1) that it draws insights from increasingly diverse fields of research inquiry, from neuroscience and social science to computer science and adversity science; (2) that it provides a means to understand principles that generalize across learners, and yet also allow individual differences in learning to emerge and inform; and (3) that it recognizes that learning occurs in context, and is thus a shared responsibility between the learner, the instructor, and the environment. Here I discuss how this complex systems dynamical perspective can be integrated with the emerging framework of ‘learning engineering’ to provide a blueprint for significant innovations in education.
Article
Background/Context
Large-scale randomized controlled experiments conducted in authentic learning environments are commonly high stakes, carrying extensive costs and requiring lengthy commitments for all-or-nothing results amidst many potential obstacles. Educational technologies harbor an untapped potential to provide researchers with access to extensive and diverse subject pools of students interacting with educational materials in authentic ways. These systems log extensive data on student performance that can be used to identify and leverage best practices in education and guide systemic policy change. Tomorrow's educational technologies should be built upon rigorous standards set forth by the research revolution budding today.

Purpose/Objective/Research Question/Focus of Study
The present work serves as a call to the community to infuse popular learning platforms with the capacity to support collaborative research at scale.

Research Design
This article defines how educational technologies can be leveraged for use in collaborative research environments by highlighting the research revolution of ASSISTments (www.ASSISTments.org), a popular online learning platform with a focus on mathematics education. A framework described as the cycle of perpetual evolution is presented, and research exemplifying progression through this framework is discussed in support of the many benefits that stem from infusing EdTech with collaborative research. Through a recent NSF grant (SI2-SSE&SSI: 1440753), researchers from around the world can leverage ASSISTments' content and user population by designing and implementing randomized controlled experiments within the ASSISTments TestBed (www.ASSISTmentsTestBed.org). Findings from these studies help to define best practices within technology-driven learning, while simultaneously allowing for augmentation of the system's content, delivery, and infrastructure.

Conclusions/Recommendations
Supplementing educational technologies with environments for sound, collaborative science can result in a broad range of benefits for students, researchers, platforms, and educational practice and policy. This article outlines the successful uptake of research efforts by ASSISTments in hopes of advocating a research revolution for other educational technologies.
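The TestBed model described above rests on randomized assignment of students to experimental conditions. As an illustrative sketch only (this is not ASSISTments' actual mechanism; the function and experiment names are hypothetical), deterministic hash-based bucketing gives each student a stable, roughly uniform condition across sessions:

```python
import hashlib

def assign_condition(user_id: str, experiment: str, conditions: list) -> str:
    """Hypothetical helper: hash the (experiment, user) pair so the same
    student always sees the same condition, with roughly uniform balance."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return conditions[int(digest, 16) % len(conditions)]

# The same student is assigned consistently on every visit.
first = assign_condition("student-42", "hints-vs-answers", ["control", "treatment"])
again = assign_condition("student-42", "hints-vs-answers", ["control", "treatment"])
assert first == again
```

Seeding the hash with the experiment name keeps assignments independent across concurrent experiments, so a student's bucket in one study does not predict their bucket in another.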
Conference Paper
Web-facing companies, including Amazon, eBay, Etsy, Facebook, Google, Groupon, Intuit, LinkedIn, Microsoft, Netflix, Shop Direct, StumbleUpon, Yahoo, and Zynga use online controlled experiments to guide product development and accelerate innovation. At Microsoft's Bing, the use of controlled experiments has grown exponentially over time, with over 200 concurrent experiments now running on any given day. Running experiments at large scale requires addressing multiple challenges in three areas: cultural/organizational, engineering, and trustworthiness. On the cultural and organizational front, the larger organization needs to learn the reasons for running controlled experiments and the tradeoffs between controlled experiments and other methods of evaluating ideas. We discuss why negative experiments, which degrade the user experience short term, should be run, given the learning value and long-term benefits. On the engineering side, we architected a highly scalable system, able to handle data at massive scale: hundreds of concurrent experiments, each containing millions of users. Classical testing and debugging techniques no longer apply when there are millions of live variants of the site, so alerts are used to identify issues rather than relying on heavy up-front testing. On the trustworthiness front, we have a high occurrence of false positives that we address, and we alert experimenters to statistical interactions between experiments. The Bing Experimentation System is credited with having accelerated innovation and increased annual revenues by hundreds of millions of dollars, by allowing us to find and focus on key ideas evaluated through thousands of controlled experiments. A 1% improvement to revenue equals $10M annually in the US, yet many ideas impact key metrics by 1% and are not well estimated a priori. The system has also identified many negative features that we avoided deploying, despite key stakeholders' early excitement, saving us similar large amounts.
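The scale described above rests on a simple statistical core: comparing a key metric between control and treatment and judging whether the difference is real. A minimal sketch, assuming a pooled two-proportion z-test on a binary metric such as conversion (this is an illustration, not Bing's actual analysis pipeline):

```python
import math

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int):
    """Two-sided two-proportion z-test with a pooled standard error.
    Returns (z, p_value) for the difference in rates between B and A."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # p-value from the standard normal CDF, Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# A 10% relative lift on a 1% base rate needs ~100k users per arm to detect.
z, p = two_proportion_z(1000, 100_000, 1100, 100_000)
```

This also illustrates why false positives loom large at this scale: with thousands of experiments run at a 5% threshold, dozens of null effects will clear the bar by chance alone.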
The job of a college president
  • Herbert A. Simon
UpGrade: An open source tool to support A/B testing in educational software
  • Steve Ritter
  • April Murphy
  • Stephen Fancsali
  • Derek Lomas