Conference Paper

Subversive Learning Analytics


Abstract

This paper puts forth the idea of a subversive stance on learning analytics as a theoretically grounded means of engaging with issues of power and equity in education and the ways in which they interact with the use of data on learning processes. The concept draws on efforts from fields such as socio-technical systems and critical race studies, which have a long history of examining the role of data in issues of race, gender, and class. To illustrate the value such a stance offers the field of learning analytics, we provide examples of how taking a subversive perspective can help us identify tacit assumptions-in-practice, ask generative questions about our design processes, and consider new modes of creation to produce tools that operate differently in the world.


... To provide assistance that would improve students' learning by offering adequate teacher-support mechanisms, some LA researchers argue for the need of a "subversive" LA that aims to grapple with the ramifications of LA research efforts and critically engage with the ways in which power, race, gender, and class influence and are influenced by LA work (Wise, Sarmiento, & Boothe, 2021). This becomes especially critical in the context of the recent turn to online learning worldwide, which has contributed to the considerably increased quantity and also kinds of data being generated about students and by students at different levels of education. ...

... This becomes especially critical in the context of the recent turn to online learning worldwide, which has contributed to the considerably increased quantity and also kinds of data being generated about students and by students at different levels of education. This, in turn, requires LA researchers and practitioners to address questions about bias, equity, surveillance, ownership (of data), control, and agency in LA use (Wise et al., 2021). ...

... Hence, a coherent understanding requires careful analysis of learning contexts and actors so as to be able to address critical questions about (algorithmic) bias, equity, surveillance, ownership (of data), control, and agency in LA design and use (Wise et al., 2021). Contextual understanding may also require taking into account qualitative data, as yet largely unexplored by the LA community; for example, cultural differences (e.g., in terms of power distance) that may influence the adoption and the effectiveness of LA interventions (see, e.g., Davis et al., 2017; Kizilcec & Cohen, 2017). ...
Preprint
Full-text available
Learning analytics (LA) is argued to improve learning outcomes, learner support, and teaching. However, despite the ever-expanding amount of (digital) student data accessible from various online education and learning platforms, the growing interest in LA worldwide, and the considerable research efforts already made, there is still little empirical evidence of impact on practice that shows the effectiveness of LA in education settings. Based on a selection of theoretical and empirical research, this chapter provides a critical discussion of the possibilities of collecting and using student data, as well as the barriers and challenges to overcome in providing data-informed support to educators' everyday teaching practices. We argue that in order to increase the impact of data-driven decision-making aimed at students' improved learning in education at scale, we need to better understand educators' needs, their teaching practices, and the context in which these practices occur, as well as how to support them in developing the relevant knowledge, strategies, and skills to facilitate the data-informed digitalization of education. Source: https://arxiv.org/abs/2105.06680 & a published version at https://www.routledge.com/Online-Learning-Analytics/Liebowitz/p/book/9781032047775
... Articulating the problem also allows us to ask second-order questions about what kinds of learning problems get addressed and who sets this agenda (Wise, Sarmiento, & Boothe, 2021). For example, the framing of "identifying and supporting struggling students" implies a situation in which changing the actions of the identified students is the best means to improve a problematic success rate, rather than considering changes to the material or curriculum that might make a course more accessible or inviting to particular groups of students. ...
Article
Full-text available
The ongoing changes and challenges brought on by the COVID-19 pandemic have exacerbated long-standing inequities in education, leading many to question basic assumptions about how learning can best benefit all students. Thirst for data about learning is at an all-time high, sometimes without commensurate attention to ensuring principles this community has long valued: privacy, transparency, openness, accountability, and fairness. How we navigate this dynamic context is critical for the future of learning analytics. Thinking about the issue through the lens of JLA publications over the last eight years, we highlight the important contributions of “problem-centric” rather than “tool-centric” research. We also value attention (proximal or distal) to the eventual goal of closing the loop, connecting the results of our analyses back to improve the learning from which they were drawn. Finally, we recognize the power of cycles of maturation: using information generated about real-world uses and impacts of a learning analytics tool to guide new iterations of data, analysis, and intervention design. A critical element of context for such work is that the learning problems we identify and choose to work on are never blank slates; they embed societal structures, reflect the influence of past technologies, and have previous enablers, barriers, and social mediation acting on them. In that context, we must ask the hard questions: What parts of existing systems is our work challenging? What parts is it reinforcing? Do these effects, intentional or not, align with our values and beliefs? In the end, what makes learning analytics matter is our ability to contribute to progress on both immediate and long-standing challenges in learning, not only improving current systems, but also considering alternatives for what is and what could be.
This requires including stakeholder voices in tackling important problems of learning with rigorous analytic approaches to promote equitable learning across contexts. This journal provides a central space for the discussion of such issues, acting as a venue for the whole community to share research, practice, data and tools across the learning analytics cycle in pursuit of these goals.
... These practices may reinforce low expectations for certain student groups, such as exceptional students or English language learners. Wise et al. (2021) proposed alternative stances in learning analytics, such as prompting users to develop counter-narratives that reflect on and recognize the different strengths of minoritized groups. We call for future research to consider how to detect and design for question-asking that is generative, as opposed to exacerbating biases. ...
Chapter
The proliferation of learning dashboards in K–12 education calls for deeper knowledge of how such tools fit into existing data use routines in schools. An exemplar routine is coaching cycles, where teachers and experienced coaches collaborate on using data to improve student learning. Within this context, our mixed-method study draws from interviews and think-aloud sessions about dashboard visualizations, conducted with teachers and instructional coaches from four school districts in the United States. Our analyses illuminate how different professional roles express varied patterns of response when facing LA dashboards. The analyses uncover a particular pattern of asking questions to resolve uncertainty that leads to further reflection and action. We discuss how uncertainty towards data and visualizations can be productive for teacher learning and the implications of designing for uncertainty in dashboards. Mapping LA dashboards to educators’ daily routines is important to promote the uptake of analytics towards improving instructional practices.
Preprint
Learning analytics (LA) provides data-driven feedback that aims to improve learning and inform action. For learners, LA-based feedback may scaffold self-regulated learning skills, which are crucial to learning success. For teachers, LA-based feedback may help the evaluation of teaching effects and the need for interventions. However, the current development of LA has presented problems related to the cognitive, social-affective, and structural dimensions of feedback. In light of this, this position paper argues that attention needs to shift from the design of LA as a feedback product to one that facilitates a process in which both teachers and students play active roles in meaning-making. To this end, implications for feedback literacy in the context of LA are discussed.
Article
There is a huge and growing amount of data that is already captured in the many, diverse digital tools that support learning. Additionally, learning data is often inaccessible to teachers or served in a manner that fails to support or inform their teaching and design practice. We need systematic, learner-centred ways for teachers to design learning data that supports them. Drawing on decades of Artificial Intelligence in Education (AIED) research, we show how to make use of important AIED concepts: (1) learner models; (2) Open Learner Models (OLMs); (3) scrutability and (4) Ontologies. We show how these concepts can be used in the design of OLMs, interfaces that enable a learner to see and interact with an externalised representation of their learning progress. We extend this important work by demonstrating how OLMs can also drive a learner-centred design process of learning data. We draw on the work of Biggs on constructive alignment (Biggs, 1996, 1999, 2011), which has been so influential in education. Like Biggs, we propose a way for teachers to design the learning data in their subjects and we illustrate the approach with case studies. We show how teachers can use this approach today, essentially integrating the design of learning data along with the learning design for their subjects. We outline a research agenda for designing the collection of richer learning data. There are three core contributions of this paper. First, we present the terms OLM, learner model, scrutability and ontologies, as thinking tools for systematic design of learning data. Second, we show how to integrate this into the design and refinement of a subject. Finally, we present a research agenda for making this process both easier and more powerful.
Article
This exploratory paper highlights how problem‐based learning (PBL) provided the pedagogical framework used to design and interpret learning analytics from Crystal Island: EcoJourneys, a collaborative game‐based learning environment centred on supporting science inquiry. In Crystal Island: EcoJourneys, students work in teams of four, investigate the problem individually and then utilize a brainstorming board, an in‐game PBL whiteboard that structured the collaborative inquiry process. The paper addresses a central question: how can PBL support the interpretation of the observed patterns in individual actions and collaborative interactions in the collaborative game‐based learning environment? Drawing on a mixed method approach, we first analyzed students' pre‐ and post‐test results to determine if there were learning gains. We then used principal component analysis (PCA) to describe the patterns in game interaction data and clustered students based on the PCA. Based on the pre‐ and post‐test results and PCA clusters, we used interaction analysis to understand how collaborative interactions unfolded across selected groups. Results showed that students learned the targeted content after engaging with the game‐based learning environment. Clusters based on the PCA revealed four main ways of engaging in the game‐based learning environment: students engaged in low to moderate self‐directed actions with (1) high and (2) moderate collaborative sense‐making actions, (3) low self‐directed with low collaborative sense‐making actions and (4) high self‐directed actions with low collaborative sense‐making actions. 
Qualitative interaction analysis revealed that a key difference among the four groups in each cluster was the nature of verbal student discourse: students in the low to moderate self‐directed and high collaborative sense‐making cluster actively initiated discussions and integrated information they learned into the problem, whereas students in the other clusters required more support. These findings have implications for designing adaptive support that responds to students' interactions with in‐game activities.

Practitioner notes

What is already known about this topic: Learning analytic methods have been effective for understanding student learning interactions for the purposes of assessment, profiling student behaviour, and evaluating the effectiveness of interventions. However, the interpretation of analytics from these diverse data sets is not always grounded in theory, and the challenges of interpreting student data are further compounded in collaborative inquiry settings, where students work in groups to solve a problem.

What this paper adds: Problem‐based learning as a pedagogical framework allowed the design to focus on individual and collaborative actions in a game‐based learning environment and, in turn, informed the interpretation of game‐based analytics as it relates to students' self‐directed learning in their individual investigations and collaborative inquiry discussions. The combination of principal component analysis and qualitative interaction analysis was critical in understanding the nuances of student collaborative inquiry.

Implications for practice and/or policy: Self‐directed actions in individual investigations are critical steps toward collaborative inquiry, but students may need to be encouraged to engage in these actions. Clustering student data can inform which scaffolds to deliver to support both self‐directed learning and collaborative inquiry interactions. All students can engage in knowledge‐integration discourse, but some students may need more direct support from teachers to achieve this.
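The PCA-then-cluster step this abstract describes can be sketched in a few lines of numpy. Everything below is illustrative: the feature columns, the student count, and k = 4 are invented stand-ins for the paper's actual interaction data, and a real analysis would use a vetted library such as scikit-learn.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-student interaction counts; the four columns are invented
# stand-ins (e.g., self-directed actions, sense-making posts, chat messages,
# resource views) -- the paper's actual feature set is not reproduced here.
X = rng.poisson(lam=[5.0, 3.0, 8.0, 2.0], size=(40, 4)).astype(float)

# Standardize, then run PCA via SVD of the centred matrix.
Z = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ Vt[:2].T  # project students onto the first two components

# Plain k-means (k = 4, matching the four engagement profiles) on the scores.
k = 4
centers = scores[rng.choice(len(scores), size=k, replace=False)]
for _ in range(100):
    labels = np.argmin(((scores[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
    new_centers = np.array([
        scores[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
        for j in range(k)
    ])
    if np.allclose(new_centers, centers):
        break
    centers = new_centers

print(np.bincount(labels, minlength=k))  # students per cluster
```

Each resulting cluster would then be examined qualitatively, as the paper does with interaction analysis of the groups in each cluster.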
Article
Full-text available
Due to the ongoing digitalisation of workplaces and educational settings, human activity underpinning learning and work is increasingly mediated by technology. The advancement of artificial intelligence (AI) and its integration into everyday technologies influences how people are exposed to information, interact, learn and make decisions. We argue that technology, data and evolving AI applications affect how humans enact and experience life and work, changing the context for learning. Hence, as this paper argues, the current notion of lifelong learning needs a revisit to embrace technology at its foundation. To bring freely chosen goals and ownership in one's learning to the fore, in the context of the coming AI age, we argue for the telos of learning to shift from human capital to human development, with the spotlight on capabilities. The paper draws on the capability approach to inform individuals and organisations of how they can support human development throughout lifelong learning. We then move to provide examples of how technologies underpinning workplace practices can be seen with the focus on capabilities as individuals learn to create value.
Conference Paper
Full-text available
The potential for data-driven algorithmic systems to amplify existing social inequities, or create new ones, is receiving increasing popular and academic attention. A surge of recent work, across multiple researcher and practitioner communities, has focused on the development of design strategies and algorithmic methods to monitor and mitigate bias in such systems. Yet relatively little of this work has addressed the unique challenges raised in the design, development, and real-world deployment of learning analytics systems. This interactive workshop aims to provide a venue for researchers and practitioners to share work-in-progress related to fairness and equity in the design of learning analytics and to develop new research and design collaborations around these topics. The workshop will begin with a brief overview of research in fair AI and machine learning, followed by presentations of accepted and invited contributions. In addition, a key outcome of the workshop will be a research agenda for the LAK community, around fairness and equity. Workshop participants will collaboratively construct this agenda through a sequence of small-and whole-group design activities. At the end of the workshop, participating researchers and practitioners will then explore opportunities for collaboration around specific research and design thrusts within this agenda.
Article
Full-text available
In this article, we argue that dominant norms of demographic data are insufficient for accounting for the complexities that characterize many lesbian, gay, bisexual, transgender, and queer (LGBTQ, or broadly “queer”) lives. Here, we draw from the responses of 178 people who identified as non-heterosexual or non-cisgender to demographic questions we developed regarding gender and sexual orientation. Demographic data commonly imagines identity as fixed, singular, and discrete. However, our findings suggest that, for LGBTQ people, gender and sexual identities are often multiple and in flux. An overwhelming majority of our respondents reported shifting in their understandings of their sexual identities over time. In addition, for many of our respondents, gender identity was made up of overlapping factors, including the relationship between gender and transgender identities. These findings challenge researchers to reconsider how identity is understood as and through data. Drawing from critical data studies, feminist and queer digital media studies, and social justice initiatives like Data for Black Lives, we call for a reimagining of identity-based data as “queer data” or “data for queer lives.” We also offer recommendations for researchers to develop more inclusive survey questions. At the same time, we address the ways that queer perspectives destabilize the underlying logics of data by resisting classification and “capture.” For marginalized people, the stakes of this work extend beyond academia, especially in the era of algorithms and big data when the issue of who is or is not “counted” profoundly affects visibility, access, and power in the digital realm.
Article
Full-text available
Education is a particularly important site for the study of data and its consequences. The scale and diversity of education systems and practices means that datafication in education takes many forms, and has potential to exert significant effects on the lives of millions. Datafication of education needs to be understood and analyzed for its distinctive forms, practices and consequences. Enhanced data collection during mass university closures and online teaching as a result of the 2020 COVID-19 crisis makes this all the more urgent. In this brief editorial introduction to the special issue on ‘The datafication of teaching in higher education’, we situate the papers in wider debates and scholarship, and outline some key cross-cutting themes.
Conference Paper
Full-text available
As Learning Analytics (LA) moves from theory into practice, researchers have called for increased participation of stakeholders in design processes. The implementation of such methods, however, still requires attention and specification. In this report, we share strategies and insights from a co-design process that involved university students in the development of a LA tool. We describe the participatory design workshops and highlight three strategies for engaging students in the co-design of learning analytics tools.
Conference Paper
Full-text available
Empirical evidence of how background music benefits or hinders learning is the crux of optimizing music recommendation in educational settings. This study aims to further probe the underlying mechanism through an experiment in a naturalistic setting. Thirty participants were recruited for a field experiment conducted in their own study places for one week. During the experiment, participants were asked to conduct learning sessions with music in the background and to collect music tracks they deemed suitable for learning using a novel mobile-based music discovery application. A set of participant-related, context-related, and music-related data was collected via a pre-experiment questionnaire, surveys popped up in the music app, and the logging system of the music app. Preliminary results reveal correlations between certain music characteristics and learners' task engagement and perceived task performance. This study is expected to provide evidence for understanding the cognitive and emotional dimensions of background music during learning, as well as implications for the role of personalization in the selection of background music for facilitating learning.
Conference Paper
Full-text available
Educational recommender systems (ERSs) aim to adaptively recommend a broad range of personalised resources and activities to students that will most meet their learning needs. Commonly, ERSs operate as a "black box" and give students no insight into the rationale of their choice. Recent contributions from the learning analytics and educational data mining communities have emphasised the importance of transparent, understandable and open learner models (OLMs) that provide insight and enhance learners' understanding of interactions with learning environments. In this paper, we aim to investigate the impact of complementing ERSs with transparent and understandable OLMs that provide justification for their recommendations. We conduct a randomised control trial experiment using an ERS with two interfaces ("Non-Complemented Interface" and "Complemented Interface") to determine the effect of our approach on student engagement and their perception of the effectiveness of the ERS. Overall, our results suggest that complementing an ERS with an OLM can have a positive effect on student engagement and their perception about the effectiveness of the system despite potentially making the system harder to navigate. In some cases, complementing an ERS with an OLM has the negative consequence of decreasing engagement, understandability and sense of fairness.
Conference Paper
Full-text available
This paper describes the design and evaluation of personalized visualizations to support young learners' Self-Regulated Learning (SRL) in Adaptive Learning Technologies (ALTs). Our learning path app combines three Personalized Visualizations (PV) that are designed as an external reference to support learners' internal regulation process. The personalized visualizations rest on three pillars: grounding in SRL theory, the use of trace data, and the provision of clear, actionable recommendations for learners to improve regulation. This quasi-experimental pre-posttest study finds that learners in the personalized visualization condition improved the regulation of their practice behavior, as indicated by higher accuracy and less complex moment-by-moment learning curves compared to learners in the control group. Learners in the PV condition showed better transfer of learning. Finally, students in the personalized visualizations condition were more likely to underestimate rather than overestimate their performance. Overall, these findings indicate that the personalized visualizations improved regulation of practice behavior and transfer of learning, and changed the bias in relative monitoring accuracy.
Conference Paper
Full-text available
By devising a conceptual framework of the learning analytics ecosystem, we identify two types of bias that may stymie efforts to leverage learning analytics to produce fair and equitable virtual learning environments. First, Early-adopter Iteration Bias may lead learning analytics to derive insights about optimal course design based on the preferences and behavior patterns of more prepared, lower-need learners. Second, Research-praxis Bias prevents practitioners from properly utilizing insights derived from learning analytics and research.
Article
Full-text available
This paper examines visions of ‘learning’ across humans and machines in a near-future of intensive data analytics. Building upon the concept of ‘learnification’, practices of ‘learning’ in emerging big data-driven environments are discussed in two significant ways: the training of machines, and the nudging of human decisions through digital choice architectures. Firstly, ‘machine learning’ is discussed as an important example of how data-driven technologies are beginning to influence educational activity, both through sophisticated technical expertise and a grounding in behavioural psychology. Secondly, we explore how educational software design informed by behavioural economics is increasingly intended to frame learner choices to influence and ‘nudge’ decisions towards optimal outcomes. Through the growing influence of ‘data science’ on education, behaviourist psychology is increasingly and powerfully invested in future educational practices. Finally, it is argued that future education may tend toward very specific forms of behavioural governance – a ‘machine behaviourism’ – entailing combinations of radical behaviourist theories and machine learning systems, that appear to work against notions of student autonomy and participation, seeking to intervene in educational conduct and shaping learner behaviour towards predefined aims.
Conference Paper
Full-text available
Predictive modeling has been a core area of learning analytics research over the past decade, with such models currently deployed in a variety of educational contexts from MOOCs to K-12. However, analyses of the differential effectiveness of these models across demographic, identity, or other groups have been scarce. In this paper, we present a method for evaluating unfairness in predictive student models. We define this in terms of differential accuracy between subgroups, and measure it using a new metric we term the Absolute Between-ROC Area (ABROCA). We demonstrate the proposed method through a gender-based “slicing analysis” using five different models replicated from other works and a dataset of 44 unique MOOCs and over four million learners. Our results demonstrate (1) significant differences in model fairness according to (a) the statistical algorithm and (b) the feature set used; (2) that the gender imbalance ratio, curricular area, and specific course used for a model all display significant association with the value of the ABROCA statistic; and (3) that there is no evidence of a strict tradeoff between performance and fairness. This work provides a framework for quantifying and understanding how predictive models might inadvertently privilege, or disparately impact, different student subgroups. Furthermore, our results suggest that learning analytics researchers and practitioners can use slicing analysis to improve model fairness without necessarily sacrificing performance.
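The ABROCA statistic described in this abstract is the area between the ROC curves of two subgroups, integrated over the false-positive-rate axis. A minimal numpy sketch, assuming binary labels and exactly two groups; the function names, step-function evaluation, and grid resolution are my own choices, not taken from the paper:

```python
import numpy as np

def roc_points(y_true, y_score):
    """ROC curve as (fpr, tpr) step points, sweeping the threshold downward."""
    order = np.argsort(-np.asarray(y_score, dtype=float))
    y = np.asarray(y_true)[order]
    tps = np.cumsum(y == 1)  # true positives as threshold drops
    fps = np.cumsum(y == 0)  # false positives as threshold drops
    tpr = np.concatenate(([0.0], tps / max(tps[-1], 1)))
    fpr = np.concatenate(([0.0], fps / max(fps[-1], 1)))
    return fpr, tpr

def step_eval(grid, fpr, tpr):
    """Evaluate the ROC step function at each point of a common FPR grid."""
    idx = np.searchsorted(fpr, grid, side="right") - 1
    return tpr[np.clip(idx, 0, len(tpr) - 1)]

def abroca(y_true, y_score, group):
    """Absolute Between-ROC Area: integral of |ROC_a - ROC_b| over FPR in [0, 1]."""
    y_true, y_score, group = map(np.asarray, (y_true, y_score, group))
    grid = np.linspace(0.0, 1.0, 1001)
    curves = []
    for g in np.unique(group)[:2]:  # the two subgroups being compared
        fpr, tpr = roc_points(y_true[group == g], y_score[group == g])
        curves.append(step_eval(grid, fpr, tpr))
    d = np.abs(curves[0] - curves[1])
    return float(np.sum((d[1:] + d[:-1]) * np.diff(grid)) / 2.0)  # trapezoid rule
```

A value near 0 means the model ranks students about equally well in both groups; larger values indicate the accuracy disparity between subgroups that the slicing analysis is designed to surface.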
Article
Full-text available
This study investigates how multimodal user-generated data can be used to reinforce learner reflection, improve teaching practices, and close the learning analytics loop. In particular, the aim of the study is to utilize user gaze and action-based data to examine the role of a mirroring tool (i.e., Exercise View in Eclipse) in orchestrating basic behavioural regulation during debugging. The results demonstrated that students who processed the information presented in the Exercise View and acted upon it, improved their performance and achieved a higher level of success than those who failed to do so. The findings shed light on what constitutes relevant data within a particular learning context in programming using gaze patterns. Moreover, these findings could guide the collection of essential learner-centred analytics for designing usable, modular learning environments based on data-driven approaches.
Conference Paper
Full-text available
Learning analytics can bridge the gap between learning sciences and data analytics, leveraging the expertise of both fields in exploring the vast amount of data generated in online learning environments. A typical learning analytics intervention is the learning dashboard, a visualisation tool built with the purpose of empowering teachers and learners to make informed decisions about the learning process. Related work has investigated learning dashboards, yet none have explored the theoretical foundation that should inform the design and evaluation of such interventions. In this systematic literature review, we analyse the extent to which theories and models from learning sciences have been integrated into the development of learning dashboards aimed at learners. Our analysis revealed that very few dashboard evaluations take into account the educational concepts that were used as a theoretical foundation for their design. Furthermore, we report findings suggesting that comparison with peers, a common reference frame for contextualising information on learning analytics dashboards, was not perceived positively by all learners. We summarise the insights gathered through our literature review in a set of recommendations for the design and evaluation of learning analytics dashboards for learners.
Conference Paper
Full-text available
In order to further the field of learning analytics (LA), researchers and experts may need to look beyond themselves and their own perspectives and expertise to innovate LA platforms and interventions. We suggest that by co-creating with the users of LA, such as educators and students, researchers and experts can improve the usability, usefulness, and draw greater understanding from LA interventions. Within this article we discuss the current LA issues and barriers and how co-creation strategies can help address many of these challenges. We further outline the considerations, both pre- and during interventions, which support and foster a co-created strategy for learning analytics interventions.
Article
Full-text available
School accountability systems in the United States have been criticized on a number of fronts, mainly on grounds of completeness and fairness. This study examines an alternative school quality framework—one that seemingly responds to several core critiques of present accountability systems. Examining results from a pilot study in a diverse urban district, we find that this alternative system captures domains of school quality that are not reflected in the current state system, specifically those measuring opportunity to learn and socioemotional factors. Furthermore, we find a less deterministic relationship between school quality and poverty under the alternative system. We explore the policy implications of these findings vis-à-vis the future of accountability.
Article
Full-text available
In the socio-technical imaginary of higher education, algorithmic decision-making offers huge potential, but we also cannot deny the risks and ethical concerns. In fleeing from Frankenstein’s monster, there is a real possibility that we will meet Kafka on our path, and not find our way out of the maze of ethical considerations in the nexus between human and nonhuman agencies. In this conceptual article, I map seven dimensions of student surveillance on an experimental matrix of human-algorithmic interaction to consider some of the ethical implications of algorithmic decision-making in higher education. The experimental matrix of human-algorithmic decision-making uses the four tasks of ‘sensing’, ‘processing’, ‘acting’ and ‘learning’ to open up algorithmic-human agency as comprising a number of possibilities such as (1) where only humans perform the task; (2) where the task is shared between humans and algorithms; (3) where algorithms perform the task but with humans supervising; and (4) where algorithms perform the tasks with no human oversight. I use this matrix to engage with seven dimensions of how higher education institutions collect, analyse and use student data namely (1) automation; (2) visibility; (3) directionality; (4) assemblage; (5) temporality; (6) sorting; and (7) structuring. The article concludes by proposing a number of pointers to be taken into consideration when implementing algorithms in a higher education context from a position of an ethics of care.
Article
Full-text available
This paper draws on regulatory governance scholarship to argue that the analytic phenomenon currently known as ‘Big Data’ can be understood as a mode of ‘design-based’ regulation. Although Big Data decision-making technologies can take the form of automated decision-making systems, this paper focuses on algorithmic decision-guidance techniques. By highlighting correlations between data items that would not otherwise be observable, these techniques are being used to shape the informational choice context in which individual decision-making occurs, with the aim of channelling attention and decision-making in directions preferred by the ‘choice architect’. By relying upon the use of ‘nudge’ – a particular form of choice architecture that alters people’s behaviour in a predictable way without forbidding any options or significantly changing their economic incentives, these techniques constitute a ‘soft’ form of design-based control. But, unlike the static Nudges popularised by Thaler and Sunstein [(2008). Nudge. London: Penguin Books] such as placing the salad in front of the lasagne to encourage healthy eating, Big Data analytic nudges are extremely powerful and potent due to their networked, continuously updated, dynamic and pervasive nature (hence ‘hypernudge’). I adopt a liberal, rights-based critique of these techniques, contrasting liberal theoretical accounts with selective insights from science and technology studies (STS) and surveillance studies on the other. I argue that concerns about the legitimacy of these techniques are not satisfactorily resolved through reliance on individual notice and consent, touching upon the troubling implications for democracy and human flourishing if Big Data analytic techniques driven by commercial self-interest continue their onward march unchecked by effective and legitimate constraints.
Article
Full-text available
The analysis of data collected from the interaction of users with educational and information technology has attracted much attention as a promising approach for advancing our understanding of the learning process. This promise motivated the emergence of the new research field, learning analytics, and its closely related discipline, educational data mining. This paper first introduces the field of learning analytics and outlines the lessons learned from well-known case studies in the research literature. The paper then identifies the critical topics that require immediate research attention for learning analytics to make a sustainable impact on the research and practice of learning and teaching. The paper concludes by discussing a growing set of issues that if unaddressed, could impede the future maturation of the field. The paper stresses that learning analytics are about learning. As such, the computational aspects of learning analytics must be well integrated within the existing educational research.
Article
Full-text available
In the era of the Internet, mobile technologies, and open education, the need for interventions to improve the efficiency and quality of higher education has become pressing. Big data and learning analytics can help drive these interventions and reshape the future of higher education. Basing decisions on data and empirical evidence seems remarkably obvious. Yet higher education, a field that collects an enormous amount of data about its "customers," has traditionally been inefficient in its use of data, often operating with considerable delay in analyzing them even when they are immediately available. This article highlights the value of data-analysis techniques for higher education and presents a development model for learning-related data. Learning is, of course, a complex phenomenon, and describing it through analytic tools is not straightforward; the article therefore also presents the main ethical and pedagogical issues connected with the use of data-analysis techniques in education. Nevertheless, learning analytics can penetrate the fog of uncertainty surrounding the future of higher education and make clearer how to allocate resources, how to develop competitive advantages and, above all, how to improve the quality and value of the learning experience.
Article
Full-text available
The field of learning analytics has the potential to enable higher education institutions to increase their understanding of their students’ learning needs and to use that understanding to positively influence student learning and progression. Analysis of data relating to students and their engagement with their learning is the foundation of this process. There is an inherent assumption linked to learning analytics that knowledge of a learner’s behavior is advantageous for the individual, instructor, and educational provider. It seems intuitively obvious that a greater understanding of a student cohort and the learning designs and interventions they best respond to would benefit students and, in turn, the institution’s retention and success rate. Yet collection of data and their use face a number of ethical challenges, including location and interpretation of data; informed consent, privacy, and deidentification of data; and classification and management of data. Approaches taken to understand the opportunities and ethical challenges of learning analytics necessarily depend on many ideological assumptions and epistemologies. This article proposes a sociocritical perspective on the use of learning analytics. Such an approach highlights the role of power, the impact of surveillance, the need for transparency, and an acknowledgment that student identity is a transient, temporal, and context-bound construct. Each of these affects the scope and definition of learning analytics’ ethical use. We propose six principles as a framework for considerations to guide higher education institutions to address ethical issues in learning analytics and challenges in context-dependent and appropriate ways.
Article
Full-text available
Many researchers have suggested that tangible user interfaces (TUIs) have potential for supporting learning. However, the theories used to explain possible effects are often invoked at a very broad level without explication of specific mechanisms by which the affordances of TUIs may be important for learning processes. Equally problematic, we lack theoretically grounded guidance for TUI designers as to what design choices might have significant impacts on learning and how to make informed choices in this regard. In this paper, we build on previous efforts to address the need for a structure to think about TUI design for learning by constructing the Tangible Learning Design Framework. We first compile a taxonomy of five elements for thinking about the relationships between TUI features, interactions and learning. We then briefly review cognitive, constructivist, embodied, distributed and social perspectives on cognition and learning and match specific theories to the key elements in the taxonomy to determine guidelines for design. In each case, we provide examples from previous work to explicate our guidelines; where empirical work is lacking, we suggest avenues for further research. Together, the taxonomy and guidelines constitute the Tangible Learning Design Framework. The framework advances thinking in the area by highlighting decisions in TUI design important for learning, providing initial guidance for thinking about these decisions through the lenses of theories of cognition and learning, and generating a blueprint for research on testable mechanisms of action by which TUI design can affect learning.
Article
In response to public scrutiny of data-driven algorithms, the field of data science has adopted ethics training and principles. Although ethics can help data scientists reflect on certain normative aspects of their work, such efforts are ill-equipped to generate a data science that avoids social harms and promotes social justice. In this article, I argue that data science must embrace a political orientation. Data scientists must recognize themselves as political actors engaged in normative constructions of society and evaluate their work according to its downstream impacts on people's lives. I first articulate why data scientists must recognize themselves as political actors. In this section, I respond to three arguments that data scientists commonly invoke when challenged to take political positions regarding their work. In confronting these arguments, I describe why attempting to remain apolitical is itself a political stance—a fundamentally conservative one—and why data science's attempts to promote “social good” dangerously rely on unarticulated and incrementalist political assumptions. I then propose a framework for how data science can evolve toward a deliberative and rigorous politics of social justice. I conceptualize the process of developing a politically engaged data science as a sequence of four stages. Pursuing these new approaches will empower data scientists with new methods for thoughtfully and rigorously contributing to social justice.
Article
Our 2019 editorial opened a dialogue about what is needed to foster an impactful field of learning analytics (Knight, Wise, & Ochoa, 2019). As we head toward the close of a tumultuous year that has raised profound questions about the structure and processes of formal education and its role in society, this conversation is more relevant than ever. That editorial, and a recent online community event, focused on one component of the impact: standards for scientific rigour and the criteria by which knowledge claims in an interdisciplinary, multi-methodology field should be judged. These initial conversations revealed important commonalities across statistical, computational, and qualitative approaches in terms of a need for greater explanation and justification of choices in using appropriate data, models, or other methodological approaches, as well as the many micro-decisions made in applying specific methodologies to specific studies. The conversations also emphasize the need to perform different checks (for overfitting, for bias, for replicability, for the contextual bounds of applicability, for disconfirming cases) and the importance of learning analytics research being relevant by situating itself within a set of educational values, making tighter connections to theory, and considering its practical mobilization to affect learning. These ideas will serve as the starting point for a series of detailed follow-up conversations across the community, with the goal of generating updated standards and guidance for JLA articles.
Article
This article summarizes some emerging concerns as learning analytics become implemented throughout education. The article takes a sociotechnical perspective — positioning learning analytics as shaped by a range of social, cultural, political, and economic factors. In this manner, various concerns are outlined regarding the propensity of learning analytics to entrench and deepen the status quo, disempower and disenfranchise vulnerable groups, and further subjugate public education to the profit-led machinations of the burgeoning “data economy.” In light of these charges, the article briefly considers some possible areas of change. These include the design of analytics applications that are more open and accessible, that offer genuine control and oversight to users, and that better reflect students’ lived reality. The article also considers ways of rethinking the political economy of the learning analytics industry. Above all, learning analytics researchers need to begin talking more openly about the values and politics of data-driven analytics technologies as they are implemented along mass lines throughout school and university contexts.
Article
This brief paper develops a series of provocations against the current forms of Learning Analytics that are beginning to be implemented in higher education contexts. The paper highlights a number of ways in which Learning Analytics can be experienced as discriminatory, oppressive and ultimately disadvantaging across whole student populations, and considers the limitations of current efforts within educational data science to increase awareness of ‘ethics’ and ‘social good’. This culminates in a stark choice: is it possible to substantially improve the field of Learning Analytics as it currently stands, or should we abandon it in favour of new forms of applying data science that are aligned with the experiences of non-conforming ‘learners’ and un-categorizable forms of ‘learning’?
Article
‘Data as technology’ has always been, and continues to be an essential part of the structuring of South African society and education, during and post-colonialism and post-apartheid. In the reconfiguration of South African education post-apartheid, student data constitutes a data frontier as un-mapped, under-utilised and ready for the picking. This article maps the data frontier in the nexus of higher education in the global/colonial present and the data imaginary that provides a particular vision in service of a neoliberal discursive position and ideological orientation. As such, the data frontier acts as ‘generative matrix’ for educational policy attempting to address the legacies of colonialism and apartheid. The value contribution of this article lies in its positioning of the data imaginary in the context of a neoliberal approach to education in the context of the Global South.
Article
The state of the world keeps me up at night, questioning my role as a social justice educator. I think with, through, and around what social change means. Reflecting on my practice, I have followed Western/colonial research and educational methodologies, knowing that they need to be challenged but often being unable to do so. I make present this living in contradiction in this personal narrative, a research methodology practiced for generations by people in the global south and by marginalized people in the United States. It is a reckoning of my work as a researcher, teacher, activist, and director of programs in the academic industrial complex. My desire for a decolonial option in art education requires me to interrogate its classificatory lenses. I explore social optics, drawing on examples through three lenses: art as inherently progressive; the interrelationship between visibility and invisibility; and artistic activism for organizing and building solidarity.
Article
This paper contributes a theoretical framework informed by historical, philosophical and ethnographic studies of science practice to argue that data should be considered to be actively produced, rather than passively collected. We further argue that traditional school science laboratory investigations misconstrue the nature of data and overly constrain student agency in their production. We use our "Data Production" framework to analyze activity of and interviews with high school students who created data using sensors and software in a ninth-grade integrated science class. To understand the opportunities for students to act with and perceive their own agency in data production, we analyze in detail the case of one student as she came to use unfamiliar technologies to produce data for her own personally relevant purposes. We find that her purposes for producing data emerged as she worked, and that resistances to her purposes were opportunities to act with and perceive her own agency, and to see data in new ways. We discuss implications for designing science learning experiences in which students act as agents in producing and using data.
Article
The design of effective learning analytics extends beyond sound technical and pedagogical principles. If these analytics are to be adopted and used successfully to support learning and teaching, their design process needs to take into account a range of human factors, including why and how they will be used. In this editorial, we introduce principles of human-centred design developed in other, related fields that can be adopted and adapted to support the development of Human-Centred Learning Analytics (HCLA). We draw on the papers in this special section, together with the wider literature, to define human-centred design in the field of learning analytics and to identify the benefits and challenges that this approach offers. We conclude by suggesting that HCLA will enable the community to achieve more impact, more quickly, with tools that are fit for purpose and a pleasure to use.
Conference Paper
This exploratory study challenges the current practices in cognitive load measurement by using multichannel data to investigate cognitive load affordances during online complex problem solving. Moreover, it is an attempt to investigate how cognitive load is related to strategy use. Accordingly, in the current study a well-structured and an ill-structured problem were developed in a virtual learning environment. Online support was provided. Participants were 15 students from the teacher training program. This study incorporated subjective measurements of students' cognitive load (i.e., intrinsic, extraneous, and germane load, and mental effort) combined with physiological data comprising galvanic skin response (GSR) and skin temperature (ST). A first aim was to investigate whether there was a significant difference in the subjective measurements, physiological data, and consultation of support between the well- and ill-structured problems. Secondly, this study investigated how individual differences in subjective measurements are related to individual differences in physiological data and consultation of support. Results reveal significant differences in intrinsic load and mental effort between the well- and ill-structured problems. Moreover, when investigating individual differences, findings reveal that GSR might be related to mental effort. Additionally, results indicate that cognitive load influences strategy use. Future research with larger sample sizes should verify these findings in order to gain more insight into how we can measure cognitive load and how it is related to self-directed learning. These insights should allow us to provide adaptive support in virtual learning environments.
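The reported link between GSR and mental effort rests on relating individual differences across the two measures; at its simplest, such a check is a correlation between per-student summaries. A minimal sketch follows, using a plain Pearson coefficient; the study's actual statistical pipeline is not specified in the abstract, so this is an assumed, simplified reading.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two equal-length measurement vectors,
    e.g. per-student mean GSR vs. per-student reported mental effort."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))
```

With only 15 participants, as in this study, any such coefficient should be read descriptively rather than inferentially.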
Conference Paper
We study the usage of a self-guided online tutoring platform called Algebra Nation, which is widely used by middle school and high school students who take the End-of-Course Algebra I exam at the end of the school year. This article aims to study how the platform contributes to increasing students' exam scores by examining users' logs over a three-year period. The platform under consideration grew from more than 36,000 students in the first year to nearly 67,000 by the third year, thus enabling us to examine how usage patterns evolved and influenced students' performance at scale. We first identify which Algebra Nation usage factors, in conjunction with overall math preparation and socioeconomic factors, contribute to the students' exam performance. Subsequently, we investigate the effect of increased teacher familiarity with Algebra Nation on students' scores across different grades through mediation analysis. The results show that the indirect effect of teacher familiarity with the platform, through increasing students' usage dosage, is more significant in higher grades.
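The mediation analysis mentioned here (teacher familiarity → student usage → exam score) can be sketched in the classic two-regression form: path a regresses the mediator on the predictor, path b regresses the outcome on the mediator while controlling for the predictor, and the indirect effect is a·b. This is an illustrative reconstruction, not the authors' actual model; the variable names and the ordinary-least-squares setup are assumptions.

```python
import numpy as np

def indirect_effect(familiarity, usage, score):
    """Indirect effect of familiarity on score through usage (a * b)."""
    familiarity = np.asarray(familiarity, dtype=float)
    usage = np.asarray(usage, dtype=float)
    score = np.asarray(score, dtype=float)
    ones = np.ones_like(familiarity)
    # Path a: mediator (usage) regressed on the predictor (familiarity).
    Xa = np.column_stack([ones, familiarity])
    a = np.linalg.lstsq(Xa, usage, rcond=None)[0][1]
    # Path b: outcome regressed on the mediator, controlling for the predictor.
    Xb = np.column_stack([ones, familiarity, usage])
    b = np.linalg.lstsq(Xb, score, rcond=None)[0][2]
    return float(a * b)
```

In practice the significance of a·b would be assessed with bootstrap confidence intervals rather than read off the point estimate directly.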
Conference Paper
The decision on what item to learn next in a course can be supported by a recommender system (RS), which aims at making the learning process more efficient and effective. However, learners and learning activities frequently change over time. The question is: how are timely appropriate recommendations of learning resources actually evaluated and how can they be compared? Researchers have found that, in addition to a standardized dataset definition, there is also a lack of standardized definitions of evaluation procedures for RS in the area of Technology Enhanced Learning. This paper argues that, in a closed-course setting, a time-dependent split into the training set and test set is more appropriate than the usual cross-validation to evaluate the Top-N recommended learning resources at various points in time. Moreover, a new measure is introduced to determine the timeliness deviation between the point in time of an item recommendation and the point in time of the actual access by the user. Different recommender algorithms, including two novel ones, are evaluated with the time-dependent evaluation framework and the results, as well as the appropriateness of the framework, are discussed.
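The evaluation idea in this abstract, splitting by time rather than by random cross-validation and measuring how far a recommendation lands from the actual access, can be sketched as follows. The paper's exact definition of the timeliness measure is not given here, so the absolute-gap reading below, like the tuple layout of the event log, is an assumption.

```python
from datetime import datetime

def time_split(events, cutoff):
    """Split (user, item, timestamp) events at a fixed point in time,
    so the test set lies strictly after everything the model trained on."""
    train = [e for e in events if e[2] <= cutoff]
    test = [e for e in events if e[2] > cutoff]
    return train, test

def timeliness_deviation(recommended_at, accessed_at):
    """Absolute gap (in seconds) between when an item was recommended
    and when the learner actually accessed it."""
    return abs((accessed_at - recommended_at).total_seconds())
```

A time-based split avoids leaking future interactions into training, which random cross-validation does in a closed course with a fixed schedule.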
Book
As seen in Wired and Time. A revealing look at how negative biases against women of color are embedded in search engine results and algorithms. Run a Google search for "black girls"—what will you find? "Big Booty" and other sexually explicit terms are likely to come up as top search terms. But, if you type in "white girls," the results are radically different. The suggested porn sites and un-moderated discussions about "why black women are so sassy" or "why black women are so angry" present a disturbing portrait of black womanhood in modern society. In Algorithms of Oppression, Safiya Umoja Noble challenges the idea that search engines like Google offer an equal playing field for all forms of ideas, identities, and activities. Data discrimination is a real social problem; Noble argues that the combination of private interests in promoting certain sites, along with the monopoly status of a relatively small number of Internet search engines, leads to a biased set of search algorithms that privilege whiteness and discriminate against people of color, specifically women of color. Through an analysis of textual and media searches as well as extensive research on paid online advertising, Noble exposes a culture of racism and sexism in the way discoverability is created online. As search engines and their related companies grow in importance—operating as a source for email, a major vehicle for primary and secondary school learning, and beyond—understanding and reversing these disquieting trends and discriminatory practices is of utmost importance. An original, surprising and, at times, disturbing account of bias on the internet, Algorithms of Oppression contributes to our understanding of how racism is created, maintained, and disseminated in the 21st century.
Conference Paper
Developing communication skills in higher education students could be a challenge to professors due to the time needed to provide formative feedback. This work presents RAP, a scalable system to provide automatic feedback to entry-level students to develop basic oral presentation skills. The system improves the state-of-the-art by analyzing posture, gaze, volume, filled pauses and the slides of the presenters through data captured by very low-cost sensors. The system also provides an off-line feedback report with multimodal recordings of their performance. An initial evaluation of the system indicates that the system's feedback highly agrees with human feedback and that students considered that feedback useful to develop their oral presentation skills.
Article
Big Data refers to large and disparate volumes of data generated by people, applications and machines. It is gaining increasing attention from a variety of domains, including education. What are the challenges of engaging with Big Data research in education? This paper identifies a wide range of critical issues that researchers need to consider when working with Big Data in education. The issues identified include diversity in the conception and meaning of Big Data in education; ontological and epistemological disparity; technical challenges; ethics and privacy; digital divide and digital dividend; and a lack of expertise and academic development opportunities to prepare educational researchers to leverage opportunities afforded by Big Data. The goal of this paper is to raise awareness of these issues and initiate a dialogue. The paper was inspired partly by insights drawn from the literature but mostly informed by experience researching into Big Data in education.
Article
Knowledges from academic and professional research-based institutions have long been valued over the organic intellectualism of those who are most affected by educational and social inequities. In contrast, participatory action research (PAR) projects are collective investigations that rely on indigenous knowledge, combined with the desire to take individual and/or collective action. PAR with youth (YPAR) engages in rigorous research inquiries and represents a radical effort in education research to value the inquiry-based knowledge production of the youth who directly experience the educational contexts that scholars endeavor to understand. In this chapter, we outline the foundations of YPAR and examine the distinct epistemological, methodological, and pedagogical contributions of an interdisciplinary corpus of YPAR studies and scholarship. We outline the origins and disciplines of YPAR and make a case for its role in education research, discuss its contributions to the field and the tensions and possibilities of YPAR across disciplines, and close by proposing a YPAR critical-epistemological framework that centers youth and their communities, alongside practitioners, scholars, and researchers, as knowledge producers and change agents for social justice.
Article
Pro-market and business approaches to management in the public sector (new public management – NPM) have created an audit culture in schools driven by top-down, high stakes accountability, and the fetishization of data. Within this context, authentic, qualitative, and democratic forms of inquiry, both in universities and schools, become easily co-opted. I argue in this article that the use of a community-based, participatory action research (PAR) stance has the potential to disrupt NPM and open up authentic and democratic spaces in which to engage in inquiry. The goal of democratization through a PAR stance is not an attempt to return to a pre-data driven past nor to make current neoliberal reforms more palatable, but rather to create more horizontal relationships among professionals, colleges of education, public schools, and low-income communities.
Conference Paper
As higher education increasingly moves to online and digital learning spaces, we have access not only to greater volumes of student data, but also to increasingly fine-grained and nuanced data. A significant body of research and existing practice are used to convince key stakeholders within higher education of the potential of the collection, analysis and use of student data to positively impact on student experiences in these environments. Much of the recent focus in learning analytics is around predictive modeling and uses of artificial intelligence to both identify learners at risk, and to personalize interventions to increase the chance of success. In this paper we explore the moral and legal basis for the obligation to act on our analyses of student data. The obligation to act entails not only the protection of student privacy and the ethical collection, analysis and use of student data, but also the effective allocation of resources to ensure appropriate and effective interventions to increase effective teaching and learning. The obligation to act is, however, tempered by a number of factors, including inter- and intra-departmental operational fragmentation and the constraints imposed by changing funding regimes. Increasingly, higher education institutions allocate resources in areas that promise the greatest return. Choosing (not) to respond to the needs of specific student populations then raises questions regarding the scope and nature of the moral and legal obligation to act. There is also evidence that students who are at risk of failing often do not respond to institutional interventions to assist them. In this paper we build and expand on recent research by, for example, the LACE and EP4LA workshops to conceptually map the obligation to act, which flows from both higher education's mandate to ensure effective and appropriate teaching and learning and its fiduciary duty to provide an ethical and enabling environment for students to achieve success.
We examine how the collection and analysis of student data links to both the availability of resources and the will to act and also to the obligation to act. Further, we examine how that obligation unfolds in two open distance education providers from the perspective of a key set of stakeholders – those in immediate contact with students and their learning journeys – the tutors or adjunct faculty.
Article
There are good reasons for higher education institutions to use learning analytics to risk-screen students. Institutions can use learning analytics to better predict which students are at greater risk of dropping out or failing, and use the resulting statistics to treat 'risky' students differently. This paper analyses this practice using normative theories of discrimination. The analysis suggests the principal ethical concern with the differing treatment is the failure to recognize students as individuals, which may impact students as agents. This concern is cross-examined by drawing on a philosophical argument that suggests there is little or no distinctive difference between assessing individuals on group risk statistics and using more 'individualized' evidence. This paper applies that argument to the use of learning analytics to risk-screen students in higher education. The paper offers reasons to conclude that judgment based on group risk statistics does involve a distinctive failure in terms of assessing persons as individuals. However, instructional design offers ways to mitigate this ethical concern with respect to learning analytics. These include designing features into courses that promote greater use of effort-based factors and dynamic rather than static risk factors, and greater use of sets of statistics specific to individuals.
Article
In this position paper we contrast a Dystopian view of the future of adaptive collaborative learning support (ACLS) with a Utopian scenario that, due to better-designed technology grounded in research, avoids the pitfalls of the Dystopian version and paints a positive picture of the practice of computer-supported collaborative learning 25 years from now. We discuss research that we see as important in working towards a Utopian future over the next 25 years. In particular, we see a need to work towards a comprehensive instructional framework built on educational theory. This framework will allow us to provide nuanced and flexible (i.e. intelligent) ACLS to collaborative learners: the type of support we sketch in our Utopian scenario.
Article
The fairy tale is arguably one of the most important cultural and social influences on children's lives. But until the first publication of Fairy Tales and the Art of Subversion, little attention had been paid to the ways in which the writers and collectors of tales used traditional forms and genres to shape children's lives: their behavior, values, and relationship to society.
Article
Learning analytics is a significant area of technology-enhanced learning that has emerged during the last decade. This review of the field begins with an examination of the technological, educational and political factors that have driven the development of analytics in educational settings. It goes on to chart the emergence of learning analytics, including their origins in the 20th century, the development of data-driven analytics, the rise of learning-focused perspectives and the influence of national economic concerns. It next focuses on the relationships between learning analytics, educational data mining and academic analytics. Finally, it examines developing areas of learning analytics research, and identifies a series of future challenges.
Conference Paper
Contrary to the fairly established notion in the learning sciences that un-scaffolded processes rarely lead to meaningful learning, this study reports a hidden efficacy of such processes and a method for extracting it. Compared to scaffolded, well-structured problem-solving groups, un-scaffolded, ill-structured problem-solving groups struggled to define and solve the problems. Their discussions were chaotic and divergent, resulting in poor group performance. However, despite failing in their problem-solving efforts, these participants outperformed their counterparts in the well-structured condition on transfer measures, suggesting a latent productivity in the failure. The study's contrasting-case design provided participants in the un-scaffolded condition with an opportunity to contrast the ill-structured problems that they had solved in groups with the well-structured problems they solved individually afterwards. This contrast facilitated spontaneous transfer, helping them perform significantly better on the subsequent individual ill-structured problem-solving tasks. Implications of productive failure for the development of adaptive expertise are discussed.