GSIC-EMIC research group
Institution: University of Valladolid
Featured research (16)
Developing ethical reasoning as a competence is gaining relevance in higher education settings, driven by current societal demands. One approach to developing this competence is to present students with realistic ethical dilemmas. Nevertheless, educators need support to integrate ethics education effectively in higher education, given curricular constraints and a lack of supportive structures. EthicApp is a social platform that aims to support teachers in designing and enacting learning scenarios that foster ethical reasoning. However, the introduction of a new tool such as EthicApp may have undesired consequences for teachers’ agency. The notion of teacher agency is complex and has not been sufficiently studied in Technology Enhanced Learning (TEL) contexts. For this reason, we propose to study the implications of the use of EthicApp for teacher agency in the light of the metaphor of orchestration, which offers a holistic approach to studying how teachers integrate technologies into their practice. This paper presents preliminary findings from a case study in which three higher education teachers orchestrated learning designs supported by EthicApp. Early findings indicate that, although managing the learning scenarios in real time was perceived as demanding, EthicApp empowered the participant teachers to design innovative learning scenarios, raise their awareness, and inform the adaptation of those scenarios.
Generative artificial intelligence (GenAI) tools, such as large language models (LLMs), generate natural language and other types of content to perform a wide range of tasks. This represents a significant technological advancement that poses both opportunities and challenges for educational research and practice. This commentary brings together contributions from nine experts working at the intersection of learning and technology and presents critical reflections on the opportunities, challenges, and implications of GenAI technologies in the context of education. We acknowledge that GenAI’s capabilities can enhance some teaching and learning practices, such as learning design, regulation of learning, and automated content, feedback, and assessment. Nevertheless, we also highlight its limitations, potential disruptions, ethical consequences, and potential misuses. The identified avenues for further research include new insights into the roles human experts can play, strong and continuous evidence, human-centric design of technology, the necessary policies, and support and competence mechanisms. Overall, we concur with the general skeptical optimism about the use of GenAI tools such as LLMs in education. Moreover, we highlight the danger of hastily adopting GenAI tools in education without deep consideration of the efficacy, ecosystem-level implications, ethics, and pedagogical soundness of such practices.
The recent Covid-19 pandemic made universities rethink their traditional educational models, shifting, in some cases, to purely online or hybrid models. Hybrid settings usually involve onsite (i.e., in the classroom) and online (e.g., in a different classroom, at home) students simultaneously under the instruction of the same teacher. However, while these models provide more flexibility to students, hybridity poses additional challenges for the specific case of collaborative learning, likely increasing the teachers' orchestration load and potentially hampering fruitful interactions among learners. To gather empirical evidence on the impact of hybridity on collaborative learning, this paper reports a study conducted in a hybrid classroom where a Jigsaw collaborative pattern was implemented with the Engageli software. The study involved two teachers and 67 students enrolled in a computer science undergraduate course. Teachers' post-interviews, questionnaires, and an epistemic network analysis (ENA) were used to produce the study findings. Results show that teachers reported a medium-to-high orchestration load for setting up and implementing the collaborative activities in the hybrid classroom. Among the factors that contributed most to this load, teachers highlighted the creation and live management of groups and collaborative documents. Additionally, the ENA showed that teachers put much effort into monitoring group interactions and solving technical issues. Finally, we observed relevant differences in students' perceptions (e.g., satisfaction with the attention received from the teachers) based on cohort size and on students’ attendance modality (onsite vs. online).
Feedback plays an integral role in the learning process, serving as a vital component for improvement and motivation. Typically, teachers provide feedback by considering various factors, such as the specific course context, the timing, and learners’ needs. However, when designing feedback in settings like Massive Open Online Courses (MOOCs), challenges arise due to the lack of direct learner–teacher interaction and the large, diverse learner population. In such cases, learning analytics (LA) emerges as a valuable solution for scaling up feedback by offering insights into learner progress and facilitating automatic or semi-automatic tailored interventions. Nevertheless, the existing literature highlights a lack of pedagogical and contextual grounding in teacher-led LA-based proposals, as well as insufficient guidance for teachers on effectively using LA indicators to create suitable interventions. This chapter discusses the importance of feedback in MOOCs and highlights key aspects of the design of effective feedback. Furthermore, it introduces FeeD4Mi, a conceptual framework developed following a human-centred approach. FeeD4Mi enables scalable, contextualised, and personalised interventions rooted in pedagogical theories to enhance feedback effectiveness. Additionally, this chapter presents an illustrative scenario of applying FeeD4Mi in a MOOC case. We envision that this research fosters participatory approaches for designing, delivering, and evaluating LA-informed feedback interventions in authentic educational settings.
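As a minimal illustration of the general idea of LA-indicator-driven interventions (not part of FeeD4Mi itself), the sketch below shows how a teacher-defined rule over two hypothetical indicators could trigger a templated feedback message; the indicator names, thresholds, and message texts are assumptions introduced here for illustration only.

```python
# Illustrative sketch (not FeeD4Mi): a teacher-defined rule over
# learning-analytics indicators triggers a templated feedback message.
# Indicator names, thresholds, and message texts are assumed for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LearnerIndicators:
    learner_id: str
    days_inactive: int   # days since the learner's last activity in the MOOC
    quiz_avg: float      # average quiz score so far, on a 0-100 scale

def feedback_for(ind: LearnerIndicators) -> Optional[str]:
    """Return a tailored message when an indicator crosses a teacher-set threshold."""
    if ind.days_inactive >= 7:
        return (f"Hi {ind.learner_id}, we noticed you have been inactive for "
                f"{ind.days_inactive} days. This week's activities are still open!")
    if ind.quiz_avg < 50:
        return (f"Hi {ind.learner_id}, your quiz average is {ind.quiz_avg:.0f}%. "
                "Revisiting the review material may help before your next attempt.")
    return None  # no intervention needed for this learner

if __name__ == "__main__":
    cohort = [LearnerIndicators("L001", 9, 72.0),
              LearnerIndicators("L002", 1, 43.5)]
    for learner in cohort:
        message = feedback_for(learner)
        if message:
            print(message)
```

In a semi-automatic setup, such messages would be shown to the teacher for review and adaptation before being sent, rather than delivered automatically.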
Design is a highly creative and challenging task, and research has already explored possible ways of using conversational agents (CAs) to support humans participating in co-design sessions. However, research reports that a) humans in these sessions expect more essential support from CAs, and b) it is important to develop CAs that continually learn from communication, as humans do, and not simply from labeled datasets. Addressing the above needs, this paper explores the specific question of how to extract useful knowledge from human dialogues observed during co-design sessions and make this knowledge available through a CA supporting humans in similar design activities. In our approach, we explore the potential of the GPT-3 Large Language Model (LLM) to provide useful output extracted from unstructured data such as free dialogues. We provide evidence that, by implementing an appropriate “extraction task” on the LLM, it is possible to efficiently (and without a human in the loop) extract knowledge that can then be embedded in the cognitive base of a CA. We identify at least four major steps/assumptions in this process that need to be further researched, namely: A1) Knowledge modeling, A2) Extraction task, A3) LLM-based facilitation, and A4) Humans’ benefit. We provide demonstrations of the extraction and facilitation steps using the GPT-3 model, and we also identify and comment on various open research questions worth exploring.
Keywords: Conversational agent, Large language model (LLM), Design thinking
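To make the notion of an “extraction task” concrete, the sketch below shows one way such a task could be run over a dialogue transcript. It assumes the pre-1.0 `openai` Python client and a GPT-3 completion model; the prompt wording, output schema, and sample dialogue are illustrative and not the authors’ actual setup.

```python
# Minimal sketch of an LLM "extraction task": turning a free-form co-design
# dialogue into structured design knowledge that a CA could later reuse.
# Assumes the pre-1.0 `openai` Python client and a GPT-3 completion model;
# the prompt wording and output schema are illustrative, not the authors' own.
import json
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

EXTRACTION_PROMPT = """Read the following co-design dialogue and return, as JSON,
a list of objects with the fields "idea", "need_addressed", and "concerns".

Dialogue:
{dialogue}

JSON:"""

def extract_design_knowledge(dialogue: str) -> list:
    """Run the extraction task on one dialogue transcript."""
    response = openai.Completion.create(
        model="text-davinci-003",  # GPT-3 family completion model
        prompt=EXTRACTION_PROMPT.format(dialogue=dialogue),
        max_tokens=512,
        temperature=0,             # deterministic, extraction-style output
    )
    # In practice the output should be validated before being stored in the
    # CA's cognitive base; json.loads will raise on malformed model output.
    return json.loads(response["choices"][0]["text"])

if __name__ == "__main__":
    sample = ("Ana: What if the app reminded students of upcoming deadlines?\n"
              "Luis: Good idea, but push notifications might feel intrusive.")
    print(extract_design_knowledge(sample))
```

The extracted records could then feed the CA’s cognitive base, so that in later sessions the agent can surface previously discussed ideas and concerns when similar design needs come up.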