Figure 2 - uploaded by Lynda Thomas

Source publication
Article
Full-text available
This paper reviews the literature related to the learning and teaching of debugging computer programs. Debugging is an important skill that continues to be both difficult for novice programmers to learn and challenging for computer science educators to teach. These challenges persist despite a wealth of important research on the subject dating back...

Contexts in source publication

Context 1
... and Risinger (1987) tested whether or not debugging can be explicitly taught. Their study employed the strategy shown in Figure 2 for finding and fixing bugs once a program has been tested and found to be incorrect. ...
Context 2
... and Risinger evaluated the impact of their technique by providing 30 minutes of explicit debugging instruction to 18 of 25 Grade 6 students, chosen at random, following 6-8 hours of programming instruction for the entire class. The debugging strategy they taught led students through a hierarchy of questions designed to facilitate bug location and correction; it is presented in Figure 2. The 30 minutes of debugging instruction resulted in a significant improvement in debugging skills compared with the control group, including skills on a transfer debugging task in which students debugged a real-world set of instructions. ...
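The wording of the questions in Figure 2 is not reproduced in the surrounding text. Purely as an illustration of what a hierarchy of debugging questions could look like when made executable, the sketch below encodes a generic, hypothetical set of prompts as a decision procedure; it is not Carver and Risinger's actual instrument.

    // Illustrative only: a generic hierarchy of debugging questions, encoded as a
    // decision procedure. The question wording is hypothetical and does NOT
    // reproduce the actual Figure 2 strategy.
    public class DebuggingChecklist {

        // Walk a novice through a top-down series of questions about a failed run.
        public static String nextStep(boolean programRan,
                                      boolean outputMatchesPlan,
                                      boolean suspectRegionKnown) {
            if (!programRan) {
                return "Read the error message: which line stopped the program?";
            }
            if (!outputMatchesPlan) {
                if (!suspectRegionKnown) {
                    return "Compare the output with the plan step by step to find where they first differ.";
                }
                return "Inspect the suspect region: does each instruction do what the plan says?";
            }
            return "The program matches the plan; re-check the plan itself against the task.";
        }

        public static void main(String[] args) {
            // Example: the program ran, the output is wrong, and there is no suspect yet.
            System.out.println(nextStep(true, false, false));
        }
    }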

Similar publications

Article
Full-text available
Beginning in the early 1980s, the Computer Science Department at Carnegie Mellon University developed and used three generations of novice programming environments. The focus of these systems was to apply, advance and tune structure editor technology in support of the teaching and learning of computer programming. The use of these pedagogical syste...
Conference Paper
Full-text available
University education is facing new strategical changes that will lead to deep structural changes. Course organization is evolving and the organizational decisions have an economical impact. We propose a method to measure the present value of a pedagogical asset under a return rate. We apply the method to three courses in the Computer Science curric...
Conference Paper
Full-text available
Learning computer programming is known to be difficult for many students. In the context of a wider study, which aims to design a pedagogical strategy for introductory programming, we decided to use some less conventional activities. This strategy was applied in the last three academic years with some success. In this paper we will discuss a compon...
Chapter
Full-text available
This chapter aims to provide a general description of the preferred pedagogical approaches for the delivery and practice of computer science education based on a review of the literature. Pedagogical approaches mainly used in the teaching of computer science are unplugged activities, robotics program-ming, block-based or initial programming environ...
Conference Paper
Full-text available
Test-driven learning (TDL) is an approach to teaching computer programming that involves introducing and exploring new concepts through automated unit tests. TDL offers the potential of teaching testing for free, of improving programmer comprehension and ability, and of improving software quality both in terms of design quality and reduced defect d...
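As a minimal sketch of the test-driven learning idea summarized above (assuming JUnit 5 and a hypothetical lesson on Java integer division; the example is not taken from the cited paper), a new concept can be handed to students as a small automated test that states the expected behaviour:

    // Hypothetical TDL-style lesson snippet: the concept (integer division truncates
    // toward zero in Java) is introduced through a unit test rather than a lecture slide.
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class IntegerDivisionLessonTest {

        @Test
        void integerDivisionTruncatesTowardZero() {
            assertEquals(2, 7 / 3);    // not 2.33...: the fractional part is discarded
            assertEquals(-2, -7 / 3);  // truncation is toward zero, not toward negative infinity
        }

        @Test
        void remainderCompletesTheStory() {
            assertEquals(1, 7 % 3);    // dividend == quotient * divisor + remainder
        }
    }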

Citations

... Software testing and debugging are critical skills in software engineering, yet they are often perceived as tedious and secondary to programming by students [11,19]. Traditional teaching methods struggle to engage learners, leading to a gap between theoretical knowledge and practical application [12,16,25]. To address this challenge, gamification, and serious games have been explored as potential solutions to enhance motivation and learning outcomes in software testing education [1,4,21,29]. ...
... Motivation is crucial, as professional testers thrive on curiosity and creativity [7]. Similarly, debugging is essential but often receives little instructional focus [16,17]. Effective debugging requires domain knowledge and experience, yet novices struggle due to misconceptions, leading to errors [2,15,20]. ...
... Two of these assets are licensed from the Unity Asset Store: OneJS by DragonGround LLC 15 and the RPG Map Editor by Creative Spore. 16 Due to licensing restrictions, these assets are not included in the public repository. If a third party wishes to extend the game and build the Unity export themselves, they must obtain the appropriate licenses and place the packages in the Assets directory. ...
Preprint
Full-text available
Software testing and debugging are often seen as tedious, making them challenging to teach effectively. We present Sojourner under Sabotage, a browser-based serious game that enhances learning through interactive, narrative-driven challenges. Players act as spaceship crew members, using unit tests and debugging techniques to fix sabotaged components. Sojourner under Sabotage provides hands-on experience with the real-world testing framework JUnit, improving student engagement, test coverage, and debugging skills.
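To make the exercise format concrete, a hedged sketch follows: a hypothetical "sabotaged" spaceship component and a JUnit 5 test that fails against it, in the spirit of the gameplay described above (class and method names are invented, not taken from the game):

    // Hypothetical exercise in the style described above: a sabotaged component
    // returns wrong results, and the player writes a unit test that exposes it.
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class OxygenMixerTest {

        // Sabotaged implementation (illustrative): the ratio is inverted.
        static double oxygenFraction(double oxygenLiters, double nitrogenLiters) {
            return nitrogenLiters / (oxygenLiters + nitrogenLiters); // bug: should use oxygenLiters
        }

        @Test
        void fractionOfPureOxygenIsOne() {
            // Fails against the sabotaged code (which returns 0.0); passes once the bug is fixed.
            assertEquals(1.0, oxygenFraction(10.0, 0.0), 1e-9);
        }
    }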
... Students often perceive these tasks as tedious and less rewarding than programming, hindering their engagement and motivation [12,24]. However, mastering these skills is critical for developing reliable software, as they form the backbone of modern software engineering practices [14,19]. Serious games, which combine educational objectives with game-based mechanics, offer a promising approach to ...
... However, like testing, debugging is often underemphasized in computer science education, with minimal guidance on how to teach it effectively [19,20,22] despite its inclusion in the ACM/IEEE curriculum [27]. Novice programmers often struggle with adapting to programming languages due to misconceptions, such as assuming the language will interpret their code as intended, a challenge exacerbated by differences between natural and programming languages [2,25]. ...
... Common bugs include boundary errors, misplaced code, logical flaws, and calculation mistakes, often stemming from confusion or gaps in understanding. Addressing these issues in education can help students build better debugging skills [19]. ...
Preprint
Full-text available
Teaching software testing and debugging is a critical yet challenging task in computer science education, often hindered by low student engagement and the perceived monotony of these activities. Sojourner under Sabotage, a browser-based serious game, reimagines this learning experience by blending education with an immersive and interactive storyline. Players take on the role of a spaceship crew member, using unit testing and debugging techniques to identify and repair sabotaged components across seven progressively challenging levels. A study with 79 students demonstrates that the game is a powerful tool for enhancing motivation, engagement, and skill development. These findings underscore the transformative potential of serious games in making essential software engineering practices accessible and enjoyable.
... Further research is required to examine if the current trends towards using GenAI for editing and feedback have pedagogical merit, or if reactionary educational policies emphasizing "doing your own work" have it backwards. Concurrent research is already examining the efficacy of having students critique GenAI output [26,27,28]; our findings here encourage future investigation in this area. ...
Preprint
Generative Artificial Intelligence (GenAI) tools and models have the potential to re-shape educational needs, norms, practices, and policies in all sectors of engineering education. Empirical data, rather than anecdata and assumptions, on how engineering students have adopted GenAI is essential to developing a foundational understanding of students' GenAI-related behaviors and needs during academic training. This data will also help formulate effective responses to GenAI by both academic institutions and industrial employers. We collected two representative survey samples at the Colorado School of Mines, a small engineering-focused R-1 university in the USA, in May 2023 (n_1 = 601) and September 2024 (n_2 = 862) to address research questions related to (RQ1) how GenAI has been adopted by engineering students, including motivational and demographic factors contributing to GenAI use, (RQ2) students' ethical concerns about GenAI, and (RQ3) students' perceived benefits vs. harms for themselves, science, and society. Analysis revealed a statistically significant rise in GenAI adoption rates from 2023 to 2024. Students predominantly leverage GenAI tools to deepen understanding, enhance work quality, and stay informed about emerging technologies. Although most students assess their own usage of GenAI as ethical and beneficial, they nonetheless expressed significant concerns regarding GenAI and its impacts on society. We collected student estimates of "P(doom)" and discovered a bimodal distribution. Thus, we show that the student body at Mines is polarized with respect to future impacts of GenAI on the engineering workforce and society, despite being increasingly willing to explore GenAI over time. We discuss implications of these findings for future research and for integrating GenAI in engineering education.
... Although several studies have presented students' debugging processes, they were in the context of early childhood (Bers et al., 2014), middle school (Deiner & Fraser, 2024; Jemmali et al., 2020), and high school (Kafai et al., 2020). Few studies have systematically examined students' debugging behaviors in block-based programming (Kafai et al., 2020; Kim et al., 2018; McCauley et al., 2008), with even less evidence in elementary education. The present study explored elementary students' debugging behaviors within a puzzle-based programming environment. ...
... Their findings suggested that giving them the worked example supported students' debugging processes. While the previous literature has explored students' debugging in a range of contexts, studies examining elementary students' debugging processes in block-based programming environments are still lacking (Kim et al., 2018; McCauley et al., 2008). This study closely investigated elementary students' debugging behaviors, such as their strategies and challenges, on the block-based programming platform Code.org. ...
... Several studies (Jayathirtha, 2018; Katz & Anderson, 1987; Yen et al., 2012) found that students employed backward reasoning to debug self-written code and regularly used forward reasoning for code written by others or for unfamiliar programs. Although students demonstrate the use of debugging strategies, there are notable differences in how novices and experts use these strategies (Jemmali et al., 2020; Martinez et al., 2020; McCauley et al., 2008). For example, experts use breadth-first approaches to create hypotheses, while novices use depth-first techniques (Luxton-Reilly et al., 2018). ...
Article
Full-text available
Debugging is a growing topic in K-12 computer science (CS) education research. Although some previous studies have examined debugging behaviors, only a few have focused on an in-depth analysis of elementary students’ debugging behaviors in block-based programming environments. This qualitative study explored the debugging behaviors of four students, including their strategies and challenges. The study employed thematic video analysis of students’ computer screens as they engaged in block-based programming activities. The findings reveal five types of debugging strategies and three primary challenges during the debugging process. This study aims to help researchers and educators understand elementary students’ debugging strategies and the challenges they face. Suggestions for teaching debugging strategies to elementary students and the implications for future research are discussed.
... Debugging Education. To get an overview of the teaching of debugging, we refer to McCauley et al. [24], who conducted a systematic literature review. Michaeli and Romeike [25] recently explored the influence of teaching systematic debugging concepts with an intervention study concluding that explicitly teaching debugging skills positively affects debugging self-efficacy. ...
Preprint
Full-text available
Debugging software, i.e., the localization of faults and their repair, is a main activity in software engineering. Therefore, effective and efficient debugging is one of the core skills a software engineer must develop. However, the teaching of debugging techniques is usually very limited or only taught in indirect ways, e.g., during software projects. As a result, most Computer Science (CS) students learn debugging only in an ad-hoc and unstructured way. In this work, we present our approach called Simulated Interactive Debugging that interactively guides students along the debugging process. The guidance aims to empower the students to repair their solutions and have a proper "learning" experience. We envision that such guided debugging techniques can be integrated into programming courses early in the CS education curriculum. To perform an initial evaluation, we developed a prototypical implementation using traditional fault localization techniques and large language models. Students can use features like the automated setting of breakpoints or an interactive chatbot. We designed and executed a controlled experiment that included this IDE-integrated tooling with eight undergraduate CS students. Based on the responses, we conclude that the participants liked the systematic guidance by the assisted debugger. In particular, they rated the automated setting of breakpoints as the most effective, followed by the interactive debugging and chatting, and the explanations for how breakpoints were set. In our future work, we will improve our concept and implementation, add new features, and perform more intensive user studies.
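The abstract refers to "traditional fault localization techniques" without naming one; a common representative is spectrum-based fault localization with the Ochiai metric, sketched below. This is an assumption about the family of techniques, not the authors' actual implementation:

    // Sketch of spectrum-based fault localization with the Ochiai metric:
    // statements covered mostly by failing tests get the highest suspiciousness.
    public class Ochiai {

        // failedCovering: failing tests that executed the statement
        // passedCovering: passing tests that executed the statement
        // totalFailed:    all failing tests in the suite
        public static double suspiciousness(int failedCovering, int passedCovering, int totalFailed) {
            double denom = Math.sqrt((double) totalFailed * (failedCovering + passedCovering));
            return denom == 0.0 ? 0.0 : failedCovering / denom;
        }

        public static void main(String[] args) {
            // A statement executed by both failing tests and one of nine passing tests
            // looks far more suspicious than one executed only by passing tests.
            System.out.println(suspiciousness(2, 1, 2)); // ~0.82
            System.out.println(suspiciousness(0, 9, 2)); // 0.0
        }
    }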
... Students, and users more generally, can take part in this process at various levels, reporting errors, suggesting improvements, and sharing feedback on the output generated by the AI (McCauley et al., 2008). ...
Article
GENERATIVE ARTIFICIAL INTELLIGENCE: RISKS AND OPPORTUNITIES IN EDUCATION. THE «COUNSELORBOT» PROJECT FOR TUTORIAL SUPPORT
Abstract: Artificial intelligence (AI) is progressively entering educational practices, influencing teaching and learning methodologies. Its use raises issues of various kinds – educational, methodological-didactic, and ethical – highlighting risks such as discrimination, increasing technological dependency, and the uncontrolled generation of inaccurate and hardly recognizable content due to biases in datasets. The European Union, through the AI Act, has classified the use of AI in education as «high risk», emphasizing the importance of a cautious and responsible approach. This article analyzes the opportunities offered by AI and the strategies to mitigate its risks, examines the AI Act, and presents the «Counselor-Bot» project, a concrete example of an application designed to support students in their educational and career guidance paths.
... Debugging, a key computing practice (Grover and Pea, 2013; Lodi & Martini, 2021; Shute et al., 2017), is the process of finding errors and fixing them in computing systems (McCauley et al., 2008). We designed context-specific scenarios with problems in e-textile artifacts that were presented to students as projects that someone else had designed. In a way similar to Simon et al.'s (2008) instruments, but using e-textiles, we showed examples of "failure artifacts" that only partially worked. These were shared with participants over video conference as pictures along with the creators' intentions for the projects. ...
... At the same time, observing students' capacity to articulate multiple causes for potential bugs across domains is key, as this is a documented difference between novice and more experienced debuggers (Kim, 2018; Michaeli, 2020). Being able to capture students' systematic testing and the development of systematic strategies for troubleshooting is also crucial, as these are key aspects of troubleshooting that novices struggle with (Böttcher et al., 2016; McCauley et al., 2008). These aspects capture students' development in understanding of the problem space, as well as the repertoire of previous troubleshooting experiences they drew upon. ...
Preprint
Full-text available
Purpose: The purpose of this paper is to examine how a clinical interview protocol with failure artifact scenarios can capture changes in high school students' explanations of troubleshooting processes in physical computing activities. We focus on physical computing since finding and fixing hardware and software bugs is a highly contextual practice that involves multiple interconnected domains and skills. Approach: We developed and piloted a "failure artifact scenarios" clinical interview protocol. Youth were presented with buggy physical computing projects over video calls and asked for suggestions on how to fix them without having access to the actual project or its code. We applied this clinical interview protocol before and after an eight-week-long physical computing (more specifically, electronic textiles) unit. We analyzed matching pre- and post-interviews from 18 students at four different schools. Findings: Our findings demonstrate how the protocol can capture change in students' thinking about troubleshooting by eliciting students' explanations of specificity of domain knowledge of problems, multimodality of physical computing, iterative testing of failure artifact scenarios, and concreteness of troubleshooting and problem solving processes. Originality: Beyond tests and surveys used to assess debugging, which traditionally focus on correctness or student beliefs, our "failure artifact scenarios" clinical interview protocol reveals student troubleshooting-related thinking processes when encountering buggy projects. As an assessment tool, it may be useful to evaluate the change and development of students' abilities over time.
... The underlying error of documenting the logical switch but setting it incorrectly is a classic mistake. According to McCauley et al. (2008), we classify it as a "blunder or botch", which they define as "a mental typo; you know what you wanted to write, but you wrote something else" (Knuth, 1989; McCauley et al., 2008). In the taxonomy developed by Avizienis (2004), our bug is a software flaw that occurs during development, is internal to the system, is human-made, affects the software, is non-malicious, non-deliberate, is introduced accidentally, and is permanent. ...
Preprint
Full-text available
Climate models are not just numerical representations of scientific knowledge, they are also human-written software programs. As such, they contain coding mistakes, which may look mundane, but can affect the results of interconnected and complex models in unforeseen ways. These bugs are underacknowledged in the climate science community. We describe a sea ice bug in the coupled atmosphere-ocean-sea ice model ICON and its history. The bug was caused by a logical flag that was set incorrectly, such that the ocean did not experience friction from sea ice and thus the surface velocity did not slow down, especially in the presence of ocean eddies. While describing the bug and its effects, we also give an example of visual and concise bug communication. In addition, we conceptualize this bug as representing a novel species of resolution-dependent bugs. These are long-standing bugs that are discovered during the transition to high-resolution climate models due to features that are resolved at the kilometer scale. This case study serves to illustrate the value of open documentation of bugs in climate models and to encourage our community to adopt a similar approach.
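As a schematic illustration only (the variable names are hypothetical and the sketch is not in ICON's language or code base), the class of bug described, a documented logical switch set to the wrong value, can be as small as this:

    // Schematic illustration of a "logical flag set incorrectly" bug, in the spirit
    // of the case described above (names are hypothetical, not taken from ICON).
    public class SurfaceStress {

        // Documented as "apply ice-ocean drag"; accidentally left false, so the ocean
        // surface never feels friction from the sea ice above it.
        static final boolean APPLY_ICE_OCEAN_DRAG = false; // intended: true

        static double surfaceVelocity(double velocity, double dragCoefficient) {
            if (APPLY_ICE_OCEAN_DRAG) {
                return velocity * (1.0 - dragCoefficient); // slowed by ice friction
            }
            return velocity; // bug path: no slowdown, eddies stay too energetic
        }
    }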
... Previous research has identified strategies in block-based environments, such as leveraging environmental data tools to evaluate computational models, multiple reviews of output and code, and forward reasoning, i.e., examining the program line by line (Kim et al., 2018; McCauley, 2008). For example, Hutchins et al. (2021) identified the following strategies students employed when building computational models of scientific processes using a block-based environment: (1) Depth-First, i.e., multiple code construction actions without assessment actions, which can correspond to a lack of insight for breaking down a complex task into its subparts (Grover & Pea, 2018); (2) Tinkering, i.e., trying small changes in the blocks making up the executable model, which can be used to gain some understanding of code prior to making changes; (3) Multi-Visual Feedback, represented by a sequence of simulation executions, which can represent a lack of understanding if the simulations were run in rapid succession; and (4) Simulation-based Assessment, which typically involves using tools, such as plots, to understand and analyze model behavior, and in past work has been observed to represent a decomposition process, i.e., a build-and-test behavior (Basu et al., 2017; Hutchins et al., 2021). ...
Article
The incorporation of technology into primary and secondary education has facilitated the creation of curricula that utilize computational tools for problem-solving. In Open-Ended Learning Environments (OELEs), students participate in learning-by-modeling activities that enhance their understanding of science, technology, engineering, and mathematics (STEM) and computational concepts. This research presents an innovative multimodal emotion recognition approach that analyzes facial expressions and speech data to identify pertinent learning-centered emotions, such as engagement, delight, confusion, frustration, and boredom. Utilizing sophisticated machine learning algorithms, including the High-Speed Face Emotion Recognition (HSEmotion) model for visual data and wav2vec 2.0 for auditory data, our method is refined with a modality verification step and a fusion layer for accurate emotion classification. The multimodal technique significantly increases emotion detection accuracy, with an overall accuracy of 87% and an F1-score of 84%. The study also correlates these emotions with model-building strategies in collaborative settings, with statistical analyses indicating distinct emotional patterns associated with effective and ineffective strategy use in model construction and debugging tasks. These findings underscore the role of adaptive learning environments in fostering students' emotional and cognitive development.
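The fusion layer is only named in the abstract; one plausible, simplified reading is late fusion of per-modality class probabilities, sketched below with hypothetical weights and the five emotion labels listed above. This is not the authors' actual architecture:

    // Minimal late-fusion sketch: combine per-modality class probabilities
    // (e.g., from a face model and a speech model) into one prediction.
    // Weights and label order are hypothetical, not the authors' configuration.
    public class LateFusion {

        static final String[] LABELS = {"engagement", "delight", "confusion", "frustration", "boredom"};

        static int fuse(double[] faceProbs, double[] speechProbs, double faceWeight) {
            int best = 0;
            double bestScore = -1.0;
            for (int i = 0; i < LABELS.length; i++) {
                double score = faceWeight * faceProbs[i] + (1.0 - faceWeight) * speechProbs[i];
                if (score > bestScore) {
                    bestScore = score;
                    best = i;
                }
            }
            return best;
        }

        public static void main(String[] args) {
            double[] face = {0.10, 0.05, 0.60, 0.15, 0.10};   // visual model leans toward confusion
            double[] speech = {0.20, 0.05, 0.40, 0.25, 0.10}; // audio model agrees, less strongly
            System.out.println(LABELS[fuse(face, speech, 0.6)]); // prints "confusion"
        }
    }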
... Debugging is an essential skill for programming, yet there is little consistency in how it is taught [41]. A landmark review by McCauley et al. covered various educational perspectives on debugging, highlighting that it is both difficult for novices to learn and challenging for computer science educators to teach [25]. Similar work has explored common difficulties faced by students when learning debugging [12]. ...
... Despite this, both learning and teaching debugging remain challenging [41], thus novel frameworks for teaching debugging are necessary [21]. Notably, the process of fixing the bug when its location is known is easily carried out [12], but locating the bug and understanding the functionality of the buggy code are considered difficult tasks [25]. Our work focuses on automatically generating buggy codes for which students design failing test cases, practicing their skills for problem understanding, bug localization, and code comprehension. ...
Preprint
Debugging is an essential skill when learning to program, yet its instruction and emphasis often vary widely across introductory courses. In the era of code-generating large language models (LLMs), the ability for students to reason about code and identify errors is increasingly important. However, students frequently resort to trial-and-error methods to resolve bugs without fully understanding the underlying issues. Developing the ability to identify and hypothesize the cause of bugs is crucial but can be time-consuming to teach effectively through traditional means. This paper introduces BugSpotter, an innovative tool that leverages an LLM to generate buggy code from a problem description and verify the synthesized bugs via a test suite. Students interact with BugSpotter by designing failing test cases, where the buggy code's output differs from the expected result as defined by the problem specification. This not only provides opportunities for students to enhance their debugging skills, but also to practice reading and understanding problem specifications. We deployed BugSpotter in a large classroom setting and compared the debugging exercises it generated to exercises hand-crafted by an instructor for the same problems. We found that the LLM-generated exercises produced by BugSpotter varied in difficulty and were well-matched to the problem specifications. Importantly, the LLM-generated exercises were comparable to those manually created by instructors with respect to student performance, suggesting that BugSpotter could be an effective and efficient aid for learning debugging.