Figure 1. An example step given by the Step Tutor, which includes a Code Comparison Panel, two Click-and-Run Buttons, and a Self-Explanation Prompt.

Source publication
Conference Paper
Full-text available
Students often get stuck when programming independently and need help to progress. Existing automated feedback can help students progress, but it is unclear whether it ultimately leads to learning. We present Step Tutor, which helps struggling students during programming by presenting them with relevant, step-by-step examples. The goal of Step Tu...

Contexts in source publication

Context 1
... flashes every 90 seconds, inspired by prior work [33], which suggests that students can become too engaged in solving a challenging problem to notice or act on their own need for help [38]. When the student clicks on the button, she sees the Step Tutor feedback window (Figure 1), which shows a carefully selected example step (explained in Section 3.2). The feedback window guides the student through learning the step in three ways, designed to promote deliberate comparison and self-reflection, to help students learn the step and learn how to apply it again in the future [44]: Comparing and running the code: At the top of the feedback window, the student sees two code snapshots, which together give a meaningful, interpretable step that a student can take to proceed towards the solution, selected by the example selection algorithm in Section 3.2. ...
Context 2
... help students achieve optimal learning, the example step should be one that the student has not completed but is ready to complete with some help, meaning one in the student's Zone of Proximal Development [49]. Our approach extends our prior work [51] and consists of the following steps: An instructor creates a database of example steps: An instructor can first create a set of example steps for a problem, each consisting of "before" and "after" code (as shown in Figure 1), representing one meaningful step in completing the solution, which ideally alters the program's output. In many problems, solution steps can be completed in various orders [52]. ...
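The pipeline described above (an instructor-authored database of "before"/"after" steps, plus a selector that picks a step the student has not yet completed) can be sketched roughly as follows. This is a minimal illustration under assumed names (`ExampleStep`, `completed`, `select_step`) and a naive line-based completion check; the paper's actual selection algorithm in Section 3.2 compares programs more carefully.

```python
# Minimal sketch of an instructor-authored example-step database and a
# naive selection heuristic. The ExampleStep model, the line-based
# completion check, and the selection rule are illustrative assumptions,
# not the paper's actual algorithm (Section 3.2).
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ExampleStep:
    name: str          # short label, e.g. "draw the square"
    before_code: str   # snapshot before the step is applied
    after_code: str    # snapshot after the step is applied


def completed(step: ExampleStep, student_code: str) -> bool:
    """Hypothetical check: has the student already written the code this step
    introduces? A real system would compare ASTs, not raw lines."""
    before_lines = set(step.before_code.splitlines())
    added = [ln.strip() for ln in step.after_code.splitlines()
             if ln not in before_lines and ln.strip()]
    return all(ln in student_code for ln in added)


def select_step(steps: List[ExampleStep], student_code: str) -> Optional[ExampleStep]:
    """Return the first authored step the student has not completed yet,
    approximating 'not done, but ready to do with some help'."""
    for step in steps:
        if not completed(step, student_code):
            return step
    return None  # every authored step is already in the student's code


if __name__ == "__main__":
    steps = [
        ExampleStep(
            name="draw the square",
            before_code="pen_down()",
            after_code="pen_down()\nfor _ in range(4):\n    move(100)\n    turn(90)",
        ),
    ]
    chosen = select_step(steps, student_code="pen_down()")
    print(chosen.name if chosen else "no step to show")
```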

Similar publications

Research
Full-text available
This research reflects on feedback as a powerful strategy when the goal is formative evaluation. In this case, we use quiz questions to support student learning of the mathematical knowledge developed in class. To ensure this support, we use quality feedback, i.e. we try to write something that reinforces th...

Citations

... (1) An in-class deployment and evaluation of LLM-generated code explanations; (2) an in-class comparison of multiple code explanation types. ... The ability to write and modify code, understand its purpose, and articulate its functionality are key skills that CS students must develop [7,26,42]. Explanations benefit students in multiple ways. Explanations help students to make connections and to develop their own understanding of how a code snippet executes [25]. ...
Preprint
Advances in natural language processing have resulted in large language models (LLMs) that are capable of generating understandable and sensible written text. Recent versions of these models, such as OpenAI Codex and GPT-3, can generate code and code explanations. However, it is unclear whether and how students might engage with such explanations. In this paper, we report on our experiences generating multiple code explanation types using LLMs and integrating them into an interactive e-book on web software development. We modified the e-book to make LLM-generated code explanations accessible through buttons next to code snippets in the materials, which allowed us to track the use of the explanations as well as to ask for feedback on their utility. Three different types of explanations were available to students for each explainable code snippet: a line-by-line explanation, a list of important concepts, and a high-level summary of the code. Our preliminary results show that all varieties of explanations were viewed by students and that the majority of students perceived the code explanations as helpful to them. However, student engagement appeared to vary by code snippet complexity, explanation type, and code snippet length. Drawing on our experiences, we discuss future directions for integrating explanations generated by LLMs into existing computer science classrooms.
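As a rough illustration of the three explanation types this study exposes (line-by-line, important concepts, high-level summary), the sketch below only assembles a prompt per style and delegates to whatever model function the caller supplies; the prompt wording and the `ask_llm` parameter are assumptions, not the prompts or API integration used in the e-book.

```python
# Illustrative sketch of requesting the three explanation types described
# above (line-by-line, important concepts, high-level summary). The prompt
# wording and the ask_llm parameter are assumptions, not the prompts or
# API integration used in the e-book study.
from typing import Callable

EXPLANATION_PROMPTS = {
    "line_by_line": "Explain the following code line by line:\n{code}",
    "concepts": "List the important programming concepts used in this code:\n{code}",
    "summary": "Give a short high-level summary of what this code does:\n{code}",
}


def explain(code: str, style: str, ask_llm: Callable[[str], str]) -> str:
    """Build the prompt for the requested explanation style and delegate to
    whatever text-generation function the caller supplies."""
    prompt = EXPLANATION_PROMPTS[style].format(code=code)
    return ask_llm(prompt)


if __name__ == "__main__":
    # Stand-in for a real model call, so the sketch runs on its own.
    fake_llm = lambda prompt: f"[model output for: {prompt.splitlines()[0]}]"
    snippet = "for i in range(3):\n    print(i)"
    for style in EXPLANATION_PROMPTS:
        print(style, "->", explain(snippet, style, fake_llm))
```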
... Programming classes are also becoming more popular in middle and high schools. The need to provide better support for the increasing fraction of less-prepared students in CS classes has motivated a number of recent attempts to create intelligent tutoring systems (ITS) that offer step-by-step problem support in CS classes [16,28,30,37]. The step-by-step approach, however, does have a few shortcomings, as having students complete each step can slow down productivity and make the process boring for better-prepared students. ...
... Likewise, selecting a mode like worked-example could lower cognitive load for less experienced students. Such an adaptive design would be similar to step-based help recently proposed in program construction tasks [37]. Additionally, interleaving worked examples and problems might make students more efficient and decrease student frustration as observed in prior research [24,25]. ...
... In this paper, we evaluate the effects of feedback that is generated automatically by static analysis. Past research on automated hint generation has mainly considered the problem of providing hints on what should be the next step in solving programming assignments [20,23,25,30,32] or open-ended programming tasks [19,21], and how novices seek help in these systems [1,15,16,22]. An exception is the work by Gusukuma et al. [6], who showed that feedback delivery on mistakes that anticipate possible misconceptions generally leads to favourable results, and that showing such hints does not harm transfer to new tasks. ...
Preprint
Bugs in learners' programs are often the result of fundamental misconceptions. Teachers frequently face the challenge of first having to understand such bugs and then suggesting ways to fix them. In order to enable teachers to do so effectively and efficiently, it is desirable to support them in recognising and fixing bugs. Misconceptions often lead to recurring patterns of similar bugs, enabling automated tools to provide this support in terms of hints on occurrences of common bug patterns. In this paper, we investigate to what extent such hints improve the effectiveness and efficiency of teachers in debugging learners' programs, using a cohort of 163 primary school teachers in training tasked with correcting buggy Scratch programs, with and without hints on bug patterns. Our experiment suggests that automatically generated hints can reduce the effort of finding and fixing bugs from 8.66 to 5.24 minutes, while increasing effectiveness, with 34% more correct solutions. While this improvement is convincing, arguably teachers in training might first need to learn debugging "the hard way", rather than relying on tools, so as not to miss the opportunity to learn. We therefore investigate whether the use of hints during training affects their ability to recognise and fix bugs without hints. Our experiment provides no significant evidence that either learning to debug with hints or learning to debug "the hard way" leads to better learning effects. Overall, this suggests that bug patterns might be a useful concept to include in the curriculum for teachers in training, while tool support to recognise these patterns is desirable for teachers in practice.
... They found that these worked examples helped students solve more problems in a fixed time [39]. Further study revealed how students used on-demand worked examples in five different ways to help them learn and make progress [37]. Next-step hints are a type of intervention that tells struggling students what the next actions are (e.g., insertion, deletion, replacement) to help them move closer to a correct solution [28]. ...
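The next-step hints mentioned here are commonly expressed as edit operations (insertion, deletion, replacement). The sketch below derives such operations with Python's standard difflib against a single target solution; this is a simplification for illustration, since research hint generators typically diff abstract syntax trees against many correct solutions.

```python
# Simplified sketch of next-step hints expressed as edit operations
# (insert, delete, replace) between the student's code and one target
# solution, using the standard-library difflib. Research hint generators
# typically diff abstract syntax trees against many correct solutions.
import difflib
from typing import List, Tuple


def next_step_hints(student_code: str, solution_code: str) -> List[Tuple[str, List[str]]]:
    """Return (operation, affected lines) pairs that would move the student's
    program closer to the solution."""
    student = student_code.splitlines()
    solution = solution_code.splitlines()
    hints = []
    for op, i1, i2, j1, j2 in difflib.SequenceMatcher(a=student, b=solution).get_opcodes():
        if op == "insert":
            hints.append(("insert", solution[j1:j2]))
        elif op == "delete":
            hints.append(("delete", student[i1:i2]))
        elif op == "replace":
            hints.append(("replace", solution[j1:j2]))
    return hints


if __name__ == "__main__":
    student = "total = 0\nfor n in nums:\n    total = n"
    solution = "total = 0\nfor n in nums:\n    total += n\nprint(total)"
    for op, lines in next_step_hints(student, solution):
        print(op, lines)
```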
... For example, some students found next-step hints to be too specific and difficult to interpret without further explanation [22]. Also, it is challenging to create good worked examples that help students move forward without revealing too much of the correct solution [37]. Moreover, as mentioned above, novices may not be able to use on-demand support effectively [19,30]. ...
... Findings: This intervention reason is similar to Ko's understanding barrier [12], which refers to cases where students thought they knew how to do something, but it did not do what they expected. Giving proactive support for these errors without careful consideration can be frowned upon for robbing students of opportunities to gain experience in debugging [37]. However, we believe that intervention is warranted when a student focuses on debugging in the wrong area of their code. ...
... Researchers have developed systems to support novices' use of code examples during programming. Many were built for closed-ended tasks [45,46], for example by offering step-by-step examples with options to immediately run the example code [45]. ...
... Many were built for closed-ended tasks [45,46], for example by offering step-by-step examples with options to immediately run the example code [45]. Some offer an online database of annotated examples [6]. ...
... The student may click on different sprites to look at the example code for each sprite (shown in Figure 1). They may also look at the animation of the output next to the example code, since reading code in relation to output has been shown to trigger students to reflect on how the example code works [45]. The student can also click on the "Open the Project" button to view the example in a separate window and experiment with it. ...
Preprint
Full-text available
Open-ended programming increases students' motivation by allowing them to solve authentic problems and connect programming to their own interests. However, such open-ended projects are also challenging, as they often encourage students to explore new programming features and attempt tasks that they have not learned before. Code examples are effective learning materials for students and are well-suited to supporting open-ended programming. However, there is little work to understand how novices learn with examples during open-ended programming, and few real-world deployments of such tools. In this paper, we explore novices' learning barriers when interacting with code examples during open-ended programming. We deployed Example Helper, a tool that offers galleries of code examples to search and use, with 44 novice students in an introductory programming classroom, working on an open-ended project in Snap. We found three high-level barriers that novices encountered when using examples: decision, search and integration barriers. We discuss how these barriers arise and design opportunities to address them.
Article
In this study, an interactive programming learning environment was built with two types of error prompt functions: 1) the key prompt and 2) the step-by-step prompt. A quasi-experimental study was conducted for five weeks, in which 75 sixth grade students from disadvantaged learning environments in Taipei, Taiwan, were divided into three groups: 1) the key prompt group, 2) the step-by-step prompt group, and 3) the group in which teachers explained the errors. The aim was to investigate the effects of different interactive methods on students' learning in programming. The results showed that the step-by-step prompt group had better learning effectiveness, self-efficacy, and problem-solving ability than the key prompt group and the group with the teacher explaining the errors. Because the students from disadvantaged learning environments were new to graphical programming, they had a low level of understanding of the program. The interactive learning environment with the step-by-step prompt, as opposed to the key prompt, made it easier for disadvantaged students to understand the problems and think about their solutions.