Abstract
Puzzles are the basic building blocks of Code Hunt contests. Creating puzzles and choosing suitable puzzles from the puzzle bank turns out to be a complex operation requiring skill and experience. Constructing a varied and interesting mix of puzzles depends on several factors. The major factor is the difficulty of the puzzle, so that the contest can build up from easier puzzles to more difficult ones. For a successful and fun contest aimed at the expected abilities of the contestants, other factors include the language features needed to solve the puzzle, the clues to provide when the puzzle is presented to the player, and the test cases to seed into the Code Hunt engine. We describe our experience with contest construction over a period of a year and provide guidelines for choosing and adjusting puzzles so that a Code Hunt contest provides a satisfying, trouble-free experience for the contestants.
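As a rough illustration (not taken from the paper), a Code Hunt-style puzzle can be thought of as a hidden reference implementation plus a handful of seeded test cases that the player sees when the puzzle starts; the class name, function, and seed values below are hypothetical:

```java
// Hypothetical sketch of a Code Hunt-style puzzle: a secret reference
// implementation that players must rediscover by matching its behavior.
public class Puzzle {
    // The hidden specification: players only see input/output pairs.
    public static int secret(int x) {
        return x * (x + 1) / 2; // sum of 1..x for non-negative x
    }

    // Seeded test cases revealed to the player as initial clues.
    public static void main(String[] args) {
        int[] seeds = {0, 1, 4, 10};
        for (int s : seeds) {
            System.out.println("secret(" + s + ") = " + secret(s));
        }
    }
}
```

In this reading, tuning a puzzle's difficulty amounts to choosing how revealing the seeded cases are and which language features the hidden function exercises.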
As part of formative and summative assessments in programming courses, students develop programming artifacts following a given specification. These artifacts are evaluated by the teachers, after which the students receive feedback and marks. Providing feedback on programming artifacts is time-demanding and can cause feedback to arrive too late to be effective for the students' learning. We propose to combine software testing with peer feedback, which has been praised as a timely and effective learning activity. In this paper we report on the development of a Web platform for peer feedback on programming artifacts through program testing. We discuss the development process of our peer-testing platform, informed by teachers and students.
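As a sketch of the peer-testing idea (the class and method names are hypothetical, not the platform's API), a reviewer can encode the assignment's specification as unit tests, here in JUnit 5 style, and run them against a peer's submission; failing tests become concrete, timely feedback:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical peer submission under review: the specification says
// reverse(s) must return the input string reversed.
class Submission {
    static String reverse(String s) {
        return new StringBuilder(s).reverse().toString();
    }
}

// The reviewing peer expresses the specification as tests.
class PeerReviewTest {
    @Test
    void reversesSimpleString() {
        assertEquals("cba", Submission.reverse("abc"));
    }

    @Test
    void handlesEmptyString() {
        assertEquals("", Submission.reverse(""));
    }
}
```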
Promoting computer science through programming is widespread all around the world. However, there are not always enough human resources to support the training and teaching of programming. At the same time, online programming contests have spread and are becoming accessible to people at large. This paper examines how online programming contests can be used to build training programs and to support the teaching of programming. The paper first reviews how programming contests can be classified. It then proposes classification criteria and applies them to a selection of existing online programming contests. Based on these criteria and this review, the paper discusses how such contests can be used to build programming training and to support teaching. Finally, the paper presents an online platform that allows people to create a contestant profile, compare themselves to other users of the platform, and discuss the contests they took part in. All this work aims at increasing people's motivation when learning to program and at promoting computer science among young people, with limited human resources and using online social connections.
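One way to picture such a classification (the specific axes and values below are illustrative assumptions, not the paper's actual taxonomy) is as a small data model over a few criteria, which makes it easy to filter contests when assembling a training program:

```java
// Illustrative contest classification (requires Java 16+ for records).
enum Duration { SHORT_ROUND, MULTI_DAY, ALWAYS_OPEN }
enum Scoring  { FIRST_TO_SOLVE, TEST_CASE_BASED, CODE_QUALITY }
enum Audience { BEGINNER, INTERMEDIATE, ADVANCED }

record Contest(String name, Duration duration, Scoring scoring, Audience audience) {}

class Classify {
    public static void main(String[] args) {
        Contest c = new Contest("Example Online Judge",
                Duration.ALWAYS_OPEN, Scoring.TEST_CASE_BASED, Audience.BEGINNER);
        System.out.println(c); // record toString shows all criteria
    }
}
```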
Pex automatically produces a small test suite with high code coverage for a .NET program. To this end, Pex performs a systematic program analysis (using dynamic symbolic execution, similar to path-bounded model checking) to determine test inputs for Parameterized Unit Tests. Pex learns the program behavior by monitoring execution traces. Pex uses a constraint solver to produce new test inputs which exercise different program behavior. The result is an automatically generated small test suite which often achieves high code coverage. In one case study, we applied Pex to a core component of the .NET runtime which had already been extensively tested over several years. Pex found errors, including a serious issue.
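To make the exploration strategy concrete (Pex itself targets .NET; this Java sketch only illustrates the principle), dynamic symbolic execution runs the program on some input, records the branch conditions along the executed path, then negates one condition and asks the constraint solver for an input that takes the other branch:

```java
// Java sketch of what a DSE engine like Pex explores. Starting from an
// arbitrary input such as classify(0), the engine records the path
// condition (x > 100 is false), negates it, asks the solver for an input
// with x > 100, and repeats until every feasible branch is covered.
class Target {
    static String classify(int x) {
        if (x > 100) {
            if (x % 2 == 0) {
                return "large even"; // reached e.g. with x = 102
            }
            return "large odd";      // reached e.g. with x = 101
        }
        return "small";              // reached e.g. with x = 0
    }

    public static void main(String[] args) {
        // The three inputs such an engine might generate, one per path:
        for (int x : new int[] {0, 101, 102}) {
            System.out.println(x + " -> " + classify(x));
        }
    }
}
```

The generated suite is small because each input is chosen to cover a path no earlier input exercised.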