
Abstract

In recent years, computational thinking has been actively promoted through the K-12 curriculum, higher education, contests, and many other initiatives. Computational thinking skills are important for students' further educational and professional careers. Our focus is on computational thinking for software engineering novice students, a term meant to encompass a set of concepts and thought processes that are helpful in formulating problems and their solutions. The annually organized international challenge on informatics and computational thinking, "Bebras", has developed many tasks to promote deep thinking skills in this area. It is important to motivate students to solve various informatics or computer science tasks and to evaluate their computational thinking abilities. The paper presents a study conducted among first-year software engineering students taking a structured programming course. As an instrument to measure computational thinking, a test of internationally approved and carefully preselected tasks from the "Bebras" challenge has been suggested and validated. The correlation between the students' test results and the structured programming course results has been investigated. We conclude with a discussion and future directions to enhance the computational thinking skills of novice software engineering students.
International Journal of Engineering Education Vol. 32, No. 3(A), pp. 110, 2016
© 2016 TEMPUS Publications.
Exploration of Computational Thinking of Software
Engineering Novice Students Based on Solving
Computer Science Tasks
VLADIMIRAS DOLGOPOLOVAS
Vilnius University Institute of Mathematics and Informatics, 4 Akademijos Street, Vilnius LT-08663, Lithuania and Faculty
of Electronics and Informatics, University of Applied Sciences, 15 J. Jasinskio Street, LT-01111, Vilnius, Lithuania.
E-mail: vladimiras.dolgopolovas@mii.vu.lt
TATJANA JEVSIKOVA and VALENTINA DAGIENĖ
Vilnius University Institute of Mathematics and Informatics, 4 Akademijos Street, Vilnius LT-08663, Lithuania.
E-mail: tatjana.jevsikova@mii.vu.lt, valentina.dagiene@mii.vu.lt
LORETA SAVULIONIENĖ
Faculty of Electronics and Informatics, University of Applied Sciences, 15 J. Jasinskio Street, LT-01111, Vilnius,
Lithuania.
E-mail: l.savulioniene@eif.viko.lt
Keywords: computational thinking; Bebras challenge; computer science concepts; computer engineering
education; contest; novice programming students; novice software engineering students
... Tests to evaluate CT in adults presented in the literature generally lack evidence on important aspects of validity, especially regarding the internal structure and the test content. An interesting attempt by Dolgopolovas et al. (2016) was a test developed using 10 Bebras tasks that do not require knowledge about computer science and that assessed, according to the authors: abstraction, decomposition, algorithmic thinking, evaluation, and generalization. The items allegedly also measured different concepts: binary tree modeling, depth-first-search, and automatization. ...
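To make the concepts named above more concrete, the following is a minimal illustrative sketch of a binary tree and a depth-first traversal in Python; the Node class and example tree are invented for illustration and are not taken from the original test or any Bebras task.

# Illustrative sketch only: a small binary tree and a depth-first (pre-order) traversal,
# the kind of concept the Bebras-based test reportedly touches on.
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def depth_first(node):
    """Return node values in pre-order: root, then left subtree, then right subtree."""
    if node is None:
        return []
    return [node.value] + depth_first(node.left) + depth_first(node.right)

# Example tree:      A
#                   / \
#                  B   C
#                 / \
#                D   E
tree = Node("A", Node("B", Node("D"), Node("E")), Node("C"))
print(depth_first(tree))  # ['A', 'B', 'D', 'E', 'C']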
... The authors provided item and test information curves, and reported a non-significant correlation between the test scores and the programming course grades of software engineering students. The authors stated that the test measured a homogeneous ability of CT (Dolgopolovas et al., 2016), but they did not examine whether the items' categorization in terms of components and contents was reflected in the test's dimensionality, nor did they provide evidence of whether and how the test discriminated between subjects with a background in science, technology, engineering, and mathematics (STEM) and subjects from non-STEM fields. Gouws et al. (2013) created a test to measure the CT performance of higher-education students. ...
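As a rough sketch of how such a correlation check between test scores and course grades might be run, one could compute a Pearson coefficient and its p-value as below; the scores are invented placeholders, not data from any of the studies discussed here.

# Hypothetical sketch: are CT test scores and programming course grades correlated?
# All numbers are made-up placeholders.
from scipy.stats import pearsonr

ct_scores = [12, 15, 9, 18, 14, 11, 16, 13, 10, 17]   # e.g., Bebras-based test scores
course_grades = [7, 8, 6, 9, 8, 7, 6, 8, 7, 9]        # e.g., course grades on a 10-point scale

r, p_value = pearsonr(ct_scores, course_grades)
print(f"r = {r:.2f}, p = {p_value:.3f}")
# A p-value above the chosen alpha (e.g., 0.05) would be reported as a
# non-significant correlation, as in the study discussed above.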
... This is justified by the fact that the experts' judgments showed that they mainly chose algorithmic thinking and algorithms as the first categories in both systems, and only secondarily chose other categories (see Table 5). Furthermore, previous CT test validations suggest a hierarchically superior dimension (Román-González et al., 2017), and some previous attempts allegedly evaluated one single homogeneous dimension (Dolgopolovas et al., 2016). ...
Article
This study describes the development and validation process of a computational thinking (CT) test for adults. The team designed a set of items and explored a subset of those through two qualitative pilots. Then, in order to provide validity evidence based on the test content, a team of 11 subject-matter experts coded the initial pool of items using two different systems of categories based on CT components and contents. Next, the items were piloted on a sample of 289 participants: 137 experts in CT and 152 novices. After a series of confirmatory factor analyses, a unidimensional model that represents algorithmic thinking was adopted. After further analysis of the psychometric quality of the 27 items, the 20 with excellent reliability indices were selected for the test. Thus, this study provides a tool to evaluate adults' CT: the Algorithmic Thinking Test for Adults (ATTA), which was developed according to psychometric standards. This article also reflects on the nature of CT as a construct.
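The "reliability indices" referred to above typically include internal-consistency measures such as Cronbach's alpha. The following is a minimal sketch of how that coefficient is computed; the small response matrix is a made-up placeholder, not data from the ATTA study.

# Minimal sketch of Cronbach's alpha, a common internal-consistency (reliability) index.
# Rows are respondents, columns are items; the values below are invented.
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """responses: 2-D array of item scores, shape (n_respondents, n_items)."""
    k = responses.shape[1]                          # number of items
    item_vars = responses.var(axis=0, ddof=1)       # variance of each item
    total_var = responses.sum(axis=1).var(ddof=1)   # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
])
print(round(cronbach_alpha(responses), 2))  # higher values indicate more consistent items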
... However, the tests focused on informatics (i.e., information, computing, and data processing; Bilbao et al., 2014), which tended to be peripheral elements to CT (Román-González et al., 2017a). Additionally, although the tests have been widely applied in research studies (e.g., Bell et al., 2011; Chiazzese et al., 2019; Dolgopolovas et al., 2016), information about the test validation processes was limited (Dagiene & Stupuriene, 2016). More recently, the Interactive Assessment of CT (IACT) was developed for 3rd-8th graders to assess CT in a game-based learning context (Rowe et al., 2021). ...
Article
Computational thinking (CT) has permeated primary and early childhood education in recent years. Despite the extensive effort in CT learning initiatives, few age-appropriate assessment tools targeting young children have been developed. In this study, we proposed the Computational Thinking Test for Lower Primary (CTtLP), designed for lower primary school students (aged 6–10). Based on the evidence-centred design approach, a set of constructed-response items independent of programming platforms was developed. To validate the test, content validation was first performed via expert review and cognitive interviews, and refinements were made based on the comments. Then, a large-scale field test was administered with a sample of 1st–3rd graders (N = 1225), and the data were used for psychometric analysis based on both classical test theory (CTT) and item response theory (IRT). The CTT results provided robust criterion validity, internal consistency, and test–retest reliability values. Regarding the IRT results, a three-parameter logistic model was selected according to the item fit indices, from which fair item parameters and test information reliability were generated. Overall, the test items and the whole scale showed proper fit, suggesting that the CTtLP was a suitable test for the target group. Analyses of test performance were then carried out. Results showed that students' performance improved with grade level, and no gender difference was detected. Based on the test responses, we also identified children's challenges in understanding CT constructs: students tended to have difficulty understanding loop control and executing multiple directions. The study provides a rigorously validated diagnostic test for measuring CT acquisition in lower primary school students, demonstrates a replicable design and validation process for future assessment practices, and offers findings on the difficulties children face in CT conceptual understanding that could inform CT instruction in primary and early childhood education.
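The three-parameter logistic (3PL) model mentioned above gives the probability of a correct response as a function of ability. The sketch below implements that standard formula with illustrative parameter values chosen for demonstration only; they are not estimates from the CTtLP data.

# Minimal sketch of the three-parameter logistic (3PL) IRT model.
# a: discrimination, b: difficulty, c: pseudo-guessing; values below are illustrative.
import math

def p_correct(theta: float, a: float, b: float, c: float) -> float:
    """Probability that a person with ability theta answers the item correctly."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

for theta in (-2, -1, 0, 1, 2):
    print(theta, round(p_correct(theta, a=1.2, b=0.0, c=0.2), 2))
# Low-ability examinees stay near the guessing floor c; the curve rises around
# the difficulty b at a rate governed by the discrimination a.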
... Therefore, the main skill is the ability to integrate different types of knowledge. Thus, assessment can be based on the improvement in students' skills and cognitive abilities, including originality, non-traditional thinking skills, computational thinking skills (Dolgopolovas et al., 2016; Juškevičiene & Dagiene, 2018; Wing, 2008), critical thinking skills (Horvath & Forte, 2011; Stein et al., 2007), problem-solving ability (Mitrevski, 2019), and the ability to synthesize and evaluate new information. ...
Preprint
Full-text available
The goal of this research is to develop a comprehensive and holistic framework focused on fostering students' scientific inquiry and research activities through scientific computing education by implementing computational experiments and simulations. The main scientific contribution of this article is a systematic approach to interdisciplinary and integrated curricula for Science, Technology, Engineering and Mathematics (STEM) education. The key interest is how to develop and apply appropriate learning resources that include software learning objects. In order to achieve the goal of the research, we systematically study the problem of developing the Technological, Pedagogical and Content Knowledge (TPACK) model for the scientific computing education domain and provide a methodology for the development and integration of relevant educational resources into the educational process. Based on this, we provide a generalized model of scientific computing education focused on students' scientific inquiry and research activities.
... Answer (the correct answer is written in bold). Source: Dolgopolovas et al. (2016). ...
... 2. How do existing cognitive and philosophical theories provide support for the design of such educational environment? We need to provide solutions for navigation and motivation of students, including different educational environments for schoolchildren (Dagienė and Sentance 2016), as well as evaluation of the acquisition of such skills (Dolgopolovas et al. 2016). ...
Article
Full-text available
[full article, abstract in English; abstract in Lithuanian] The article examines the modern computer-based educational environment and the requirements of the possible cognitive interface that enables the learner’s cognitive grounding by incorporating abductive reasoning into the educational process. Although the main emphasis is on cognitive and physiological aspects, the practical tools for enabling computational thinking in a modern constructionist educational environment are discussed. The presented analytical material and developed solutions are aimed at education with computers. However, the proposed solutions can be generalized in order to create a computer-free educational environment. The generalized paradigm here is pragmatism, considered as a philosophical assumption. By designing and creating a pragmatist educational environment, a common way of organizing computational thinking that enables constructionist educational solutions can be found.
Chapter
The relevance of computational thinking as a skill for today's learners is no longer in question, but every skill needs an assessment system. In this study, we analyze two validated instruments for assessing computational thinking: the CTt (Computational Thinking Test) and the CTS (Computational Thinking Scale). The study involved 49 students in grades 8 and 9 (age 14–16). Prior to the study, students in the two grades were taught computational thinking differently: one group learned computational thinking by completing tasks and creating projects in Scratch, the other by completing tasks in "Minecraft: Education Edition". The students were asked to take the CTt and CTS tests. The nature of these tests is different: one is a computational thinking diagnostic tool, the other is a psychometric self-assessment test consisting of core abilities (subconstructs) important for computational thinking. The aim of this study was to determine how these tests relate to each other and whether students' gender and the different tools chosen to teach computational thinking had an impact on the level of computational thinking knowledge and abilities acquired, based on the tests. The results have shown that the scores of the two tests correlated with each other only for the male students' subgroup. For the whole group, CTt scores correlated only with the CTS algorithmic thinking subconstruct. The results have also shown that teaching tools do have an impact on the acquisition of different computational thinking concepts and skills: students taught with different tools had different test results. This study provides useful implications for improving the teaching of computational thinking and for better understanding its assessment.
Keywords: Computational thinking; Assessment; Computational thinking assessment instruments; CTt; CTS; Gender differences; Learning tools
Chapter
Towards the establishment of an evaluation platform for computational thinking (CT), in this paper we use the "Bebras Challenge" coined by Dr. Dagienė as a measurement tool for CT skills. This paper presents a "triangle examination" which includes three kinds of testing methods (programming testing, traditional paper testing, and Bebras Challenges). Approximately one hundred and fifty non-computer science (CS) undergraduate students participated in the examination as part of an introductory programming course. The results indicated a weak but positive correlation (.38–.45) between the three methods. Additional qualitative analysis of each task in Bebras showed that the requirements of algorithm creation and interpretation, and the explicitness of the description, are two critical factors that determine a high correlation with the other testing methods. We conclude our research by showing a clear correlation between the Bebras Challenge and actual programming.
Keywords: Programming; Literacy; Computational Thinking; Bebras Challenge
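A rough sketch of how such pairwise correlations between the three testing methods could be computed is given below; the scores are invented placeholders rather than data from the study above, and Spearman's rank correlation is used here simply as one reasonable choice for ordinal-like grade data.

# Hypothetical sketch: pairwise rank correlations between three assessment methods.
# All scores are invented placeholders.
from scipy.stats import spearmanr

programming = [55, 70, 62, 80, 48, 90, 66, 73]
paper_test  = [60, 68, 58, 77, 52, 85, 70, 69]
bebras      = [12, 14, 11, 16, 10, 18, 13, 15]

methods = {"programming": programming, "paper": paper_test, "bebras": bebras}
names = list(methods)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        rho, p = spearmanr(methods[names[i]], methods[names[j]])
        print(f"{names[i]} vs {names[j]}: rho = {rho:.2f} (p = {p:.3f})")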