
Students’ Perceptions of Paper-Based vs. Computer-Based Testing in an Introductory Programming Course

João Paulo Barros
Polytechnic Institute of Beja, Beja, Portugal
UNINOVA-CTS, Monte de Caparica, Portugal
Keywords: Assessment, Tests, Exams, Lab Exam, Programming, CS1.
Abstract: This paper examines students' preferences regarding computer-based versus paper-based assessment in an introductory computer programming course. Two groups of students were surveyed about their preference between paper-based and computer-based tests and the respective rationale. All students had already been assessed: one group using two paper-based tests and the other group using two computer-based tests. Both groups expressed an overwhelming preference for computer-based tests, independently of their previous programming experience. We conclude that, from the students' point of view, computer-based tests should be used rather than paper-based ones in introductory programming courses. This adds to the existing literature on computer-based testing of programming skills.
1 INTRODUCTION
The teaching and assessment of programming skills remain important and difficult topics, as demonstrated by the continuing large number of articles on the subject, e.g. (Gómez-Albarrán, 2005; Pears et al., 2007; Bain and Barnes, 2014; Vihavainen et al., 2014; Chetty and van der Westhuizen, 2015; Silva-Maceda et al., 2016). Furthermore, the use of computer-based tests has also been the subject of some research, but typically in the context of learning results, e.g. (Barros et al., 2003; Bennedsen and Caspersen, 2006; Lappalainen et al., 2016). Despite the preference students often demonstrate towards computer-based tests, anecdotal evidence and some published work indicate that paper-based tests are still widely used in introductory computer programming courses, e.g. (Simon et al., 2012; Sheard et al., 2013). This is probably due to tradition, fraud prevention, and the additional human and physical resources needed to properly apply computer-based tests, e.g. (Bennedsen and Caspersen, 2006). This paper presents the results of a study in which two groups of students in an introductory programming course were surveyed about their preferences regarding both types of tests and the reasons why they prefer one to the other. Both groups had already completed two tests: one group completed computer-based tests, the other completed paper-based tests. The results provide additional guidance for anyone deciding which type of assessment to use.

The paper has the following structure: Section 2 presents the course in which students were assessed, the related work, and the hypotheses that motivated the study; Section 3 presents the methodology used and characterises the participants; Section 4 discusses the results; and Section 5 concludes.
2 BACKGROUND
This section presents the course, the structure of the tests, the related work, and the research questions that we set out to answer.
2.1 Course Content and Structure
The course is the first programming course and is part of two computer science degrees in a small higher education school. The course uses an objects-early approach, the Java programming language (Java, 2017), and the BlueJ IDE (BlueJ, 2016; Kölling et al., 2003). First, students learn numeric types, arithmetic expressions, variables, constants, and the use of mathematical functions, by analogy with a scientific calculator. Afterwards, they apply conditionals, loops, and vectors to make more complex calculations. Finally, they use graphical objects and recursion.
The grading is based on individual tests, and each test can improve the grade of the previous one, i.e., a better second-test grade replaces the first one. The same applies to each subsequent test (see, e.g., Barros et al., 2003; Barros, 2010).
2.2 Tests
The paper-based and the computer-based tests had an identical structure and content: students had to write small functions to compute numerical values, or to write number or text patterns using loops. For the paper-based tests, the grading criteria were extremely tolerant regarding syntax errors, and students only had to write the core functions (no need to write the main method or imports). Even simple output errors incurred only a small penalty. For the computer-based tests, students had to submit code without compilation errors. Code with compilation errors received a zero mark, just like non-delivered code. Correct output was the main criterion: wrong output implied a strong penalty, even when the logic was nearly correct.
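For illustration only (these examples are not taken from the actual tests, and the class and method names are assumptions), the following Java sketch shows the kind of small tasks described above: a short numeric function and a loop-based text pattern. On the paper-based tests, students only had to write the core methods; on the computer-based tests, the whole submission had to compile.

public class ExamTasks {

    // Hypothetical task 1: compute the average of the positive values in a vector.
    public static double positiveAverage(double[] values) {
        double sum = 0;
        int count = 0;
        for (double v : values) {
            if (v > 0) {
                sum += v;
                count++;
            }
        }
        return count == 0 ? 0 : sum / count;
    }

    // Hypothetical task 2: print a triangular pattern of asterisks with the given height.
    public static void printTriangle(int height) {
        for (int row = 1; row <= height; row++) {
            for (int col = 0; col < row; col++) {
                System.out.print("*");
            }
            System.out.println();
        }
    }

    public static void main(String[] args) {
        System.out.println(positiveAverage(new double[] {1.0, -2.0, 3.0})); // prints 2.0
        printTriangle(3); // prints a three-line triangle of asterisks
    }
}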
2.3 Related Work
The importance of computer-based assessment in introductory programming has been recognised for quite some time. Daly and Waldron concluded that computer-based tests (lab exams) are more accurate assessors of programming ability than written exams or programming assignments (Daly and Waldron, 2004). Yet, it is also known that their effective application is more demanding than paper-based assessment. This is attested by Bennedsen and Caspersen (Bennedsen and Caspersen, 2006), where students are assessed by a computer-based test, but in small groups, with two teachers in the room, and for only 30 minutes. (Barros et al., 2003) concluded that computer-based tests are effective at increasing student motivation, even compared with group assignments. (Lappalainen et al., 2016) found that, for a specific programming problem, when students were allowed to use the computer to continue a paper-based test, they were able to correct remaining errors in their programs. (Grissom et al., 2016) found that students who took a computer-based exam to write a recursive solution to a binary tree operation were more successful than those who took the paper-based exam (58% vs. 17% correct solutions). Rajala et al. present the adaptation of automatically assessed electronic exams and note that computer-based exams have potential benefits for students, including, for example, the possibility to compile, test, and debug the program code. They recommend computer-based exams to other educators as well (Rajala et al., 2016).
Next, we present the research questions.
2.4 Research Questions
The research questions were motivated by anecdotal evidence, as students seemed to almost always, with very few exceptions, prefer computer-based tests. Also, due to insufficient human and physical resources, we were forced to apply paper-based tests to one group of students, while the remaining students in the same course completed computer-based tests. Hence, we decided to ask both groups of students what kind of tests they prefer. Then, with the intention of exposing students to the perceived advantages of each type of test, they were asked to select from a list the advantages of each approach. Each student could also point out additional advantages for one or both approaches. Thus, the research questions were the following:

RQ1 Do students prefer computer-based tests over paper-based tests?

RQ2 What are the perceived advantages students find in each type of test?

RQ3 Does students' opinion change after being confronted with a list of possible advantages of each type of test?

The third research question (RQ3) was assessed by asking the first one (RQ1) before and after students were asked to point out the perceived advantages of each type of test.
3 METHOD OF STUDY
The method of study was an anonymous questionnaire. All students were invited to complete it. The invitation was made via a post on the course forum (delivered by email). There were two additional reminders to answer the questionnaire, which had a three-day deadline. The students were divided into two groups (A and B) and the same questionnaire was applied to both:

Group A The students who had completed computer-based tests;

Group B The students who had completed paper-based tests.

First, students were asked about their previous programming experience, to allow checking for possible differences in preferences between them. Then, the following slider scale was used. An even number of options was used to force the respondents to choose
between paper-based and computer-based testing, but
in a non-binary way:
Question: To what extent do you prefer tests to be taken on paper or on computer?

Answer: a value from 1 (I have a very strong preference for paper-based tests) to 10 (I have a very strong preference for computer-based tests).
This slider scale provided the answer to RQ1. In the following question, students were asked to select items that contributed to the rationale for their preference, thus providing data for RQ2. From now on, we call that question the "rationale question". After making this selection (i.e., answering the rationale question), students were asked the same question with the same slider scale. This provided the data for RQ3.
4 RESULTS AND DISCUSSION
The populations for Groups A and B had sizes of 92 and 21, respectively. The total number of responses was 35 (38%) for Group A and 16 (76%) for Group B. We used those responses as the two samples: Group A and Group B.
Figure 1 shows the frequencies for the preferences from 1 (strong preference for paper-based tests) to 10 (strong preference for computer-based tests), before and after the rationale question. It is very clear that both groups prefer computer-based tests. Interestingly, the group of students who in fact completed computer-based tests is even more in favour of that type of test.

After the rationale question, it is possible to observe a slight decrease in the highest preferences for computer-based tests. Table 1 presents the mode and median for all the students in Group A and Group B, before and after the rationale question, and makes more evident that only the students who completed paper-based tests became slightly less critical of those tests after answering the rationale question. Possibly, this is due to increased awareness of the perceived relative disadvantages of computer-based tests, resulting from pondering the advantages of paper-based tests in the rationale question.

Tables 2 and 3 additionally show that this change, although very weak, is more pronounced in students with previous programming experience, even for those who had completed computer-based tests.
Figure 2 shows the percentage of students in each group that selected each of the items in the rationale question. The prefixes "C" and "P" identify alleged advantages of computer-based and paper-based tests, respectively. Students could also add other reasons, but only two students, both from Group B (paper-based tests), used that possibility: one said that "with paper everything stays in the head"; this student was the strongest supporter of paper-based tests (having answered 1 both times) and had no previous programming experience. Another student, this time a strong supporter of computer-based tests, added that "in the computer I can add variables I had forgotten to add before".
It is very clear that the advantages of computer-based tests are much more frequently pointed out. This is especially relevant as the question was worded as "Check what are, in your opinion, the advantages of paper-based tests and computer-based tests". Hence, the students are much less willing to recognise the advantages of paper-based tests. Apparently, the preference for computer-based tests goes to the point of demotivating students from selecting the advantages of paper-based tests. In fact, even obvious advantages of paper-based tests, such as "P - in the paper there is no risk of a computer malfunction", were chosen by only 19% of Group A and 31% of Group B.

Interestingly, the possibility of "copying code that it is possible to bring" to a computer-based test (in the context of an open-book test) is arguably a disadvantage of computer-based tests, as students, especially weaker ones, tend to simply copy-paste some code and then try to solve the problem by trial and error. In simple problems they can even succeed without really understanding why or how the program works.
Finally, it is important to note that the significant difference in sample sizes (Group A and Group B) and in response rates is an important limitation of this study. Besides larger and more similar group sizes, a more detailed characterisation of the students' background would be desirable. Yet, this might require a non-anonymous questionnaire.
5 CONCLUSION
The study allowed us to conclude that the students in our sample have an overwhelming preference for computer-based tests, to the point that they tend to resist recognising the advantages of paper-based tests. Students also largely maintain their preference even after going through a list of advantages of one type of testing over the other. We believe this strong preference for computer-based tests has a significant effect on students' motivation. In that sense, our study reinforces previous ones that pointed out the learning advantages of computer-based tests, e.g. (Daly and Waldron, 2004; Bennedsen and Caspersen, 2006).
Figure 1: Percentage of chosen values on the scale from 1 (strong preference for paper-based tests) to 10 (strong preference for computer-based tests).

Table 1: Expressed preferences for paper-based versus computer-based tests, before and after choosing relative advantages.

Table 2: Expressed preferences for paper-based versus computer-based tests, before and after choosing relative advantages, by students with previous programming experience.

Table 3: Expressed preferences for paper-based versus computer-based tests, before and after choosing relative advantages, by students without previous programming experience.

Figure 2: Percentage chosen for each item in the rationale question.
As future work, we intend to ask third-year students, and also older students already in the workplace, about their preferences regarding paper-based vs. computer-based tests. Also, it would be interesting to look for possible correlations between test performance and test-style preferences. Finally, an interesting alternative approach would be to have all students experience both test styles, in different orders, before asking about their preference.
REFERENCES
Bain, G. and Barnes, I. (2014). Why is programming so
hard to learn? In Proceedings of the 2014 Conference
on Innovation & Technology in Computer Science
Education, ITiCSE ’14, pages 356–356, New York,
NY, USA. ACM.
Barros, J. (2010). Assessment and grading for CS1: To-
wards a complete toolbox of criteria and techniques.
In Proceedings of the 10th Koli Calling International
Conference on Computing Education Research, Koli
Calling ’10, pages 106–111, New York, NY, USA.
ACM.
Barros, J. P., Estevens, L., Dias, R., Pais, R., and Soeiro,
E. (2003). Using lab exams to ensure programming
practice in an introductory programming course. In
Proceedings of the 8th Annual Conference on Inno-
vation and Technology in Computer Science Educa-
tion, ITiCSE ’03, pages 16–20, New York, NY, USA.
ACM.
Bennedsen, J. and Caspersen, M. E. (2006). Assessing pro-
cess and product – a practical lab exam for an intro-
ductory programming course. In Proceedings. Fron-
tiers in Education. 36th Annual Conference, pages
16–21.
BlueJ (2016). BlueJ homepage. http://www.BlueJ.org. Ac-
cessed on 2017/02/09.
Chetty, J. and van der Westhuizen, D. (2015). Towards a
pedagogical design for teaching novice programmers:
Design-based research as an empirical determinant for
success. In Proceedings of the 15th Koli Calling Con-
ference on Computing Education Research, Koli Call-
ing ’15, pages 5–12, New York, NY, USA. ACM.
Daly, C. and Waldron, J. (2004). Assessing the assessment
of programming ability. In Proceedings of the 35th
SIGCSE Technical Symposium on Computer Science
Education, SIGCSE ’04, pages 210–213, New York,
NY, USA. ACM.
Gómez-Albarrán, M. (2005). The teaching and learning of
programming: A survey of supporting software tools.
The Computer Journal, 48(2):130–144.
Grissom, S., Murphy, L., McCauley, R., and Fitzgerald, S.
(2016). Paper vs. computer-based exams: A study of
errors in recursive binary tree algorithms. In Proceed-
ings of the 47th ACM Technical Symposium on Com-
puting Science Education, SIGCSE ’16, pages 6–11,
New York, NY, USA. ACM.
Java (2017). Java™ Programming Language.
https://docs.oracle.com/javase/8/docs/technotes/
guides/language/index.html. Accessed on
2017/12/06.
Kölling, M., Quig, B., Patterson, A., and Rosenberg, J.
(2003). The BlueJ system and its pedagogy. Com-
puter Science Education, 13(4):249–268.
Lappalainen, V., Lakanen, A.-J., and Högmander, H.
(2016). Paper-based vs computer-based exams in
CS1. In Proceedings of the 16th Koli Calling In-
ternational Conference on Computing Education Re-
search, Koli Calling ’16, pages 172–173, New York,
NY, USA. ACM.
Pears, A., Seidman, S., Malmi, L., Mannila, L., Adams, E.,
Bennedsen, J., Devlin, M., and Paterson, J. (2007).
A survey of literature on the teaching of introductory
programming. In Working Group Reports on ITiCSE
on Innovation and Technology in Computer Science
Education, ITiCSE-WGR ’07, pages 204–223, New
York, NY, USA. ACM.
Rajala, T., Kaila, E., Lindén, R., Kurvinen, E., Lokkila,
E., Laakso, M.-J., and Salakoski, T. (2016). Au-
tomatically assessed electronic exams in program-
ming courses. In Proceedings of the Australasian
Computer Science Week Multiconference, ACSW ’16,
pages 11:1–11:8, New York, NY, USA. ACM.
Sheard, J., Simon, Carbone, A., Chinn, D., Clear, T.,
Corney, M., D’Souza, D., Fenwick, J., Harland, J.,
Laakso, M.-J., and Teague, D. (2013). How difficult
are exams?: A framework for assessing the complex-
ity of introductory programming exams. In Proceed-
ings of the Fifteenth Australasian Computing Educa-
tion Conference - Volume 136, ACE ’13, pages 145–
154, Darlinghurst, Australia. Australian
Computer Society, Inc.
Silva-Maceda, G., David Arjona-Villicana, P., and Edgar
Castillo-Barrera, F. (2016). More time or better tools?
a large-scale retrospective comparison of pedagogical
approaches to teach programming. IEEE Trans. on
Educ., 59(4):274–281.
Simon, Chinn, D., de Raadt, M., Philpott, A., Sheard, J.,
Laakso, M.-J., D’Souza, D., Skene, J., Carbone, A.,
Clear, T., Lister, R., and Warburton, G. (2012). In-
troductory programming: Examining the exams. In
Proceedings of the Fourteenth Australasian Comput-
ing Education Conference - Volume 123, ACE ’12,
pages 61–70, Darlinghurst, Australia. Aus-
tralian Computer Society, Inc.
Vihavainen, A., Airaksinen, J., and Watson, C. (2014). A
systematic review of approaches for teaching intro-
ductory programming and their influence on success.
In Proceedings of the Tenth Annual Conference on
International Computing Education Research, ICER
’14, pages 19–26, New York, NY, USA. ACM.