DEVELOPMENT AND APPLICATION OF S-GALL: AN ONLINE EXAMINATION SYSTEM FOR HIGHER EDUCATION IN TURKEY

Abstract

Assessment is a critical step in learning and education, especially as institutions of higher education ask lecturers for more accurate and more informative assessments of learning outcomes. The traditional methods of measuring student performance in higher education include exercises, assignments, homework, tests, and midterm or final exams, generally consisting of multiple-choice, gap-filling, and open-ended questions. Exams can be used both to assess students' performance and to repeat and reinforce the learned content. These positive outcomes can be further improved by an online testing system with an effective feedback module built on today's technologies. This study aimed to develop a convenient, secure, easy-to-use, and effective web-based online examination system (S-GALL) that provides students and lecturers with various opportunities to design, administer, and assess exams easily. The S-GALL system is explained through its system structure and functions, and each module is introduced with sample figures. Additionally, the system was updated in light of student feedback obtained through user-based designs. The study is expected to contribute to the online assessment literature, to serve as an example for decision-makers and designers, and to provide a practical alternative in the field. Future studies can separately measure system functionality, student/lecturer views and attitudes, and the effects of interface differences on achievement.
CHAPTER 5
DEVELOPMENT AND APPLICATION OF S-GALL: AN
ONLINE EXAMINATION SYSTEM FOR HIGHER
EDUCATION IN TURKEY
Assist. Prof. Kürşat ARSLAN¹, Assoc. Prof. Adnan SEMENDEROĞLU²
¹ Computer Education and Instructional Technology, Buca Faculty of Education, Dokuz Eylül University, İzmir, Turkey, kursat.arslan@deu.edu.tr, https://orcid.org/0000-0003-4680-9561
² Geography Education Department, Dokuz Eylül University, İzmir, Turkey, a.semenderoglu@deu.edu.tr, https://orcid.org/0000-0002-6039-2750
INTRODUCTION
Assessment is an essential and critical step of education (Brown, Bull
& Pendlebury, 1997) and is used to determine students' current academic
performance and the points that should be reinforced in teaching (Ghilay,
2017; Baki & Birgin, 2002). According to the Turkish Ministry of National
Education reports, there are two types of assessment in higher education:
traditional and alternative. Alternative assessment methods include student-
centered techniques such as performance assignments, projects, checklists,
self-assessment, peer assessment, group assessment, drama, role-play, word
association, and concept maps (MEB, 2018). Traditional assessment methods involve exercises, assignments, homework, tests, and midterm or final exams, with question types such as gap-filling, multiple-choice, true-false, short-answer, matching, essay, and selection of missing words. With the introduction of technology into personal and educational life, online tests have become widespread alongside paper-and-pencil exams. When a computer is used to display the test items and to record and monitor the answers, that is, when an exam (traditional or alternative) is administered in a computer-based environment, it is called a "web-based, computer-assisted, or online exam/test" (Karakaya, 2001).
Online exams are technically considered an integral part of distance education (although, in certain distance education programs, the courses are online while assessment is still done with paper-and-pencil exams); nevertheless, they are preferred as assessment tools by only a few universities and lecturers (Bull, 2001; Ünsal, 2010).
The number of studies on online testing is quite limited. In a content analysis
study by Arslan and Yetgin (2020), it was found that there are only 30
publications on online assessment methods in Turkey since 2000. It was also
observed that most of those papers dealt with teaching procedures or lecturers'
performance instead of student academic success and performance. Although
online assessment still has various limitations related to computer and internet access, reliability, cheating, and control (Sheader et al., 2006; Yağcı, Ekiz, & Gelbal, 2015), it can be an excellent alternative to paper-and-pencil tests due to advantages such as a rich item pool, easy mixing of items and options, immediate answers and feedback, automatic scoring, the use of images, audio, and video, and practical management of time and cost (Ghilay & Ghilay, 2012; Bull & McKenna, 2004; Conole & Warburton, 2005). It also has great potential for evaluating open-ended questions with artificial intelligence and for designing individualized exams and exercises for every student (Bull, 1999; Thelwall, 2000; Akın, 2007; Özturan, 2017). With the increase in internet bandwidth, computer-based exams and assessments help instructors report scores in near real time, give instantaneous personalized feedback, ensure independence of time and space, and collect data efficiently for enhancing learning and analysing quantitative performance data (Thelwall, 2000).
On the other hand, a functional online testing system can be costly
(Karakaya, 2001). Although most applications offer free services, the features that teachers and students most commonly need are generally available only for an annual or monthly fee (Ozan & Özarslan, 2010). Several open-source learning management systems are entirely free to download and use and offer many online testing features, but it is often impossible to make changes in such systems (Ozan & Özarslan, 2010).
This study aimed to develop a new online testing system, to make it widespread in university education, to assist faculty members with assessment procedures (creating, administering, and evaluating exams; sharing results with students; collecting data for analysis), and to minimize human errors. This paper describes the design, development, and application steps of the given system.
Theoretical Framework
The measurement and assessment methods frequently applied at universities in Turkey are summative, such as final exams, essays, projects, or term papers, administered in the middle or at the end of the course or semester (Çakan, 2017). Exams generally consist of multiple-choice questions or sometimes include a combination of true-false, multiple-choice, open-ended, and fill-in-the-blank questions. The frequent use of multiple-choice questions can also be turned into an advantage (Sheader et al., 2006).
In 1950, Pressey witnessed the increasing use of tests in schools and
underlined that multiple-choice tests could assess achievements and reinforce
learning (Pressey, 1950). Pressey developed a "machine for automatic
teaching" in the 1920s. The basic principle of the machine was to provide
instant feedback to the student and automatic scoring. Although Pressey's printer-like devices were slow and difficult to manage, today's computers and online testing systems provide the kind of useful feedback that, as Edward Thorndike (1927) emphasized, supports learning, and they are practical for both students and faculty members. Feedback is one of the most important interaction tools between teacher and student for creating an effective and productive learning environment. Through feedback, students' learning can be improved and strengthened, and teachers can identify shortcomings in the teaching and learning process.
With recent developments in educational technology, researchers have investigated how feedback can be delivered in modes other than the traditional written form and what effects those modes have on students. Most of this research concerns written feedback, e-feedback, audio feedback, and even video feedback (Chong, 2019). S-GALL uses the written mode of feedback. This "teacher e-feedback" mode can be defined as a system that provides students with synchronous or asynchronous personal, immediate, and useful feedback for each question, as well as for the whole exam result if necessary. Feedback for a question can be prepared in the same form for every student before the exam, or it can be written by the teacher individually after the exam in line with each student's answer to an open-ended question. The system also allows the student to respond to the teacher's feedback. In this way, effective feedback and more permanent learning can be achieved.
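A minimal sketch of how such a per-question feedback exchange might be represented is given below; the type and field names are illustrative assumptions, not the published S-GALL data model.

```csharp
using System;

// Hypothetical sketch of a per-question feedback exchange between lecturer
// and student; names are illustrative, not the actual S-GALL schema.
public enum FeedbackAuthor { Lecturer, Student }

public record FeedbackMessage(
    int ExamId,
    int QuestionId,
    int StudentId,
    FeedbackAuthor Author,    // a lecturer comment or a student reply to it
    string Text,              // prepared before the exam or written afterwards
    DateTime CreatedAtUtc);
```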
SYSTEM DESIGN
Main Principles of S-GALL Design
The system was developed according to the principles listed below, so that it not only displays the items and records student answers but also operates on the basis of various teaching theories.
Providing immediate feedback
Immediate and appropriate feedback is considered an essential component of permanent learning. According to Black and Wiliam (1998), innovations in assessment designed to give students frequent feedback on their learning produce significant learning gains. The forms of feedback most useful in evaluation are comments on the strong and weak aspects of an answer and explanations of how it can be improved (Black & Wiliam, 1998). In this sense, tests can be used to check the answers and give students feedback at the end of the exam. Since S-GALL offers students the opportunity to see the test results and review the items immediately, it contributes to learning as well as providing an online testing module.
The S-GALL system provides the following types of feedback:
- For multiple-choice, true-false, and fill-in-the-blank questions, the feedback gives the correct answer along with the reasons and, when necessary, explains why the student's choice was not correct. The feedback displayed can be adapted to the student's preferences; for example, the student may receive more detailed or more summarized feedback.
- Feedback for open-ended questions is designed to hold enough information to allow students to evaluate their own answers. This can be a model answer, or the student can evaluate the answer with the help of guiding questions such as "Which of the following points does your answer contain?". The system also allows the sharing of articles, videos, or books from external sources.
Providing both paper-and-pencil and online testing
The proposed system has a feature not found in other online evaluation systems: if it is not possible to administer a test online, the system allows the test to be printed in paper-and-pencil form. This feature was created for university faculty members who teach face-to-face but conduct their exams online. In situations such as internet connection problems, an insufficient number of computers in the lab, limited ability to use technology, or negative student attitudes towards online testing, converting the online test into a paper-and-pencil test can be a lifesaver.
Designing an easy-to-use system
An online testing system's practicality is critical for its possible effects on decision-making and the achievement of the specified goals (Karahoca et al., 2015). Thus, a simple user interface was adopted in S-GALL, and a mobile-compatible, easy-to-use, functional, and distraction-free design was preferred for the system.
System Structure
A browser/server-based online testing system was developed using
modern computer technologies, and it was called "S-GALL." Based on
DCOM technology, the system has four main modules (i.e., exam preparation,
web-based testing, automatic scoring, and feedback) and three layers (i.e.,
database, server, and client). The system layers will be introduced, and then
the modules will be summarized in the following parts.
Figure 1: The General Structure of S-GALL
Database
Microsoft SQL Server, a high-speed and robust relational data management system, was used for database management of S-GALL. .NET technology is commonly used for server modules. Microsoft SQL
Server is preferred since it is compatible with C# and ASP and offers an
interface similar to .NET technology. The database management platform
provides extensive data storage and editing capacity and high speed for test
registration, editing, and scoring procedures. The database software also
offers advanced features and security for large-scale projects (Microsoft,
2010). Registration and queries operate through ASP (Active Server Pages) pages and the SQL language. JavaScript is used to control and validate entries on client-side pages, handling simple operations before requests reach the server. In addition, HTML and CSS (including CSS3) are used to format the ASP pages. In case of data loss, errors, or hacking, the system can be manually backed up to a different computer.
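As an illustration of the ASP/SQL workflow described above, the following C# sketch stores a student's answer with a parameterized ADO.NET command; the table and column names are assumptions, not the published S-GALL schema.

```csharp
using Microsoft.Data.SqlClient;   // System.Data.SqlClient in classic ASP.NET

// Hypothetical sketch: persisting a student's answer with a parameterized
// command, the standard ADO.NET pattern against SQL Server.
static void SaveAnswer(string connectionString, int testId, int studentId,
                       int questionId, string answerText)
{
    using var connection = new SqlConnection(connectionString);
    using var command = new SqlCommand(
        "INSERT INTO Answers (TestId, StudentId, QuestionId, AnswerText, SavedAt) " +
        "VALUES (@testId, @studentId, @questionId, @answerText, GETDATE())",
        connection);

    command.Parameters.AddWithValue("@testId", testId);
    command.Parameters.AddWithValue("@studentId", studentId);
    command.Parameters.AddWithValue("@questionId", questionId);
    command.Parameters.AddWithValue("@answerText", answerText);

    connection.Open();
    command.ExecuteNonQuery();   // parameters also guard against SQL injection
}
```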
Server
Internet Information Services 7 (IIS) was used to take advantage of .NET technology and to publish the system on the internet. IIS is also compatible with ASP and Microsoft SQL Server, and its essential advantages include comprehensive user statistics and a graphical user interface (Delaney, 2000).
Client
The client is the final layer of the system and serves user requests in cooperation with the server and the database management system. Users transmit their requests to the server through a web browser; the server interprets each request and returns an HTML page to the client. S-GALL was tested on all current browsers (e.g., Internet Explorer, Edge, Chrome, Firefox, Safari) and proved to operate without problems, including on mobile devices.
Security
The online testing system can be run over the internet or on a local network without an internet connection. Since a local network is isolated from the internet, its security problems can be solved relatively easily. In contrast, an online test can face severe problems such as data transmission security, access security, data security, and user certification. Several methods can be used to address these problems, and the following security policies were adopted for S-GALL. The first is data transmission security. An SSL certificate was used to secure transmission, eliminating backdoor access to, or tampering with, the items, answers, and other materials. Data is sent encrypted between the server and the client over the HTTPS protocol (Delaney, 2000), ensuring secure data transfer within the system. The second is password security. SQL injection prevention was applied first at the software level. SQL injection is the hijacking of data through malicious SQL statements in data-driven applications; therefore, every input is scanned to block such harmful expressions. Additionally, user passwords are stored in the database hashed with the MD5 algorithm, so that even if the database is compromised, plain-text passwords are not directly exposed. S-GALL also combines its login system with hardware-based authentication to prevent unauthorized access. When a student starts a test, the system automatically generates a password, thereby preventing another user from logging in with the same username and password.
The third is user authentication. The system includes web-based face recognition and verification technology for out-of-classroom testing, which provides an additional layer of user security. Administrator approval is required to activate the recognition and verification system. The system recognizes a student's face via the computer camera and continues to monitor automatically; if the student leaves the camera's view, s/he is automatically considered to have completed the test. This feature is based on a free JavaScript library (face-api.js) running in the browser. Although it does not offer completely secure user certification, it can be used for user authentication.
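The password measures above can be illustrated with a short C# sketch. MD5 is the algorithm the chapter names (stronger password-hashing schemes are now generally recommended), and the one-time test password is assumed here to be a random code; all names are illustrative.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Illustrative sketch of the password measures described above.
static class PasswordHelpers
{
    // Store only the MD5 digest of a password, as the chapter describes.
    public static string Md5Hex(string plainText)
    {
        using var md5 = MD5.Create();
        byte[] digest = md5.ComputeHash(Encoding.UTF8.GetBytes(plainText));
        return Convert.ToHexString(digest).ToLowerInvariant();
    }

    // Random one-time code issued when a student starts a test, so the same
    // username/password pair cannot be reused by someone else.
    public static string GenerateSessionPassword(int length = 8)
    {
        const string alphabet = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789";
        var chars = new char[length];
        for (int i = 0; i < length; i++)
            chars[i] = alphabet[RandomNumberGenerator.GetInt32(alphabet.Length)];
        return new string(chars);
    }
}
```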
Functions of S-GALL
Question Adding and Editing Module
Questions of different types can be created in the system. The most widely used question formats in exams today are multiple-choice, open-ended, gap-filling, true-false, and matching, and the system supports all of these question types (Conole & Warburton, 2005; Çakan, 2017). Before entering the question text and options, the user must select the course and subject (Figure 2); the user can then easily specify the question type. For each question, the difficulty level, the answer, and supporting information (the feedback area) must be entered, so that at the end of the exam students can see the correct answer while checking their own answers and, if they answered incorrectly, benefit from the supporting information.
One of the system's most important benefits is that images, animations, sound, or video can be used in the question, the feedback, and the answer. While images can be added directly to a question, sound, animations, and videos can be embedded in the question with HTML code.
Figure 2: Multiple Choice Question Creating Form
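The fields collected by the question form can be summarized in a small data sketch; the type and property names below are assumptions made for illustration, not the actual S-GALL data model.

```csharp
// Hypothetical sketch of the information recorded for each item.
public enum QuestionType { MultipleChoice, OpenEnded, GapFilling, TrueFalse, Matching }

public record Question(
    int CourseId,
    int SubjectId,
    QuestionType Type,
    int DifficultyLevel,     // e.g. 1 = easy ... 5 = hard (scale assumed)
    string Text,             // may contain HTML to embed sound, animation, or video
    string[] Options,        // empty for open-ended items
    string CorrectAnswer,
    string Feedback);        // supporting information shown after the exam
```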
Test Preparation Module
A test can be prepared in two ways in the system. In both, the administrator/lecturer first clicks the target course to see the list of course subjects for the test. The first method automatically selects previously added items from an item pool: the lecturer determines the number of items for each subject and the difficulty level of the exam (Figure 3), and the system randomly chooses items from the item pool according to these preferences. For example, when the user
requests one item from every subject, the system randomly selects one item and marks the selected item number in bold and underline (Figure 3). If the user moves the mouse over a question number, the question appears on the screen. Although the system selects the items automatically, the user can remove any of them by clicking on them or select another item manually.
Figure 3: Test Preparation Screen
In the second method, the administrator creates a test by manually selecting the items in each subject. A lecturer can also pick items individually or combine manual selection with the automatic method described above.
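A minimal sketch of the automatic selection method, reusing the Question sketch from the previous section, is shown below; the random draw and difficulty matching are assumptions about how such pool-based selection could work, not the S-GALL source code.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative pool draw: pick the requested number of items per subject at
// the requested difficulty level, in random order.
static List<Question> BuildTest(IEnumerable<Question> pool,
                                IDictionary<int, int> itemsPerSubject, // subjectId -> count
                                int difficultyLevel)
{
    var random = new Random();
    var selected = new List<Question>();

    foreach (var (subjectId, count) in itemsPerSubject)
    {
        var picked = pool
            .Where(q => q.SubjectId == subjectId && q.DifficultyLevel == difficultyLevel)
            .OrderBy(_ => random.Next())   // shuffle the candidates
            .Take(count);                  // keep as many as requested
        selected.AddRange(picked);
    }
    return selected;
}
```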
The system also allows the lecturer to exclude items from the item pool while preparing a test. For example, multiple-choice, open-ended, gap-filling, or matching questions, or items belonging to a previous test, can be removed from the question pool for a new test, either one at a time or several at once. This prevents the repetition of the same items in a second test within the same semester. After selecting the items and the difficulty level, the administrator can label and save the test and is then directed to a page containing the test settings, where the lecturer determines all features of the test (Figure 4).
Figure 4: Changing Testing Properties
Test Code and Title
The system generates a 32-character code for each test, such as "9f376c21c2ffc55bcdf195922890fbce". This unique code gives authorized persons access to all files and data related to the test in the database, and it appears only in the browser address bar. As shown in Figure 4 (Number 1), the lecturer can also give the test a title and add any necessary explanations, formatting the text as s/he wishes.
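A 32-character hexadecimal code of this form can be produced, for instance, from a GUID rendered without dashes; this is an assumption about how such a code could be generated, not the confirmed implementation.

```csharp
using System;

// A GUID formatted with "N" yields 32 lowercase hexadecimal characters,
// e.g. "9f376c21c2ffc55bcdf195922890fbce".
string testCode = Guid.NewGuid().ToString("N");
```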
Testing Time and Duration
The lecturer enters the test time in field number 2 in Figure 4. The test starts in synchronization with the server time, and the user is given a specific start time and duration (Number 3). Because the server sets the test time, any changes made on the client machine during the test do not influence the procedure.
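The server-side timing rule can be sketched as follows; the method and parameter names are illustrative.

```csharp
using System;

// Remaining time is computed from the server clock only, so changing the
// clock on the client machine has no effect on the exam duration.
static TimeSpan GetRemainingTime(DateTime examStartUtc, int durationMinutes)
{
    DateTime endUtc = examStartUtc.AddMinutes(durationMinutes);
    TimeSpan remaining = endUtc - DateTime.UtcNow;   // UtcNow is read on the server
    return remaining > TimeSpan.Zero ? remaining : TimeSpan.Zero;
}
```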
Test Type and Percentages
The administrator can design a purely multiple-choice test or add different question types; that is, the test can include both multiple-choice and open-ended questions. A percentage must be specified for such mixed tests (Figure 4 - Number 9). For example, open-ended questions may constitute 40% of the test score and multiple-choice questions the remaining 60%. The system then scores the test automatically.
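The weighted total can be illustrated with a one-line calculation: with the 40/60 split above, a student scoring 75 on the open-ended part and 80 on the multiple-choice part would receive 0.4 * 75 + 0.6 * 80 = 78. The function below is a sketch, not the S-GALL scoring code.

```csharp
// Illustrative weighted total for a mixed test (weights sum to 1).
static double WeightedScore(double openEndedScore, double multipleChoiceScore,
                            double openEndedWeight = 0.4)
{
    double multipleChoiceWeight = 1.0 - openEndedWeight;
    return openEndedWeight * openEndedScore + multipleChoiceWeight * multipleChoiceScore;
}
```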
Item Presentation
One of the most important advantages of online tests is the delivery of test items in a different order for every student. If required, the system also allows the answer options to be mixed, and this is effortless to set up. If the lecturer activates the "mix items" (Figure 4 - Number 6) and "mix options" (Figure 4 - Number 7) settings, both items and options are presented in a different order for every student. Students can also be allowed to see only one item at a time or all items simultaneously (Figure 4 - Number 4), and the lecturer can give students the right to answer each question only once, so that an answered item cannot be revised. These settings serve to increase testing security and prevent cheating. Nevertheless, they are not compulsory for every test, and the administrator can activate any of them according to the testing terms and procedures.
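One common way to realize "mix items" and "mix options" is a Fisher-Yates shuffle seeded per student, so that each student sees a different but stable order; the seeding strategy here is an assumption made for illustration, not the documented S-GALL behaviour.

```csharp
using System;
using System.Collections.Generic;

// Illustrative per-student shuffle: the same studentId always produces the
// same order, but different students see different orders.
static List<T> ShuffleForStudent<T>(IReadOnlyList<T> source, int studentId)
{
    var random = new Random(studentId);      // deterministic seed per student
    var result = new List<T>(source);
    for (int i = result.Count - 1; i > 0; i--)
    {
        int j = random.Next(i + 1);
        (result[i], result[j]) = (result[j], result[i]);   // swap
    }
    return result;
}
```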
Monitoring Module
The system offers near-instant monitoring of student achievement (within minutes or seconds). An administrator can monitor student scores (Figure 5 - Numbers 1 and 2). If the test involves multiple-choice, true-false, or matching questions, the administrator sees "1" if an answer is correct and "0" if it is wrong. If the test contains open-ended questions, the administrator can click Number 2 in Figure 5 and see the text of the student's answer.
Figure 5: Test Monitoring Module
If the user chooses option 1, s/he sees a screen on which a correct answer is labelled "1" and a wrong answer "0". On this screen it is also possible to prevent students from reaching the questions: if the user clicks the lock button, students can no longer access them. If the user chooses option 2, s/he sees a screen on which each question can be scored out of its total points; classroom activities and similar work can also be scored here. If the user clicks option 3, s/he sees the students' final scores. At this stage, all scores can be viewed together, including additional scores and the multiple-choice and open-ended parts.
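The "1"/"0" grid for objective items can be derived as in the sketch below; the case-insensitive string comparison is an assumption made for illustration.

```csharp
using System;
using System.Collections.Generic;

// Illustrative conversion of one student's answers into the monitoring row:
// 1 for a correct objective answer, 0 for a wrong or missing one.
static int[] CorrectnessRow(IReadOnlyList<string> correctAnswers,
                            IReadOnlyList<string> studentAnswers)
{
    var row = new int[correctAnswers.Count];
    for (int i = 0; i < correctAnswers.Count; i++)
    {
        bool answered = i < studentAnswers.Count && !string.IsNullOrWhiteSpace(studentAnswers[i]);
        bool correct = answered && string.Equals(studentAnswers[i].Trim(),
                                                 correctAnswers[i].Trim(),
                                                 StringComparison.OrdinalIgnoreCase);
        row[i] = correct ? 1 : 0;
    }
    return row;
}
```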
Feedback Module
Providing feedback, the most significant advantage of this online testing system, contributes to student learning and offers lecturers the opportunity to reflect on students' learning after a test. Multiple-choice tests can serve as supportive learning tools when they provide feedback. After the test, the feedback module (Figure 6) separately shows the test results (number 1 in Figure 6), the test items (number 2), the students' answers (number 3), the correct answers (number 4), the lecturer's comments/feedback (number 5), and supportive learning resources for wrong answers (number 5).
Figure 6: Feedback Module
The module can provide feedback for all question types, including
multiple-choice questions. In this sense, appropriate feedback and resources
should be recorded for each item. However, if required, the lecturer can add
appropriate feedback and resources after the test. Students can also see their
answers and the distribution of test scores.
Test Module
As shown in Figure 7, students can access the test items with their test ID from anywhere and on any device (computer, laptop, phone, or tablet running iOS or Android), provided the lecturer has allowed access to the test. One of the system's greatest advantages is its easy-to-use, simple, and understandable interface, which has been updated several times in response to teachers' and students' feedback. The panel on the right in Figure 7 contains the student's information (name, surname, and IP address), the test item numbers, the remaining time, and the button to complete the test. Students can open any item by clicking its number, provided the administrator has activated "Allow access to all test items" in the test settings. On the exam page, students can also see the test instructions, explanations, and comments recorded by the lecturer at the top of the page; this information can be edited and updated immediately at any time if students need additional information.
Figure 7: Test Module
The open-ended question shown above is from the "Programming Languages I" midterm exam. Students are expected to enter their answer and click the "Save" button. Clicking the save button during the exam is not mandatory, but it is especially important in case of technical problems such as a power cut or loss of internet connection, to avoid losing marked questions or written answers. When an answer is saved in the system, the item's background turns gray and an OK icon appears in the right panel, so the unanswered items can be seen at a glance. Since student answers are saved in the database, students can continue the test with the administrator's approval after a power cut or any other problem.
Exam Security
If a student opens a different page or leaves the exam page during the test, s/he is considered to have completed the test; if time remains, s/he can continue only with the lecturer's approval. The time spent on each question is measured during the test. Every answer is saved together with the client's IP address, so the use of different IP addresses can be detected. The test items can be presented separately or together on one page, depending on the lecturer's preference. Presenting the items separately was found to prevent cheating considerably, as each student sees one item at a time and it is difficult to locate that item in the list even if it appears on another student's screen.
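Because every saved answer carries the client's IP address, attempts that used more than one address can be flagged for review; the sketch below illustrates such a check with assumed record and field names.

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical log entry: one row per saved answer, including the client IP.
public record AnswerLog(int StudentId, int QuestionId, string IpAddress);

public static class IpAuditing
{
    // Flag students whose answers arrived from more than one IP address.
    public static IEnumerable<int> StudentsWithMultipleIps(IEnumerable<AnswerLog> logs) =>
        logs.GroupBy(log => log.StudentId)
            .Where(group => group.Select(l => l.IpAddress).Distinct().Count() > 1)
            .Select(group => group.Key);
}
```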
CONCLUSION AND IMPLICATIONS
The effective use of technology in education has led paper-and-pencil
tests to evolve into web-based assessment systems. Assessment and
evaluation are essential and indispensable parts of education. According to
Hughes (2015), teachers should consider three points in an assessment tool:
validity (the test measures the target content), reliability (the correct
measurement of the target content), and practicality (the efficiency of a test in
terms of timing and application). Today, valid and reliable tests are commonly
used thanks to the item pools in online assessment. Web-based systems offer a
valuable testing opportunity in practical and financial terms, compared to
paper-and-pencil tests. Additionally, they can provide an instructional
contribution that is not available in paper-and-pencil tests. Pressey (1950) stressed that tests are essential tools for assessment and for reinforcing the learned material, which is one of the most significant instructional advantages of online testing systems. Another benefit is the digital support for different media in the items: various media such as video, pictures, graphics, and sound can be integrated into a test item. Students also have the chance to learn their test results immediately. However, it would not be fair to view online testing systems only from the students' perspective. Online testing systems substantially reduce the burden on faculty members, so that lecturers can deal with students' concerns about the test and find the opportunity to improve other aspects of their work.
In this regard, the current paper introduces the design, development, and implementation steps of S-GALL, an online testing system that offers a simple and practical interface and a useful feedback module. Unlike other online assessment systems developed within doctoral studies in Turkey, S-GALL is a testing system in regular use: it has been used by four lecturers and more than 2000 students at Dokuz Eylül University, and almost 100 online tests have been carried out smoothly in the system so far. A critical requirement for online testing systems is a design that keeps problems to a minimum. The safety of test items and student information is essential for database security, and such problems were addressed in S-GALL using both theoretical and practical knowledge of database and test security. With its simple interface, S-GALL resolves problems frequently observed in online assessment systems, such as complicated test instructions or items, for both students and lecturers. Improvements in the interface design of S-GALL are made on the basis of user feedback. S-GALL also has a test printing module, which is not included in other online testing systems: if necessary, the system offers the opportunity to print out the test. This module was integrated into the system in response to user requests and experience.
In conclusion, this study introduced an online testing system that prioritizes feedback, data security, and user experience, offers an easy and straightforward interface, is compatible with both desktop and mobile devices, and supports different item types. S-GALL is expected to contribute to the literature on online assessment systems in higher education in Turkey and to serve as a good alternative and model for decision-makers and designers. Future studies are planned to address the practicality of the system, student/lecturer opinions and attitudes, and the effect of interface differences on academic success.
REFERENCES
Akın, O. (2007). Web tabanlı sınav sistemi (Unpublished master’s thesis). Sakarya
Üniversitesi, Fen Bilimleri Enstitüsü, Sakarya.
Arslan, K., & Yetgin, G. (2020). Çevrimiçi değerlendirme sistemlerinin eğitimde
kullanımı: bir içerik analizi. Turkish Studies-Educational Sciences, 15(2),
651-671.
Baki, A., & Birgin, O. (2002). Matematik eğitiminde alternatif bir değerlendirme olarak bireysel gelişim dosyası uygulaması. 5. Ulusal Fen Bilimleri ve Matematik Eğitimi Kongresi. Ankara: ODTÜ.
Brown, G., Bull, J., & Pendlebury, M. (1997). Assessing student learning in higher education. Routledge.
Bull, J. (1999). Computer-assisted assessment: Impact on higher education
institutions. Journal of Educational Technology & Society, 2(3), 123-126.
Bull, J. (2002). Implementation and evaluation of computer-assisted assessment: Final report.
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in
Education: principles, policy & practice, 5(1), 7-74.
Çakan, M. (2017). Eğitim sistemimizde yaygın olarak kullanılan sınav türleri. Pegem
Atıf İndeksi, 87-122
Chong, S. W. (2019). College students’ perception of e-feedback: a grounded theory
perspective. Assessment & Evaluation in Higher Education.
Conole, G., & Warburton, B. (2005). A review of computer-assisted assessment. ALT-
J, 13(1), 17-31.
Crisp, V., & Ward, C. (2008). The development of a formative scenario-based computer assisted assessment tool in psychology for teachers: The PePCAA project. Computers & Education, 50(4), 1509-1526.
Delaney, K. (2000). Inside Microsoft SQL Server 2000. Microsoft Press.
Ghilay, Y., & Ghilay, R. (2012). Student evaluation in higher education: A comparison between computer assisted assessment and traditional evaluation. Journal of Educational Technology, 9(2), 8-16.
Ghilay, Y. (2017). ODL: Online distance learning of quantitative courses in higher education. Advances in Social Sciences Research Journal, 4(18), 62-72. https://doi.org/10.14738/assrj.418.3698
Hang, B. (2011). The design and implementation of on-line examination system,
Proceedings of the International Symposium on Computer Science and
Society (ISCCS), (pp. 227-230). doi:10.1109/ISCCS.2011.68.
Hughes, A. (2003). Testing for language teachers (2nd ed.). Cambridge, England: Cambridge University Press.
Karahoca, A., Karahoca, D., & Günoğlu, S. (2009). Web tabanlı sınav otomasyon
sisteminin kullanılabilirlik analizi. Ulusal Yazılım Mühendisliği
Sempozyumu.
Karakaya, Z. (2001). Development and implementation of on-line exam for a programming language course (Master's thesis). METU, December 2001.
Millî Eğitim Bakanlığı (MEB). (2018). Güçlü yarınlar için 2023 eğitim vizyonu.
Ankara: MEB.
Ozan, Ö., & Özarslan, Y. (2010). eFront Öğrenme Yönetim Sistemi. Akademik
Bilişim, 345-349.
Özturan, T. (2016). Bilgisayar temelli ölçme-değerlendirmenin İngilizce öğretmen adaylarının sınav başarısı ve tutumu üzerine etkisi (Unpublished master's thesis). Hacettepe Üniversitesi, Fen Bilimleri Enstitüsü, Ankara.
Pressey, S. L. (1950). Development and appraisal of devices providing immediate automatic scoring of objective tests and concomitant self-instruction. Journal of Psychology, 30, 417-447.
Sheader, E., Gouldsborough, I., & Grady, R. (2006). Staff and student perceptions of
computer-assisted assessment for physiology practical classes. Advances in
Physiology Education, 30(4), 174-180.
Thelwall, M. (2000). Computer-based assessment: A versatile educational tool. Computers & Education, 34(1), 37-49.
Thorndike, E. L. (1927). The law of effect. The American Journal of Psychology, 39(1/4), 212-222. https://doi.org/10.2307/1415413
Ünsal, H. (2010). Yeni bir öğrenme yaklaşımı: Harmanlanmış öğrenme. Milli Eğitim
Dergisi, 185, 130-137.
Yağcı, M., Ekiz, H., & Gelbal, S. (2015). Yeni bir çevrimiçi sınav modeli geliştirilmesi ve uygulanması. Journal of Kirsehir Education Faculty, 16(1).
Zhang, Z. V., & Hyland, K. (2018). Student engagement with teacher and automated
feedback on L2 writing. Assessing Writing, 36, 90-102.