ORIGINAL ARTICLE
Ubiquitous testing using tablets: its impact on medical
student perceptions of and engagement in learning
Kyong-Jee Kim and Jee-Young Hwang
Department of Medical Education, Dongguk University School of Medicine, Gyeongju, Korea
Purpose:
Ubiquitous testing has the potential to affect medical education by enhancing the authenticity of the assessment using
multimedia items. This study explored medical students’ experience with ubiquitous testing and its impact on student learning.
Methods:
A cohort (n=48) of third-year students at a medical school in South Korea participated in this study. The students were
divided into two groups and were given different versions of 10 content-matched items: one in text version (the text group) and
the other in multimedia version (the multimedia group). Multimedia items were delivered using tablets. Item response analyses were
performed to compare item characteristics between the two versions. Additionally, focus group interviews were held to investigate
the students’ experiences of ubiquitous testing.
Results:
The mean test score was significantly higher in the text group. Item difficulty and discrimination did not differ between
text and multimedia items. The participants generally showed positive responses to ubiquitous testing. Still, they felt that the lectures
that they had taken in preclinical years did not prepare them enough for this type of assessment, and that clinical encounters during
clerkships were more helpful. To be better prepared, the participants felt that they needed to engage more actively in learning in
clinical clerkships and have more access to multimedia learning resources.
Conclusion:
Ubiquitous testing can positively affect student learning by reinforcing the importance of being able to understand
and apply knowledge in clinical contexts, which drives students to engage more actively in learning in clinical settings.
Key Words: Educational measurement, Multimedia, Ubiquitous testing, Assessment
Received: September 18, 2015 Revised: November 17, 2015 Accepted: December 8, 2015
Corresponding Author: Jee-Young Hwang (http://orcid.org/0000-0003-1491-8413)
Department of Medical Education, Dongguk University School of Medicine, 123 Dongdae-ro,
Gyeongju 38066, Korea
Tel: +82.54.770.2415 Fax: +82.54.770.2447 email: hwangmd@dongguk.ac.kr
Korean J Med Educ 2016 Mar; 28(1): 57-66.
http://dx.doi.org/10.3946/kjme.2016.1028.1.57
eISSN: 2005-7288
The Korean Society of Medical Education. All rights reserved.
This is an open-access article distributed under the terms of the
Creative Commons Attribution Non-Commercial License (http://
creativecommons.org/licenses/by-nc/3.0/), which permits unrestricted
non-commercial use, distribution, and reproduction in any medium,
provided the original work is properly cited.
Introduction
Authentic assessment is central to medical licensure
examinations to ensure that examinees' clinical compe-
tencies are assessed. The adoption of computer-based
testing using multimedia such as images, sounds, and
video clips allows the presentation of clinical findings in
a more authentic and undigested format than when they
are presented in text items [1]. Multimedia-based
presentation of clinical findings in assessment items,
henceforth multimedia items, enhances the authenticity
of the assessment, which leads to more valid assessment
of examinees' clinical competencies [2]. Multimedia
items have been implemented in some medical licensure
exams such as the United States Medical Licensing
Examination and studies have shown that assessment
using multimedia items is feasible and can measure some
constructs different from what text items can measure
[2,3].
In South Korea, multimedia items are to be introduced
in the national medical licensure exam by the year 2020.
With computer-based testing not yet introduced in the
medical licensing exam in South Korea, the use of tablets
is under consideration. Using tablets for assessment is
expected to be efficient because they are ubiquitous and
thus require fewer financial resources and less space [4].
Therefore, ubiquitous testing, in which assessment is
delivered by ubiquitous computing technologies such as
smartphones and tablets, is practical in nationwide tests
such as healthcare personnel licensure exams.
The most salient change likely with the use of ubi-
quitous testing in assessment in medical education is the
adoption of multimedia items, which is expected to
influence how students learn because assessment drives
and even enhances learning [5,6]. In particular, it is
speculated that teaching and learning in clinical contexts
will be more emphasized with the adoption of multi-
media items, as examinees will need to be able to
interpret clinical findings in an authentic format similar
to those encountered in clinical practice [7]. Yet,
relatively little is known about student experience with this
new type of assessment that would offer empirical evidence
to support this assumption. Therefore, this study investi-
gated students' experience with ubiquitous testing using
tablets and its impact on their perceptions of and
engagement in learning. Our research questions were (1)
Do the item characteristics differ across text items and
multimedia items using ubiquitous testing? (2) What are
students' perceptions of the benefits and challenges of
ubiquitous testing? (3) Do students' perceptions of and
engagement in learning change after they experience
ubiquitous testing?
It is expected that the present study will help enhance our
understanding of the validity of ubiquitous testing. To
adopt an assessment tool, we need to evaluate its
validity. Yet, there is a paucity of evidence to support
the validity of multimedia items when they are delivered
in the form of ubiquitous testing. One of the standards
in establishing the validity of an educational assessment
is to collect evidence on its impact on examinees and on
teaching and learning [8]. Therefore, this study aimed to
gain more insight into the feasibility of ubiquitous
testing by exploring its impact on medical education.
Subjects and methods
A cohort of all Year 3 students (n=48) in the 4-year
medical program at Dongguk University Medical School
(DUMS) in South Korea participated in this study. The
curriculum at DUMS consists of mostly lecture-driven
preclinical courses for the first 2 years, followed by 2
years of clinical clerkships. The participants were
attending core clinical clerkships in one of the two
academic medical centers affiliated with the university
when this study was conducted. Both undergraduate-
entry (n=18) and graduate-entry (n=30) students were
included in the study cohort.
Participants were given 10 content-matched items in
either multimedia or text versions in multiple-choice
question format. These items were in the domain of
clinical knowledge for general practitioners and were
developed by one of the authors. The 10 items consisted
of two items on clinical anatomy on laparoscopic
operation (organ, artery), four items on physical
examination findings (ear, knee, brain, and muscle), two
items on real-time sonography findings (heart, fetus),
one item on chest auscultation finding (heart and lung),
and one item on a patient encounter (abdominal pain). Fig.
1 shows a sample item on auscultation in text and
multimedia formats.
The participants were randomly assigned to one of two
groups according to their placement of clinical rotations
at the time of the study.

Fig. 1. A Sample Item on Auscultation in Text (Top) and Multimedia (Bottom) Formats
Fig. 2. A Picture of a Student Taking the Ubiquitous Testing Using a Tablet

The control group (the text
group) took the test in the conventional text format and
the experimental group (the multimedia group) was
given multimedia items. Nine video clips and one sound
clip of chest auscultation were incorporated in the
multimedia items. The multimedia items were delivered
using 10.1 inch tablets with Android operating system
offered by the school's medical education committee
(Fig. 2). The ubiquitous testing system was developed by
a vendor, NSDevil (Seoul, Korea), and was equipped with
features for examinees to track the elapsed time, navigate
between items, flag skipped items, and take notes.
The multimedia group used headphones to listen to audio
clips of auscultation findings and video clips of inter-
views or physical examinations, whereas for the text
group these findings were described in text using
standard terminology.
The participants took the test at their clinical rotation
placements in September 2014. Students at DUMS
had taken tests in the pencil-and-paper format, and thus
had not encountered any multimedia items prior to
participation in this study. This was a pilot test and
students' test scores did not count towards their course
grades. Institutional Review Board (IRB) approval was
not requested for the present study, because it fell under
the general exemption from our IRB for educational
outcomes data.
Item analyses (test score, item difficulty, item discri-
mination, and response time) were performed and
Student's t-test was used to compare characteristics
between text items and multimedia ones. For baseline
comparison, participants' cumulative grade point aver-
ages through Years 1 and 2 were compared across the
two groups using Student's t-test. Additionally, distri-
butions of participant backgrounds (i.e., gender, entry
level, and age) were compared across the two groups
using chi-square analysis. IBM SPSS version 20 (IBM
Corp., Armonk, USA) was used and the significance level
was set at 0.05 for the statistical analyses.
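For illustration only, the sketch below (not the authors' analysis code; the 0/1 response matrices are hypothetical) shows how such a classical item analysis could be computed in Python: the difficulty index as the proportion of correct responses per item, discrimination as the point-biserial correlation between each item and the rest-of-test score, and the Student's t-test and chi-square comparisons used above.

# Illustrative sketch only; the response matrices below are hypothetical, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
text_resp = rng.integers(0, 2, size=(24, 10))   # 24 examinees x 10 items, 1 = correct
multi_resp = rng.integers(0, 2, size=(24, 10))

def item_difficulty(responses):
    # Difficulty index: proportion of examinees answering each item correctly.
    return responses.mean(axis=0)

def item_discrimination(responses):
    # Discrimination: point-biserial correlation between each item and the rest-of-test score.
    total = responses.sum(axis=1)
    return np.array([
        stats.pointbiserialr(responses[:, i], total - responses[:, i]).correlation
        for i in range(responses.shape[1])
    ])

# Student's t-test comparing total test scores between the two groups (cf. Table 2).
t_stat, p_value = stats.ttest_ind(text_resp.sum(axis=1), multi_resp.sum(axis=1))
# Chi-square test of a demographic distribution, e.g., the sex counts reported in Table 1.
chi2, p_sex, dof, expected = stats.chi2_contingency([[13, 14], [11, 10]])

print("difficulty:", item_difficulty(text_resp).round(2))
print("discrimination:", item_discrimination(text_resp).round(2))
print(f"score comparison: t = {t_stat:.2f}, p = {p_value:.3f}; sex distribution: p = {p_sex:.2f}")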
Additionally, we conducted focus group interviews of
the participants in the multimedia group to investigate
their experiences of ubiquitous testing. Focus group
interview is a qualitative research method for collecting
data on the phenomenon being studied by analyzing
conversations among study participants [9]. We used the
qualitative research method as it is known to be useful
in eliciting student perspectives [10]. All 24 participants
in the multimedia group took part in the focus groups to
ensure a diversity of opinions, and they were divided into
three groups small enough to ensure input from all
participants.
The interviews were conducted in a semi-structured
format with 12 questions: three related to participant
perceptions of item characteristics of multimedia items,
five to students' emotional responses to ubiquitous test-
ing, which is known to be linked to student learning and
performance (e.g., enjoyment, anxiety, and boredom)
[11], and four to changes in participant perceptions of
and engagement in learning after experiencing ubi-
quitous testing. The interview questions are presented in
Appendix 1.
The interviews were conducted 3 weeks after the
participants took the test to investigate the impact of
their experience with ubiquitous testing on their learning
behaviors. Each interview session took 40 to 45 minutes.
Although one of the authors attended the interview
sessions, he did not play an active role; one of the
participants in each group facilitated the discussion
while the author observed the sessions, to minimize his
influence on the discussions.
All the interviews were audio recorded and were
transcribed by the authors. The transcripts were analyzed
and emerging themes were identified from the data using
the thematic analysis method.
Results
1. Study participants
Table 1 shows the study participants' demographics
and the results of comparisons of baseline performance
across the two groups. There was no statistically
significant difference in participants' academic perfor-
mance or distributions of participant demographics
between the two groups.
Table 1. Cross-Tabulation of Participant Demographics and Academic Performance
Variable Text group (n=24) Multimedia group (n=24) p-value
Entry-level
Undergraduate 9 9 1.00
Graduate 15 15
Sex
Male 13 14 0.77
Female 11 10
Age 26.54±3.24 26.21±3.11 0.72
GPA 3.18±0.59 3.10±0.48 0.61
Data are presented as number or mean±standard deviation.
GPA: Grade point average.
Table 2. Mean Scores, Item Difficulty, and Item Discrimination for Text and Multimedia Items
Variable Text group (n=24) Multimedia group (n=24) p-value
Test score (10 items) 6.25±1.54 5.08±1.56 0.012
Item difficulty 0.63±0.20 0.51±0.25 0.261
Item discrimination 0.33±0.18 0.33±0.25 0.984
Data are presented as number or mean±standard deviation.
2. Item analyses
Table 2 compares item responses across the two
groups. The mean test score of the multimedia group was
significantly lower than that of the text group. The mean
item difficulty index (i.e., the proportion of correct responses)
of the text items was higher than that of the multimedia
items, and the mean item discrimination was comparable
between the two groups. Still, there were no
statistically significant differences in item difficulty and
discrimination. The mean total response time for the 10
items was longer in the multimedia group than in the
text group (14.98 minutes vs. 5.79 minutes).
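As a quick consistency check (ours, not stated in the paper), if the difficulty index is the proportion of correct responses, then the mean difficulty across the 10 items equals the mean test score divided by the number of items, which matches Table 2 up to rounding:

\bar{P}_{\text{text}} = \frac{\bar{X}_{\text{text}}}{k} = \frac{6.25}{10} \approx 0.63, \qquad
\bar{P}_{\text{multimedia}} = \frac{\bar{X}_{\text{multimedia}}}{k} = \frac{5.08}{10} \approx 0.51,

where \bar{X} denotes the mean test score and k = 10 is the number of items.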
3. Benefits and challenges of ubiquitous
testing using tablets
Participants generally showed some positive responses
to ubiquitous testing. Most mentioned they were a little
anxious about this new type of assessment in the
beginning, but they quickly became comfortable because
they were used to mobile devices. The participants felt
using tablets was more convenient than pencil-and-
paper tests because the display quality is much better on
tablets than on paper and they did not have to mark
answers on an optical mark recognition sheet, on which
they sometimes made errors. Additionally, some partici-
pants pointed out that the feature of the tablet system
showing the elapsed time was helpful.
Moreover, participants pointed out they would likely
make fewer mistakes in ubiquitous testing than in tradi-
tional tests. One student stated, "I have more chances to
make mistakes in taking a test when a lot of items are
presented. In this type of (ubiquitous) testing, only one
item is shown on each page, so that helps me concentrate
on the question one by one, and that helps me make
fewer mistakes." Other students also agreed with the
statement that they could pay better attention to each
item on the test in ubiquitous testing and felt that would
make them less likely to make mistakes. Additionally,
some even found it interesting to view video clips in the
items because they felt like they were watching what was
really happening in the clinical settings. Yet, some
participants mentioned that they felt more time pressure
in ubiquitous testing because they needed time to view
video clips in the items.
Most of the participants pointed out that ubiquitous
testing will be beneficial in enhancing their clinical
competencies. One student stated, "Physical exams are
fundamental to clinical competencies, and we can't learn
physical exams merely by seeing some images. So, if we
could learn how physical exams are performed by
watching some clinical videos and if we were tested that
way, we could learn better about physical exams and that
could enhance our clinical competencies. That way we
could be assessed on our clinical competencies more
effectively."
4. Impact of ubiquitous testing on student
perceptions of and engagement in learning
The participants reflected on how the teaching and
learning in preclinical years prepared them for ubi-
quitous testing. Most of the participants felt that what
they had learned from lectures in the preclinical years
was insufficient for them to perform well in ubiquitous
testing. One student noted, "I lost confidence (while
taking the ubiquitous testing) feeling that I knew only
superficially. There was a video clip of a physical exam
in one of the items. I knew what that exam was, but I
did not know its procedures well enough, so that was a
difficult question for me." Another student added, "Most
of the images presented in the tests that I had taken
(before the ubiquitous testing) were familiar ones
because they were similar to those in textbooks. But,
those in multimedia items were not something that I had
seen in textbooks or something that I saw during
clinical rotations, so these were difficult for me."
Most of the participants mentioned that what they
learned from clinical encounters during their clerkships
helped them more than lectures to prepare for the
ubiquitous testing. One student mentioned, "I think the
lectures that I took in Year 2 courses did not help much.
What I learned during clerkships, when the preceptors
had us do auscultations and explained clinical images
so we could interpret them, and by observing surgery in
the operating room, was more helpful."
The participants also highlighted the need for more
multimedia learning resources to help them prepare for
ubiquitous testing. One stated, "We viewed some
clinical videos during lectures when we were in Year 2. I
did not pay a lot of attention to those videos then and
skimmed over them, because they would not be in the
exams. But now, with ubiquitous testing, such video clips
will be very helpful in preparing for this kind of test."
Yet, most of the participants pointed out the lack of
available multimedia learning resources. As one student
mentioned, "I could do well on the test items about ausculta-
tions and anatomical structures that I saw in actual
cases that I encountered during clinical clerkships, but I
don't think I can do well on others. I cannot come across
every case that I need to know during clinical clerkships,
so it would be more helpful if there were more multimedia
learning resources available for us."
The participants reflected on their engagement in
clinical rotations during the 3 weeks after they
experienced ubiquitous testing. Some participants men-
tioned that such an experience did not influence their
learning behaviors because they knew multimedia items
would be introduced after they graduate from medical
school. Still, some participants mentioned that their
experience with ubiquitous testing made them think
about their learning behaviors in clinical clerkships. One
participant noted, "This (ubiquitous testing) would
have been a much more difficult test if I had taken it in
Year 1 or 2, but I could answer some of the questions
from what I learned during clinical rotations. So, I was
thinking that I will need to change my attitude toward
learning during clinical clerkships if I take this kind of
test."
In addition, some participants reported that they
engaged in learning in clinical clerkships more actively
after they experienced ubiquitous testing. One student
mentioned, "I attended clinical rotations in oto-
rhinolaryngology last week and there were a lot of phy-
sical exams done there. Those were something that could
be in multimedia items, so I tried to learn more about the
principles behind those exams. So, I asked the preceptors
questions more often about why the physical exam is
performed and how it is done and I tried to observe the
anatomical structures more closely when I was attending
surgery in the operating room. Now, I try to learn things
more precisely and do not want to skip things that are
not clear to me from what I see during clinical
rotations.”
Discussion
Our study found that the mean test score of the
multimedia group was lower than that of the text group,
although there were no significant differences in item
difficulty or discrimination between the text and multi-
media items. These findings are similar to those found
from studies of computer-based testing [2,3]. Further-
more, we found that the item difficulty index was generally
higher for the text items, but it was higher for the
multimedia format in three items, which pertained to the
sonographic structure of the heart, muscle examination, and
patient encounter. These findings are consistent with
those from previous studies that multimedia items are
more difficult than text items when text descriptions
using textbook terminologies are replaced by multimedia
presentations, such as in auscultation findings, and that
multimedia items are easier when multimedia presen-
tations provide richer information on the conditions of
the patient than in text descriptions, such as in a video
clip portraying hip flexion [2,3].
The total response time was longer in the multimedia
group, which is consistent with findings from previous
studies [2,3]. We speculated that the students needed to
devote more time and effort to interpreting information
from multimedia presentations due to their unfamiliarity
with such presentations. This experience may have
motivated the students to reflect upon their learning
behaviors and prompted them to think about how they
should change their learning behaviors to adapt to this
new type of assessment.
Students generally showed positive responses to ubi-
quitous testing. They quickly became comfortable with
this new technology, some even found it interesting to
take a test involving clinical videos, and some felt their
clinical competence could be assessed better in this
testing format. These findings are consistent with those
of Roh et al. [4] who found that students showed positive
reactions to their experience with ubiquitous testing.
However, our study revealed student perceptions that
traditional teaching and learning methods have not
adequately prepared them for this type of assessment.
This highlights the need for medical schools to support
student learning by providing more multimedia re-
sources.
Our study confirms our speculation that ubiquitous
testing can positively affect student learning by re-
inforcing the importance of being able to understand and
apply knowledge in clinical contexts, which drives
students to engage more actively in learning in clinical
settings. Possibly, the method of assessment rather than
the delivery medium influenced the students' perceptions
and attitudes toward learning. Whether it is computer-based
testing or ubiquitous testing, multimedia presentations in
test items make assessment more authentic, which
stimulates student awareness of the importance of being
able to understand and apply knowledge in clinical
contexts. This would help move students up the ladder in
their learning and performance from "knowing" to
"knowing how" in Miller's pyramid [12].
Three study limitations can be discerned. First, this
study was performed with a small number of students in
one medical school who had had no experience with
computer-based testing and little clinical exposure dur-
ing preclinical years. The extent to which students are
experienced in computer-based testing and exposed to
clinical settings from early on in their medical studies
may differ across medical schools. Therefore, readers
should take such differences into consideration when
generalizing our findings to their own contexts. Second,
participants in our study generally felt comfortable with
ubiquitous testing because they were already familiar
with ubiquitous technologies. Hence, our findings may
not be generalized to students who are new to mobile
technology or in locations with limited access to Wi-Fi or
mobile networks for displaying multimedia content.
Third, the item analyses were conducted with a small
sample of students using a small number of items. We
did not attempt to ensure the statistical power in our
analysis of the quantitative data as it was a pilot study
conducted for the subsequent qualitative study of the
students' experiences with ubiquitous testing. Future
research is warranted through larger scale studies to
ensure statistical power and enhance the generalizability
of our findings.
Although our study indicates students' positive re-
sponses to ubiquitous testing and its potential to make a
positive impact on their perceptions of learning, our
study did not investigate whether such an impact actually
leads to better clinical performance, which was beyond
the scope of our study. Therefore, future research is
recommended that investigates whether the changes in
student perceptions of and engagement in learning
driven by the adoption of ubiquitous testing lead to
better learning outcomes.
Acknowledgements:
None.
Funding:
None.
Conflicts of interest:
None.
References
1. Peterson MW, Gordon J, Elliott S, Kreiter C. Computer-
based testing: initial report of extensive use in a medical
school curriculum. Teach Learn Med 2004; 16: 51-59.
2. Shen L, Li F, Wattleworth R, Filipetto F. The promise
and challenge of including multimedia items in medical
licensure examinations: some insights from an empirical
trial. Acad Med 2010; 85(10 Suppl): S56-S59.
3. Holtzman KZ, Swanson DB, Ouyang W, Hussie K,
Allbee K. Use of multimedia on the Step 1 and Step 2
clinical knowledge components of USMLE: a controlled
trial of the impact on item characteristics. Acad Med
2009; 84(10 Suppl): S90-S93.
4. Roh H, Lee JT, Rhee BD. Ubiquitous-based testing in
medical education. Med Teach 2015; 37: 302-303.
5. Wood T. Assessment not only drives learning, it may also
help learning. Med Educ 2009; 43: 5-6.
6. Larsen DP, Butler AC, Roediger HL 3rd. Test-enhanced
learning in medical education. Med Educ 2008; 42:
959-966.
7. Huh S. Can computerized tests be introduced to the
Korean medical licensing examination? J Korean Med
Assoc 2012; 55: 124-130.
8. Downing SM. Validity: on meaningful interpretation of
assessment data. Med Educ 2003; 37: 830-837.
9. Stalmeijer RE, McNaughton N, van Mook WN. Using
focus groups in medical education research: AMEE Guide
No. 91. Med Teach 2014; 36: 923-939.
10. Barbour RS. Making sense of focus groups. Med Educ
2005; 39: 742-750.
11. Pekrun R. The control-value theory of achievement
emotions: assumptions, corollaries, and implications for
educational research and practice. Educ Psychol Rev
2006; 18: 315-341.
12. Miller GE. The assessment of clinical skills/competence/
performance. Acad Med 1990; 65(9 Suppl): S63-S67.
Appendix 1. Focus Group Interview Questions (Only with Multimedia Group Participants)
1. Do you think multimedia items were easier or more difficult than text items?
2. Which do you think can better assess your clinical competence: text or multimedia items?
3. What type of multimedia items were the most difficult for you?
4. In which format do you think you can do better in the test: text or multimedia items?
5. Did you become more or less confident about the test after you took ubiquitous testing?
6. Do you think you can make more mistakes when you take ubiquitous testing than in traditional tests?
7. Did you get more nervous when you took the ubiquitous testing?
8. Did you feel more interested when you took the ubiquitous testing?
9. How did you feel about taking the test using tablets? What were the advantages and challenges?
10. Do you feel the courses that you had taken in the medical school prepared you well for ubiquitous testing? If not, what were those
courses, and what courses helped you the most?
11. Do you think you would prepare for ubiquitous testing the same way as you did for traditional tests? What would you do differently?
12. Do you think the experience of ubiquitous testing has affected your learning behavior in clinical clerkships?