Laptop multitasking hinders classroom learning for both users and nearby peers
Faria Sana a, Tina Weston b,c, Nicholas J. Cepeda b,c,*
a McMaster University, Department of Psychology, Neuroscience, & Behaviour, 1280 Main Street West, Hamilton, ON L8S 4K1, Canada
b York University, Department of Psychology, 4700 Keele Street, Toronto, ON M3J 1P3, Canada
c York University, LaMarsh Centre for Child and Youth Research, 4700 Keele Street, Toronto, ON M3J 1P3, Canada
article info
Article history:
Received 11 September 2012
Received in revised form 5 October 2012
Accepted 12 October 2012
Keywords:
Laptops
Multitasking
Attentional control
Pedagogy
abstract
Laptops are commonplace in university classrooms. In light of cognitive psychology theory on costs
associated with multitasking, we examined the effects of in-class laptop use on student learning in
a simulated classroom. We found that participants who multitasked on a laptop during a lecture scored
lower on a test compared to those who did not multitask, and participants who were in direct view of
a multitasking peer scored lower on a test compared to those who were not. The results demonstrate that
multitasking on a laptop poses a significant distraction to both users and fellow students and can be
detrimental to comprehension of lecture content.
© 2012 Elsevier Ltd. All rights reserved.
1. Introduction
Multitasking is ingrained in our daily lives. As you read this article, you may also be attending to a text message, sipping coffee, or writing
out a list of to-dos. Such a lifestyle is intended to increase efficiency; however, there are limitations to how well multiple tasks can be carried
out concurrently (Posner, 1982). Multitasking places considerable demands on cognitive resources, which, in turn, degrades overall
performance, as well as performance on each task in isolation (Broadbent, 1958). The issue of multitasking and its consequences has become
a growing concern in education, as students are more commonly found engaged with their laptops or smartphones during class time. The
current study investigated the effect of laptop multitasking on both users and nearby peers in a classroom setting.
There is a host of theoretical and experimental research on divided attention and dual-task interference, terms that we consider
homologous to multitasking and therefore relevant to the current discussion. Research suggests that we have limited resources available to
attend to, process, encode, and store information for later retrieval (Posner, 1982). When focused on a single primary task, our attentional
resources are well directed and uninterrupted, and information is adequately processed, encoded, and stored (Naveh-Benjamin, Craik,
Perretta, & Tonev, 2000). When we add a secondary task, attention must be divided, and processing of incoming information becomes
fragmented. As a result, encoding is disrupted, and this reduces the quantity and quality of information that is stored (Pashler, 1994). When
we eventually retrieve information that was processed without interruptions, as a primary task, we are likely to experience minimal errors.
When we retrieve information that was processed via multitasking or with significant interruptions from a secondary task, we are more
likely to experience some form of performance decrement (Wickens & Hollands, 2000).
Indeed, managing two or more tasks at one time requires a great deal of attention. Attentional resources are not infinite (Konig, Buhner, &
Murling, 2005; Pashler, 1994). When the level of available attentional resources is less than what is required to complete two simultaneous
tasks, performance decrements are experienced since both tasks are competing for the same limited resources. This is especially true if both
tasks are competing for resources within the same sensory modality (Navon & Gopher, 1979; Wickens, 2002; Wickens & Hollands, 2000).
Limits to attentional resources mean that the quality (accuracy) and efficiency (reaction time) at which multiple tasks are processed will be
compromised (Rubinstein, Meyer, & Evans, 2001). Numerous experimental studies have shown performance decrements under conditions
of multitasking or divided attention (e.g., Broadbent, 1958; Tulving & Thomson, 1973).
* Corresponding author. York University, LaMarsh Centre for Child and Youth Research, 4700 Keele Street, Toronto, ON M3J 1P3, Canada. Tel.: +1 416 736 2100 x33266; fax: 1 416 736 5814.
E-mail addresses: sanaf@mcmaster.ca (F. Sana), westont@yorku.ca (T. Weston), ncepeda@yorku.ca (N.J. Cepeda).
Computers & Education 62 (2013) 24–31. http://dx.doi.org/10.1016/j.compedu.2012.10.003
Theoretical and empirical findings on multitasking are especially significant when considered in the context of student learning. In
classroom environments, students tend to switch back and forth between academic and non-academic tasks (Fried, 2008). This behavior
poses concerns for learning. The presumed primary tasks in many university classes are to listen to a lecture, consolidate information spoken
by the instructor and presented on information slides, take notes, and ask or respond to questions. On their own, these activities require
effort. If a secondary task is introduced, particularly one that is irrelevant to the learning context, attention must shift back and forth
between primary and secondary tasks, thereby taxing attentional resources. This multitasking can result in weaker encoding of primary
information into long-term memory (Bailey & Konstan, 2006; Ophira, Nass, & Wagner, 2009).
The personal computer provides a compelling source of classroom distraction and has become commonplace on university campuses.
Survey data estimates that 99% of incoming freshmen own a laptop (University of Virginia, 2009) and about 65% of students bring their laptop
to class (Fried, 2008). Research on educational laptop use addresses both the pros and cons of using this technology in the classroom. On the
one hand, laptops have been shown to assist learning through active approaches to teaching (Finn & Inman, 2004) and promotion of academic
success (Lindorth & Bergquist, 2010; Weaver & Nilson, 2005). When used for academic purposes such as taking notes and using software
programs (Driver, 2002), accessing supplemental resources and web-based activities (Debevec, Shih, & Kashyap, 2006), and viewing Power-
Point slides (McVay, Snyder, & Graetz, 2005), in-class laptop use can increase satisfaction, motivation, and engagement among students (Fried,
2008; Hyden, 2005; Weaver & Nilson, 2005). On the other hand, studies suggest that students who use laptops in class report low satisfaction
with their education, are more likely to multitask in class, and are more distracted (Wurst, Smarkola, & Gaffney, 2008). Student self-reports and
classroom observations suggest that laptops are being used for non-academic purposes, such as instant messaging and playing games (Barak,
Lipson, & Lerman, 2006; Driver, 2002), checking email and watching movies (Finn & Inman, 2004), and browsing the Internet (Bugeja, 2007).
Access to online entertainment makes it increasingly difficult for instructors to be "more interesting than YouTube" (Associated Press, 2010, p. 10), especially if students aren't intrinsically motivated by the subject materials. Moreover, time spent multitasking with these activities is significant; data from one study estimates that students multitask for approximately 42% of class time (Kraushaar & Novak, 2010).
Importantly, distractions from in-class multitasking correlate with decrements in learning. Students who multitask on laptops during
class time have impaired comprehension of course material and poorer overall course performance (Barak et al., 2006; Hembrooke & Gay,
2003; Kraushaar & Novak, 2010). In a recent study, Wood et al. (2012) measured the detriments of technology-based multitasking in
a classroom setting. This study is one of few in the field that employed an experimental design (much of the literature is self-report). Students
were assigned either to a single multitasking condition (using Facebook, MSN, email, or cell phone texting), a control group (paper and pencil
notes-only or word processing notes-only), or a free-use-of-technology condition (where participants could choose to multitask or not
multitask on their laptop as much as they wished). Over the course of three class lectures, participants' comprehension of the material was
assessed via quizzes. In general, paper and pencil control participants outperformed multitasking participants on the quiz assessments
(particularly MSN and Facebook users). However, as the authors admit, there were limitations to the methodology of this study. Most
noteworthy was that 43% of participants self-reported that they did not adhere to their assigned instructions across all three lectures. For
example, a participant assigned to the Facebook multitasking condition may have multitasked on Facebook and on MSN (i.e., two forms of
multitasking when they were instructed only to use one form), or chosen not to multitask on Facebook at all. Therefore, the experimental
manipulation was not successful, calling into question the validity of the quiz data. Although the authors corrected for this limitation in post-
hoc analyses, the results should be interpreted with caution as they are reliant on self-report and the sample size of each group was
significantly reduced. Wood et al.'s findings are relevant but restricted in terms of the pedagogical recommendations that can be offered. One goal of the present study was to replicate the findings of Wood et al. using a more controlled design and more stringent fidelity measures.
Disrupting one's own learning is an individual choice; harming the learning of other students in the class is disrespectful. Laptop
distractions due to movement of images and laptop screen lighting (Melerdiercks, 2005) and multitasking activities (Crook & Barrowcliff,
2001) may cause involuntary shifts of attention among students in close proximity to laptop users (Barak et al., 2006; Chun & Wolfe,
2001; Finn & Inman, 2004). These studies suggest that students are annoyed and distracted by laptop use. However, to our knowledge,
no studies have directly measured the effects of distraction caused by laptop users on surrounding peers' learning. Therefore, a second goal
of the present study was to examine the indirect effects of laptop multitasking on student learning.
2. Experiment 1
In Experiment 1, we investigated whether multitasking on a laptop would hinder learning as measured by performance on a compre-
hension test. All participants were asked to attend to a university-style lecture and take notes using their laptops as a primary task. Half the
participants, by random assignment, received additional instructions to complete a series of non-lecture-related online tasks at any
convenient point during the lecture. These tasks were considered secondary and were meant to mimic typical student web browsing during
class in terms of both quality and quantity. We hypothesized that participants who multitasked while attending to the lecture would have
lower comprehension scores compared to participants who did not multitask.
2.1. Method
2.1.1. Participants
Forty-four undergraduate students from a large comprehensive university in a large Canadian city participated in the study (25 females;
M age = 18.9 years, SD = 2.0). All participants were enrolled in an Introductory Psychology course and received course credit for participating in the experiment. Participants represented a variety of undergraduate disciplines (i.e., not only psychology). They were recruited using an online portal designed for psychology research, which explained that the study involved listening to a class lecture and filling out a few questionnaires. Only students who could bring a personal laptop to the experiment were invited to participate. Forty participants were included in the final data analysis, which included two experimental conditions: multitasking (n = 20) and no multitasking (n = 20). Of the
four participants removed from the analysis, two had previous knowledge of the lecture content (as measured by a screening questionnaire),
one performed below chance on the comprehension test, and one failed to follow instructions. The former two participants were removed
from the no multitasking condition, and the latter two participants were removed from the multitasking condition.
2.1.2. Materials
A 45-min PowerPoint lecture on meteorology was created by one of the experimenters (TW) in conjunction with her peer colleagues and
a faculty member with many years of teaching experience. The lecture was based on topics taken from an introductory meteorology
textbook (Ahrens, 1999; e.g., discriminating cloud types, pressure systems, thunderstorm development). A second faculty member with
expertise in Geology and Earth Sciences reviewed the lecture and approved the accuracy of its content, level of difficulty, and consistency
with materials presented in related undergraduate classes (S. Carey, personal communication, August 16, 2012). The same experimenter
(TW), an upper-level graduate student with lecturing experience, acted as the professor and presented the lecture live to the class. At the
time of data collection, TW had practiced and given this lecture over a dozen times for an independent study. She followed a memorized
script.
For the multitasking condition, a set of 12 online tasks was created. An example of a task was "What is on Channel 3 tonight at 10 pm?"
These online tasks were meant to mimic typical student browsing during class in terms of both quality (i.e., visiting websites of interest to
a young adult sample, such as Google, YouTube, and Facebook) and quantity [~40% of class time, as suggested by Kraushaar and Novak (2010)]. A pilot study (n = 5) confirmed that completion of the tasks was not overwhelming; tasks could be completed in ~15 min (or
33% of the lecture time). Although we do not know if the participants of our study multitask more or less than 33% during lectures in a real-
world setting, the online tasks were within the critical range of previously reported time spent on multitasking in real-world classrooms
(~40%), and therefore unlikely to artificially increase the costs of multitasking.
The primary measure of learning was a four-option multiple-choice comprehension test with 20 questions evaluating simple knowledge
(i.e., basic retention of facts from the lecture) and 20 questions evaluating application of knowledge (i.e., applying a concept from the lecture
to solve a novel problem). Question type was included as a variable to examine whether multitasking outcomes might differ depending on the difficulty level of the material being tested. There is some evidence to suggest that multitasking may be particularly detrimental to complex
knowledge (e.g., Foerde, Knowlton, & Poldrack, 2006). The ordering of the questions on the test (simple vs. complex) was intermixed.
Participants also completed a brief questionnaire that collected demographic data (e.g., age, gender, and fluency in English) and screened
participants for prior familiarity with the lecture content and general interest in the lecture presentation. Additionally, there were two
questions, both listed on a 7-point Likert scale, directed toward participants in the multitasking condition: (1) "To what extent do you think the act of multitasking hindered your learning of the lecture material?" (1 = did not hinder my learning; 4 = somewhat hindered my learning; 7 = definitely hindered my learning) and (2) "To what extent do you think your multitasking hindered the learning experience of other students?" (1 = did not hinder others' learning; 4 = somewhat hindered others' learning; 7 = definitely hindered others' learning).
Responses to these questions allowed us to measure subjective student views on multitasking outcomes.
2.1.3. Design and procedure
All participants were asked to bring their personal laptops to the experiment. They received an instruction sheet and a consent form. The
instruction sheet asked participants to attend to the lecture and use their personal laptop to take notes on the information being presented,
just as they might normally do in class. In addition to taking lecture notes, half of the participants (randomly selected) were instructed to
complete the 12 online tasks at some point during the lecture.
The experiment was conducted in a classroom with four rows of tables, each with five chairs. Therefore, since there was a maximum of 20
seats, we repeated the experiment three times to obtain a total sample of 44 participants [each repeat included roughly the same number of
total participants (range: 14–15), and an equal divide of participants within the two experimental conditions]. Participants faced a projector
screen at the front of the classroom. Instruction sheets were randomly placed at each seat. Thus, seat location of participants in each
condition was fully random. Participants were randomly presented with a seat number as they entered the classroom and were asked to
settle in and read the instruction sheet and consent form at their assigned seat. While all participants were instructed to take notes on their
laptops during the lecture, some were also required to complete the series of online tasks. An experimenter (FS) remained at the back of the
classroom during the lecture presentation and used a seating map to track participants' seat location, monitor participants' screen activities,
and ensure that all instructions were being followed. At the end of the lecture, participants were asked to email their notes and (if
applicable) their responses to the online tasks to the experimenters and, nally, to put away their laptops. The comprehension test
immediately followed and a 30 min time limit was enforced (as time limits are realistic of typical university examinations). All participants
completed the test within the time limit. Once the experimenters collected all the tests, participants responded to the questionnaire, were
debriefed and dismissed.
2.1.4. Fidelity measures
FS closely monitored participants' activities throughout the lecture presentation, observing each participant's activities at least once every 3–4 min interval. This was done to ensure that all participants adhered to their assigned instructions. If a participant was not on task (e.g., a non-multitasker engaging in multitasking activities, a multitasker browsing on a website irrelevant to the online tasks, or a multitasker completely ignoring the online tasks), they were probed once and reminded of their specific instructions. If a participant was probed more than 2 times, their data were discarded from the final analysis (n = 1).
Participants' notes and online task answers were analyzed for completion and quality. In terms of the online tasks, all multitaskers attempted to answer at least some of the tasks. On average, multitaskers successfully completed 9 out of the 12 tasks (M = 0.75; SD = 0.25). In terms of participants' notes, all participants took some form of notes on the lecture content. Notes were scored for quality (1–5) by the experimenter most familiar with the material (TW). She was blind to participants' condition while scoring. A score of 1 meant the participant attempted to
copy the lecture slides verbatim, but the notes were disorganized and/or missing information. A score of 3 meant the participant copied the
lecture slides verbatim, but did not include additional information presented verbally by the lecturer. A score of 5 meant the participant copied
all slide information and included all information presented verbally by the lecturer. Analysis of the quality scores revealed that multitaskers' notes (M = 2.7, SD = 1.2) were of a poorer quality than non-multitaskers' notes (M = 4.1, SD = 1.0), t(34) = 3.6, p = .001, ω² = .23.
Therefore, our fidelity measures (i.e., participant monitoring, discarded data, analysis of notes and online task files) clearly show that participants stayed on task throughout the experiment. It is evident that multitasking played a role in impairing participants' note-taking ability.
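The ω² (omega-squared) values reported here and in the Results sections are effect-size estimates. The paper does not state which estimator was used, so the expressions below are standard textbook approximations rather than a reconstruction of the authors' exact computation; plugging in t(34) = 3.6 gives roughly .25, in the neighborhood of the reported .23, with the gap attributable to rounding of the published t value.

```latex
% Common omega-squared estimators (assumed forms; the paper does not specify its formula).
% From an independent-samples t test with df error degrees of freedom:
\[
\hat{\omega}^2 \approx \frac{t^2 - 1}{t^2 + df + 1}
\]
% From an F test with df_effect numerator and df_error denominator degrees of freedom:
\[
\hat{\omega}^2 \approx \frac{df_{\mathrm{effect}}\,(F - 1)}{df_{\mathrm{effect}}\,F + df_{\mathrm{error}} + 1}
\]
```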
2.1.5. Results and discussion
There were no demographic differences between participants of the two conditions in terms of age, gender, fluency in English, or high school GPA. To examine potential differences between conditions on the comprehension test, a 2 (condition: multitasking, no multitasking) × 2 (question type: simple, complex) mixed factorial ANOVA was conducted with condition as a between-subjects factor and question type as a within-subjects factor. The main effect of condition was significant, F(1,38) = 10.2, p = .003, ω² = .20. Participants who multitasked during the lecture (M = 0.55, SD = 0.11, n = 20) scored significantly lower than participants who did not multitask (M = 0.66, SD = 0.12, n = 20). The main effect of question type was also significant, F(1,38) = 17.7, p < .001, ω² = .30. Participants scored higher on simple factual questions (M = 0.60, SD = 0.13, n = 20) than on complex apply-your-knowledge questions (M = 0.56, SD = 0.13, n = 20). This main effect simply reflects the difficulty of the questions created. The interaction was not significant, F(1,38) = 0.79, p = .380. These findings
demonstrate a strong, detrimental effect of multitasking on comprehension scores. Overall, participants who multitasked scored 11% lower
on a post-lecture comprehension test (Fig. 1).
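As a concrete illustration of the analysis just described, the sketch below runs a 2 (condition) × 2 (question type) mixed factorial ANOVA in Python with the pingouin library (assumed to be available). The data frame, column names, and simulated scores are hypothetical placeholders chosen only to mirror the design; this is not the authors' data or code.

```python
# Hedged sketch of the Experiment 1 analysis: a 2 x 2 mixed factorial ANOVA
# (condition between-subjects, question type within-subjects). Data are simulated.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
rows = []
for pid in range(40):                      # 20 multitaskers + 20 non-multitaskers
    condition = "multitasking" if pid < 20 else "no multitasking"
    base = 0.55 if condition == "multitasking" else 0.66   # reported condition means
    for qtype, offset in [("simple", 0.02), ("complex", -0.02)]:
        rows.append({
            "participant": pid,
            "condition": condition,
            "question_type": qtype,
            # proportion correct on the 20 questions of this type (simulated)
            "score": float(np.clip(base + offset + rng.normal(0, 0.10), 0, 1)),
        })
df = pd.DataFrame(rows)

# Mixed factorial ANOVA: condition between subjects, question type within subjects.
aov = pg.mixed_anova(data=df, dv="score", within="question_type",
                     between="condition", subject="participant")
print(aov[["Source", "DF1", "DF2", "F", "p-unc"]])
```

pingouin reports partial eta-squared by default; converting to the ω² metric used in this article would require an extra step along the lines of the approximations noted at the end of Section 2.1.4.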
3. Experiment 2
In Experiment 2, we investigated whether being in direct view of a multitasking peer would negatively influence learning as measured
by performance on a comprehension test. A new group of participants was asked to take notes using paper and pencil while attending to the
lecture. Some participants were strategically seated throughout the classroom so that they were in view of multitasking confederates on
laptops, while others had a distraction-free view of the lecture. Confederates mimicked multitaskers from Experiment 1 by typing notes on
the lecture and performing other concurrent, irrelevant online tasks. We hypothesized that participants who were seated in view of
multitasking peers would have lower comprehension scores compared to participants who had minimal or no visual distraction from
multitasking peers.
3.1. Method
3.1.1. Participants
Thirty-nine undergraduate students from the same university participated in the study (26 females; M age = 20.3 years, SD = 4.2). None had participated in Experiment 1. Recruitment procedures and participant incentives were the same as in Experiment 1. Thirty-eight participants were included in the final data analysis, which included two experimental conditions: in view of multitasking peers (n = 19) and not in view of multitasking peers (n = 19). The one participant excluded from the analysis was removed because of prior familiarity with the
lecture content (as measured by a screening questionnaire). Thirty-six undergraduate students were recruited to be confederates.
3.1.2. Materials
The same materials were used as in Experiment 1 with the exception of two questions on the questionnaire. Instead of asking about
whether or not multitasking hindered self and peer learning, the two questions in this experiment were directed toward participants in
view of technology: (1) "To what extent were you distracted by other students' laptop use around you?" (1 = not distracted at all; 4 = somewhat distracted; 7 = very distracted) and (2) "To what extent do you think being in view of other students' laptop use hindered your learning of the lecture material?" (1 = did not hinder my learning; 4 = somewhat hindered my learning; 7 = definitely hindered my
learning). Responses to these questions provided us with subjective student measures on whether or not their multitasking peers were
a distraction, and whether they perceived this distraction to be a barrier to learning.
3.1.3. Design and procedure
As in Experiment 1, participants were asked to bring their personal laptops to the experiment; however, only those assigned as
confederates actually used their laptops (n = 36). The experimental participants (n = 38) were instructed to keep their laptops in their
knapsacks.
Fig. 1. Proportion correct on the comprehension test as a function of condition (multitasking vs. no multitasking). Multitasking lowered test performance by 11%, p < .01. Error bars
represent standard error of the mean.
All participants received an instruction sheet and a consent form. The confederates' instruction sheet explained that they were confederates, and they were required to use their laptops to flip between browsing the Internet (e.g., email, Facebook) and pretending to take notes on the lecture content as the lecture was presented. In fact, they were told they were not required to pay attention to the lecture. The participants' instruction sheet asked them to keep their laptops stored, and to use the paper and pencil provided by the experimenters
to take written notes on the lecture content, just as they might normally do in class.
The room set-up was the same as in Experiment 1. Again, since there was a maximum of 20 seats, we repeated the experiment four times
to obtain a total sample of 39 experimental participants and 36 confederates [each repeat included roughly the same number of total
participants (range: 18–20), with more confederates in the classroom than participants (roughly a 2:1 ratio)]. Instruction sheets and consent
forms were strategically placed at each seat. Participants were randomly presented with a seat number as they entered the classroom. Some
participants were seated so that they were behind two multitasking confederates (i.e., they were in view of one laptop user in their left visual field and another laptop user in their right visual field; Fig. 2). These participants were considered in view of multitasking peers. Other
participants were seated behind participants who, like themselves, were asked to take written notes on the lecture. These participants were
considered not in view of multitasking peers (Fig. 2).
The lecture was presented as in Experiment 1. While the lecture was being presented, an experimenter (FS) remained at the back of the
classroom and used a seating map to track participants' and confederates' seat locations, monitor laptop screen activities and note-taking, and ensure that all instructions were being followed. When the lecture ended, written notes were collected from the participants, and confederates were asked to finish up their work and store their laptops. At this point, confederates were asked to leave the room. They were
debriefed by one of the experimenters and dismissed. The remaining participants were given the comprehension test and a 30 min time
limit was enforced, with all participants successfully completing the test within the time limit. To maintain motivation levels, participants
were told a cover story that those students who had left the classroom were going to return one day later, at which time they would
complete a delayed comprehension test. Once the experimenters collected all the tests, participants responded to the questionnaire, were
debriefed and dismissed.
3.1.4. Fidelity measures
FS closely monitored participants' activities throughout the lecture presentation, as in Experiment 1. This was done to ensure that all
participants adhered to their assigned instructions. If a participant or confederate was not on task (e.g., a confederate not using their laptop, or a participant not taking any notes), they were probed and reminded of their specific instructions. All confederates and participants complied with their instructions.
Fig. 2. Visual representation of participants who were and were not in view of a multitasking peer. "In view" participants were strategically seated behind two confederates, with one confederate's laptop screen ~45° to the participant's right and the other's ~45° to the participant's left. "Not in view" participants were seated similarly behind two experimental subjects who took handwritten notes.
Participants' notes were scored for quality using the same scale reported in Experiment 1. All participants took some form of notes on the lecture content. The notes of participants not in view of multitasking peers (M = 3.6, SD = 1.3) were similar in quality to the notes of participants in view of multitasking peers (M = 3.7, SD = 1.2), t < 1.
Therefore, our fidelity measures (i.e., participant monitoring, analysis of notes) clearly show that participants stayed on-task throughout
the experiment and took comprehensive notes. Being in view of multitasking peers did not reduce note quality.
3.1.5. Results and discussion
There were no demographic differences between participants of the two conditions in terms of age, gender, fluency in English, or high school GPA. To examine potential differences between conditions on the comprehension test, a 2 (condition: in view of multitasking, not in view of multitasking) × 2 (question type: simple, complex) mixed factorial ANOVA was conducted with condition as a between-subjects factor and question type as a within-subjects factor. The main effect of condition was significant, F(1,36) = 21.5, p < .001, ω² = .36. Participants in view of multitasking peers scored significantly lower on the test (M = 0.56, SD = 0.12, n = 19) than participants not in view of multitasking peers (M = 0.73, SD = 0.12, n = 19). The main effect of question type was also significant, F(1,36) = 11.3, p = .002, ω² = .21. Participants scored higher on simple questions (M = 0.69, SD = 0.14, n = 20) than on complex questions (M = 0.60, SD = 0.15, n = 20). The interaction was not significant, F(1,36) = 0.91, p = .347. These findings suggest that peer multitasking distracted participants who were
attempting to pay sole attention to the lecture. Those in view of a multitasking peer scored 17% lower on a post-lecture comprehension test
(Fig. 3).
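As a rough sanity check on the reported effect sizes, the published F values and error degrees of freedom can be plugged into the single-factor ω² approximation given earlier. The helper below is a back-of-the-envelope illustration, not the authors' computation; it ignores the mixed-design error structure, so small discrepancies from the published values (which also reflect rounded F statistics) are expected.

```python
# Approximate omega-squared from a reported F with one numerator degree of freedom.
# Illustrative only; assumes a simple between-subjects error term.
def omega_squared(F: float, df_error: int) -> float:
    return (F - 1.0) / (F + df_error + 1.0)

print(round(omega_squared(10.2, 38), 2))  # Experiment 1 condition effect: ~0.19 (reported .20)
print(round(omega_squared(21.5, 36), 2))  # Experiment 2 condition effect: ~0.35 (reported .36)
```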
4. General discussion
Our experiments replicate one important finding and introduce a new finding. First, participants' comprehension was impaired when
they performed multiple tasks during learning, one being the primary task of attending to the lecture material and taking notes, and the
other being the secondary task of completing unrelated online tasks. This result is not surprising and is consistent with those reported by
other studies (e.g., Barak et al., 2006; Hembrooke & Gay, 2003; Kraushaar & Novak, 2010; Wood et al., 2012), but we confirm it using a more controlled procedure. Second, comprehension was impaired for participants who were seated in view of peers engaged in multitasking. This finding suggests that despite actively trying to learn the material (as evidenced by comprehensive notes, similar in quality to those with
a clear view of the lecture), these participants were placed at a disadvantage by the choices of their peers.
Our experiments were applied in nature and, as a result, do not make major contributions to multitasking or attention theory. However,
the results are consistent with theory, namely that the degree of attention that is allotted to a task is directly related to the quality and
quantity of information processed. Although we did not directly measure attention, all participants were actively listening to and taking
notes on the information presented. We had strong fidelity measures that allowed us to be more certain (compared to previous studies) that
factors unrelated to our manipulation were not adding other sources of variance to the data. Thus, we can speculate that attention was
impaired due to our manipulation, either in the form of self-multitasking or being in view of a multitasking peer.
In Experiment 1, participants were listening to the lecture, taking notes, and completing the online tasks. This exercise of carrying out
multiple tasks at the same time seemed to have impeded retrieval of information at the test, likely as a result of poor encoding during
learning (as evidenced by multitaskers' poorer quality of notes) and inefficiency at allocating limited attentional resources. In Experiment 2,
participants were listening to the information being presented and taking notes while in the presence of distracting activity in their
peripheral vision. Confederates' laptop screens may have distracted participants from directing their full attention to the lecture. Partici-
pants were still able to take notes on the lecture; however, a lack of complete attentional focus may have compromised the elaboration and
processing of the information being written, thereby lowering successful retrieval attempts during the comprehension test. Future
experiments should aim to bridge the gap between cognitive principles of memory and attention and applied pedagogical research to
definitively and directly test these theoretical claims surrounding multitasking and attention. More stringent methods could provide further
experimental evidence to test these hypotheses. For example, one could use eye-tracking methodologies to determine when, where, and for how long a student's attention is diverted, and whether looking-away-from-lecture time correlates with test performance specific to the information missed by the student.
Fig. 3. Proportion correct on the comprehension test as a function of condition (view of multitasking vs. no view of multitasking). Being in view of multitasking peers lowered test performance by 17%, p < .001. Error bars represent standard error of the mean.
Results from the questionnaires add to the discussion of technology's impact on learning. Responses from Experiment 1 show that participants in the multitasking condition were aware that multitasking during the lecture would somewhat hinder their learning (M = 5.5, SD = 2.0). However, they estimated peers' learning would be barely hindered (M = 3.3, SD = 1.9). By contrast, the observed effect
size from peer distraction (Experiment 2) was nearly twice as large as the observed self-distraction effect size (Experiment 1). Multitaskers
appear to have been able to time their multitasking activities in a manner that reduced distraction to some degree. Those in view of
multitasking appear to have been lured into watching other students' laptop screens even during inopportune moments of the lecture, thus
creating worse learning for those in view of a multitasker compared to the actual person who was multitasking. These conclusions should be
interpreted with caution and deserve follow up in future studies. Our conclusions are based solely on effect size; we cannot directly compare
test performance across experiments due to methodological variation.
Questionnaire responses from Experiment 2 suggest that participants reported being somewhat distracted by nearby confederates (M = 3.3, SD = 2.1), and that being in view of a multitasking peer barely hindered their own learning (M = 2.7; SD = 1.6). Thus, overall, the questionnaire ratings suggest students are not in touch with the indirect consequences of their peers' actions.
Despite literature suggesting that multitasking may be particularly detrimental to the learning of complex knowledge (e.g., Foerde et al.,
2006), our results show that multitasking impaired both simple factual learning and complex application learning to the same degree.
Therefore, even the learning of a new fact (e.g., "Which cloud type is found highest in the atmosphere?") can be interrupted by self-
multitasking or distraction from peers who are multitasking.
Relatedly, multitasking may have different overall effects depending on the difficulty of the tasks being juggled. Some studies suggest that if a primary task is more difficult or novel, it will inherently require a greater degree of attentional resources to perform the task at
a satisfactory level (Kahneman, 1973; Posner & Boies, 1971; Styles, 2006). Therefore, the primary task may only be performed well if no other
tasks must be completed at the same time, or if any secondary task is relatively simple or automatic (i.e., if the secondary task does not
require many attentional resources; Kahneman & Treisman, 1984). This latter case was the scenario of our Experiment 1. Participants were
asked to learn something novel in a primary task (where many attentional resources were required), while simultaneously attending to
a simple secondary task (where attentional resources were still required, albeit not to the same extent). We designed the difficulty level of
the primary and secondary tasks to mimic what has been reported as typical classroom behaviors (i.e., students who switch back and forth
between attending to a classroom lecture and checking e-mail, Facebook, and IMing with friends). Our results suggest that even though the
secondary task was rather mindless for an undergraduate student (i.e., casual Internet browsing), it still had an impact on the performance of the primary task, as evidenced by multitaskers' lowered test scores. Future studies could further examine the impact of multitasking in the classroom by manipulating the level of difficulty of the primary and/or secondary tasks beyond the manipulations of the current design. According to dual-task theories (e.g., Pashler, 1994), one would expect to see greater deficits in learning performance as the difficulty level of
either primary or secondary tasks increases (e.g., a student who attends their physics lecture, but chooses to spend most of the class time
studying for a history exam taking place during the next period).
In light of the evidence reported in this study, what might we recommend to educators as a means of managing laptop use in the
classroom? A ban on laptops is extreme and unwarranted. It cannot be overlooked that laptops foster positive learning outcomes when used
appropriately (e.g., web-based research, pop quizzes, online case studies, and discussion threads; e.g., Finn & Inman, 2004). When laptops
are used strictly for note-taking purposes, typed notes have been shown to have similar positive influences on learning compared to written notes (Quade, 1996). Our results confirm this finding through a rudimentary cross-experiment comparison; that is, we saw no striking differences between participants in the no multitasking condition of Experiment 1 (participants who typed notes) and participants in the no view of technology condition of Experiment 2 (participants who wrote notes) in terms of quality of notes (M = 4.1 and M = 3.6, respectively) as well as subsequent comprehension test scores (M = 0.66 and M = 0.73, respectively, keeping in mind that non-multitaskers in Experiment 1 sometimes were in view of multitaskers, which could explain their qualitatively lower comprehension test score). Thus, for a variety
of reasons, laptops should remain a tool of the modern classroom, perhaps with some sensible constraints.
One suggestion is for teachers to discuss the consequences of laptop use with their students at the outset of a course (Gasser & Palfrey, 2009). Teachers are in a position to inform students about negative educational outcomes of laptop misuse, as well as to compare and contrast their views with the views of their students. In this discussion, the class could collectively come up with a few rules of technology etiquette that are enforced in the classroom throughout the semester (e.g., sit at the back of the classroom if you plan to multitask, so at least other students are not bothered; McCreary, 2009). In this way, the issue of technology and distraction is highlighted and students can make informed choices, rather than assuming they (and their peers) are immune to multitasking deficits.
Another suggestion is to explicitly discourage laptop use in courses where technology is not necessary for learning. One could argue that
courses where information is generally presented in textbooks and on lecture slides do not require a laptop to the same extent as courses
where hands-on learning is an integrated component of the course, likely in the form of specialized computer software. This recommen-
dation is made with caution as some students might not benefit from a course without laptops. For example, students with disabilities often
rely on computer technology to assist in learning (Fichten et al., 2001). Therefore, perhaps one could allow laptops in all courses but restrict
the use of the Internet to course-based websites only (if possible).
Ultimately, students must take accountability for their own learning; however, enthusiastic instructors can influence how students
choose to direct their attention during class time. A third suggestion is to provide educators with resources to help them create enriching,
informative, and interactive classes that can compete with the allure of non-course websites, so that students are deterred from misusing
their laptop in the first place. This could include incorporating the laptop into real-time classroom exercises. For example, instructors could ask their students to search the Internet for missing lecture information, or to find an interesting online video to share with the class. Furthermore, instructors could use a shared website where students are able to rank the difficulty level of lecture concepts, thereby allowing
the instructor to gauge student comprehension levels in class. The instructor could then review these concepts and provide feedback to
students prior to the end of the class. Indeed, inventive instructors can shape how students choose to use their laptops during class time, so
that laptop use is constructive.
In order to effectively integrate technology into classrooms, we must continue to examine the consequences, both positive and negative, of technology use on learning. While the present research examined only foundational learning from a lecture (i.e., immediate
learning), future research could examine the effects of multitasking on longer-term retention, and could investigate subject material
differences. Cognitive theories of divided attention and dual-task performance can help us understand the nature of how we learn and what
distracts us. Applied research, using randomized experimental designs, will allow us to examine ways in which on-task activities during
learning can be maximized and distraction minimized. We must ask ourselves: Under what conditions do the benefits of laptop use
outweigh the detriments? Ultimately, engaging instructors and dedicated learners will need to work hard and stay focused to keep
classroom learning at an optimal level.
Acknowledgments
This research was supported in part by a grant from the York University Faculty of Health. We thank Irina Kapler for helping to create the
lecture and comprehension test materials.
References
Ahrens, D. C. (1999). Meteorology today: An introduction to weather, climate and the environment. California: Thompson Higher Education.
Associated Press. (2010). At universities, is better learning a click away? Education Week, 29, 10.
Bailey, B. A., & Konstan, J. A. (2006). On the need for attention-aware systems: measuring effects of interruption on task performance, error rate, and affective state. Computers in Human Behavior, 22, 685–708. http://dx.doi.org/10.1016/j.chb.2005.12.009.
Barak, M., Lipson, A., & Lerman, S. (2006). Wireless laptops as means for promoting active learning in large lecture halls. Journal of Research on Technology in Education, 38, 245–263.
Broadbent, D. (1958). Perception and communication. Oxford: Pergamon.
Bugeja, M. J. (2007). Distractions in the wireless classroom. The Chronicle of Higher Education, 53, C1–C5. Retrieved from http://www.chronicle.com.
Chun, M. M., & Wolfe, J. (2001). Visual attention. In E. B. Goldstein (Ed.), Blackwell handbook of perception (pp. 272–310). Oxford: Blackwell Publishers Ltd.
Crook, C., & Barrowcliff, D. (2001). Ubiquitous computing on campus: patterns of engagement by university students. International Journal of Human–Computer Interaction, 13, 245–258. http://dx.doi.org/10.1207/S15327590IJHC1302_9.
Debevec, K., Shih, M., & Kashyap, V. (2006). Learning strategies and performance in a technology-integrated classroom. Journal of Research on Technology in Education, 38, 293–307.
Driver, M. (2002). Exploring student perceptions of group interactions and class satisfaction in the web-enhanced classroom. The Internet & Higher Education, 5, 35–45. http://dx.doi.org/10.1016/S1096-7516(01)00076-8.
Fichten, C. S., Asuncion, J., Barile, M., Généreux, C., Fossey, M., Judd, D., et al. (2001). Technology integration for students with disabilities: empirically based recommendations for faculty. Educational Research and Evaluation, 7, 185–221. http://dx.doi.org/10.1076/edre.7.2.185.3869.
Finn, S., & Inman, J. G. (2004). Digital unity and digital divide: surveying alumni to study effects of a campus laptop initiative. Journal of Research on Technology in Education, 36, 297–317.
Foerde, K., Knowlton, B. J., & Poldrack, R. A. (2006). Modulation of competing memory systems by distraction. Proceedings of the National Academy of Sciences, 103, 11778–11783. http://dx.doi.org/10.1073/pnas.0602659103.
Fried, C. B. (2008). In-class laptop use and its effects on student learning. Computers & Education, 50, 906–914. http://dx.doi.org/10.1016/j.compedu.2006.09.006.
Gasser, U., & Palfrey, J. (2009). Mastering multitasking. Educational Leadership, 66, 14–19.
Hembrooke, H., & Gay, G. (2003). The laptop and the lecture: the effects of multitasking in learning environments. Journal of Computing in Higher Education, 15, 46–64. http://dx.doi.org/10.1007/BF02940852.
Hyden, P. (2005). Teaching statistics by taking advantage of the laptop's ubiquity. New Directions for Teaching and Learning, 101, 37–42. http://dx.doi.org/10.1002/tl.184.
Kahneman, D. (1973). Attention and effort. New Jersey: Prentice-Hall.
Kahneman, D., & Treisman, A. (1984). Changing views of attention and automaticity. In R. Parasuraman, D. R. Davies, & J. Beatty (Eds.), Variants of attention (pp. 29–61). New York: Academic Press.
Konig, C. J., Buhner, M., & Murling, F. (2005). Working memory, fluid intelligence, and attention are predictors of multitasking performance, but polychronicity and extraversion are not. Human Performance, 18, 243–266. http://dx.doi.org/10.1207/s15327043hup1803_3.
Kraushaar, J. M., & Novak, D. C. (2010). Examining the effects of student multitasking with laptops during the lecture. Journal of Information Systems Education, 21, 241–251.
Lindorth, T., & Bergquist, M. (2010). Laptopers in an educational practice: promoting the personal learning situation. Computers & Education, 54, 311–320. http://dx.doi.org/10.1016/j.compedu.2009.07.014.
McCreary, J. R. (2009). The laptop-free zone. Valparaiso University Law Review, 43, 187. Retrieved from http://ssrn.com/abstract=1280929.
McVay, G. J., Snyder, K. D., & Graetz, K. A. (2005). Evolution of a laptop university: a case study. British Journal of Educational Technology, 36, 513–524. http://dx.doi.org/10.1111/j.1467-8535.2005.00487.x.
Melerdiercks, K. (2005). The dark side of the laptop university. Journal of Information Ethics, 14, 9–11. http://dx.doi.org/10.3172/JIE.14.1.9.
Naveh-Benjamin, M., Craik, F. I. M., Perretta, J. G., & Tonev, S. T. (2000). The effects of divided attention on encoding and retrieval processes: the resiliency of retrieval processes. Quarterly Journal of Experimental Psychology, 53A, 609–625. http://dx.doi.org/10.1080/713755914.
Navon, D., & Gopher, D. (1979). On the economy of the human processing systems. Psychological Review, 86, 214–255. http://dx.doi.org/10.1037/0033-295X.86.3.214.
Ophira, E., Nass, C., & Wagner, D. (2009). Cognitive control in media multitaskers. Proceedings of the National Academy of Sciences, 106, 15583–15587. http://dx.doi.org/10.1073/pnas.0903620106.
Pashler, H. (1994). Dual-task interference in simple tasks: data and theory. Psychological Bulletin, 116, 220–244. http://dx.doi.org/10.1037/0033-2909.116.2.220.
Posner, M. (1982). Cumulative development of attentional theory. American Psychologist, 37, 168–179. http://dx.doi.org/10.1037/0003-066X.37.2.168.
Posner, M. I., & Boies, S. J. (1971). Components of attention. Psychological Review, 78, 391–408. http://dx.doi.org/10.1037/h0031333.
Quade, A. M. (1996). An assessment of retention and depth of processing associated with notetaking using traditional paper and pencil and on-line notepad during computer-delivered instruction. In Proceedings of the annual national convention of the association for educational communications and technology.
Rubinstein, J. S., Meyer, D. E., & Evans, J. E. (2001). Executive control of cognitive processes in task switching. Journal of Experimental Psychology: Human Perception and Performance, 27, 763–797. http://dx.doi.org/10.1037/0096-1523.27.4.763.
Styles, E. A. (2006). The psychology of attention (2nd ed.). England: Psychology Press.
Tulving, E., & Thomson, D. M. (1973). Encoding specificity and retrieval processes in episodic memory. Psychological Review, 80, 352–373. http://dx.doi.org/10.1037/h0020071.
University of Virginia. (2009). UVa first year student computer inventory. Retrieved from http://itc.virginia.edu/students/inventory/2009/.
Weaver, B. E., & Nilson, L. B. (2005). Laptops in class: what are they good for? What can you do with them? New Directions for Teaching and Learning, 101, 3–13. http://dx.doi.org/10.1002/tl.181.
Wickens, C. D. (2002). Multiple resources and performance prediction. Theoretical Issues in Ergonomic Science, 3(2), 159–177. http://dx.doi.org/10.1080/14639220210123806.
Wickens, C. D., & Hollands, J. G. (2000). Engineering psychology and human performance (3rd ed.). New Jersey: Prentice Hall.
Wood, E., Zivcakova, L., Gentile, P., Archer, K., De Pasquale, D., & Nosko, A. (2012). Examining the impact of off-task multi-tasking with technology on real-time classroom learning. Computers & Education, 58, 365–374. http://dx.doi.org/10.1016/j.compedu.2011.08.029.
Wurst, C., Smarkola, C., & Gaffney, M. A. (2008). Ubiquitous laptop usage in higher education: effects on student achievement, student satisfaction, and constructivist measures in honors and traditional classrooms. Computers & Education, 51, 1766–1783. http://dx.doi.org/10.1016/j.compedu.2008.05.006.
... For example, although students bring laptops to the classroom for academic tasks such as taking notes [22], they also use laptops for non-academic tasks like browsing the internet [23], playing games [24], and checking email [25]. They switch back and forth between academic and nonacademic tasks [26], and such multitasking hinders learning in the classroom environment. A 2020 study showed that students who use laptops in the classroom engage in more multitasking, which causes negative effects on their ability to remember the course contents [27]. ...
... A 2020 study showed that students who use laptops in the classroom engage in more multitasking, which causes negative effects on their ability to remember the course contents [27]. Furthermore, multitasking on a laptop can distract nearby students who are in direct view of the laptop [26]. It is critical to understand the student's behavior in large lectures because multitasking negatively affects GPA, efficiency, self-regulation, recall, test performance, and reading comprehension [28]. ...
Article
Full-text available
The increase of instructional technology, e-learning resources, and online courses has created opportunities for data mining and learning analytics in the pedagogical domain. A large amount of data is obtained from this domain that can be analyzed and interpreted so that educators can understand students’ attention. In a classroom where students have their own computers in front of them, it is important for instructors to understand whether students are paying attention. We collected on- and off-task data to analyze the attention behaviors of students. Educational data mining extracts hidden information from educational records, and we are using it to classify student attention patterns. A hybrid method is used to combine various techniques like classifications, regressions, or feature extraction. In our work, we combined two feature extraction techniques: principal component analysis and linear discriminant analysis. Extracted features are used by a linear and kernel support vector machine (SVM) to classify attention patterns. Classification results are compared with linear and kernel SVM. Our hybrid method achieved the best results in terms of accuracy, precision, recall, F1, and kappa. Also, we correlated attention with learning. Here, learning corresponds to tests and a final course grade. For determining the correlation between grades and attention, Pearson’s correlation coefficient and p-value were used.
... As per several studies students who use various applications (Text messaging apps and social networking sites like facebook, twitter etc.) over their smart-phones or laptops during classes or self-studies are absent minded and distracted resulting in deteriorated academic performance than students who restrain from these kind of behaviours [2][3][4] apart from above findings, they also observed that students who eagerly involve in use of smart-phones during a lesson take significantly fewer notes and score low on test or quiz. A study entitled "Laptop multitasking hinders classroom learning for both users and nearby peers" conducted [5] examined the multi-tasking of Laptop or smart-phone users and people sitting around them. In their experiment, they studied 44 under-graduate students stated that multi-tasking reduces the comprehensive ability of the students engaged in multi-tasking and students around them during cognitive task or learning. ...
... However, situations involving responsible remembering often involve environments full of distractions (i.e., watching TV while babysitting) and people need to learn and remember important information while distracted. Additionally, although it may be responsible to avoid distractions and allocate all of one's cognitive resources towards remembering the most important information, distractions are often unavoidable and some learners frequently divide their attention despite the well-known effects of divided attention on memory (Fried, 2008;Sana et al., 2013). ...
Article
Full-text available
We are frequently exposed to situations where we need to remember important information when our attentional resources are divided; however, it was previously unclear how divided attention impacts responsible remembering: selective memory for important information to avoid consequences for forgetting. In the present study, we examined participants’ memory for valuable information, metacognitive accuracy, and goal-directed cognitive control mechanisms when under full and divided attention. In Experiment 1, participants were presented with words paired with point values counting towards their score if recalled but were required to “bet” on whether they would remember it. Results revealed that selective memory for high-value information was impaired under divided attention. In Experiment 2, we presented participants with unassociated word pairs and solicited metacognitive predictions of recall (i.e., JOLs). Results revealed that the relative accuracy of participants’ metacognitive judgments was enhanced when studying under divided attention. Experiment 3 examined cognitive control mechanisms to selectively remember goal-relevant information at the expense of information that could potentially be offloaded (i.e., responsible forgetting ). Results revealed that participants’ ability to strategically prioritize goal-relevant information at the expense of information that could be offloaded was preserved under divided attention. Collectively, responsible attention encompasses how attentional resources impact one’s ability to engage in responsible remembering and we demonstrate that responsible remembering can be impaired, enhanced, and preserved in certain contexts.
... Besides, some studies on mobile learning already document several challenges of implementing mobile learning without good teaching practices and methods. For instance, previous studies on m-learning caution that utilizing mobile devices may cause distractions in the classroom and indifference in interpersonal relationships, especially where appropriate teaching methods and approaches are neglected (Sana et al., 2013; Flanigan and Titsworth, 2020; Kopecky et al., 2021). Additionally, research acknowledges that to boost students' performance in online instruction, mobile learning must be integrated with appropriate instructional strategies and approaches (Ashah, 2018; Sung et al., 2016). ...
Article
Full-text available
Purpose: Research advocates the use of good teaching practices and approaches when integrating technology into digitally enhanced learning, on the premise that previous studies on mobile learning have neglected this aspect of technology integration, resulting in numerous challenges. Moreover, the literature shows a scarcity of studies on the use of mobile learning in teaching productive skills. Linguists, on the other hand, recommend the use of responsive lesson design frameworks in language teaching, claiming effectiveness in teaching all language skills; however, responsive lesson design frameworks are yet to be implemented in a classroom setting. To bridge these gaps, our study utilizes the CAPE framework as a good teaching method for improving students' productive skills in mobile-based instruction. Design/methodology/approach: This study utilizes a mixed-methods research design with an experimental approach. Post-tests and interviews were employed to elicit information from the student participants on the objective of the study. Findings: Analysis of the collected data yielded notable findings. While there was evidence that students perceived m-learning as boring and ineffective when incorporated into a traditional lesson framework, our study unveiled that students showed a different perception when it was incorporated into the CAPE framework. Practical implications: Our study unveils that integrating responsive lesson frameworks with m-learning improves students' speaking and writing skills. Originality/value: This study provides empirical evidence on the role of good teaching practices, such as integrating responsive LDFs and mobile learning, in improving students' productive skills. It is the first to investigate the integration of CAPE and mobile learning in enhancing expressive skills.
... In this study, students' achievement in technical knowledge, both basic and procedural, indicated that achievement was much better on procedural knowledge than on basic knowledge. This finding may be explained by students' focus having been disrupted during the online theoretical lectures for basic knowledge, since this was their first time taking the course fully online, a mode of delivery associated with multiple variables that might have influenced its effectiveness [19]. On the other hand, students' achievement in the practical work they completed indicated encouraging findings. ...
Article
Full-text available
This study investigated the effectiveness of teaching solar site survey skills through an online medium, as well as students' perceptions of learning site survey online. A survey study was conducted involving 30 students enrolled in a solar PV installation and maintenance course. An online questionnaire consisting of eight items on a five-point Likert scale, adapted from a previous study, gauged students' perceptions of learning site survey through online media (i.e., online lecture, WhatsApp, chat, voice, picture, short video clips, and a learning management system, LMS). An assignment was used to indicate students' performance in site survey skills, together with a short test of technical knowledge consisting of 10 multiple choice questions (MCQ) developed from the syllabus subtopic on solar site survey. The technical knowledge questions were divided into two types: items testing basic knowledge, namely concepts (i.e., tools) and principles (i.e., the relationship between sunlight and shading), and items testing procedures (i.e., application of knowledge). The results: students' average achievement on the technical knowledge test was 68.3% (M = 6.83, SD = 1.64); the average mark on the solar site survey sketch was 62.8%; and students' perception of learning solar PV site survey was at a high level (M = 3.99, SD = 0.69). It is concluded that solar PV site survey skills can be taught at a distance through an online medium, and that delivery can be improved through more video content packaged in the form of live sessions.
... It has been established that taking notes is beneficial for learning because it creates new neural connections that reinforce memory. This is true for note-taking during both lectures and textbook reading [7]. Students remember more when they take notes and can use that information for testing [8]. ...
Article
Purpose: Technological advances are changing how students approach learning. The traditional note-taking methods of longhand writing have been supplemented and replaced by tablets, smartphones, and laptop note-taking. It has been theorized that writing notes by hand requires more complex cognitive processes and may lead to better retention. However, few studies have investigated the use of tablet-based note-taking, which allows the incorporation of typing, drawing, highlights, and media. We therefore sought to confirm the hypothesis that tablet-based note-taking would lead to equivalent or better recall as compared to written note-taking. Methods: We allocated 68 students into longhand, laptop, or tablet note-taking groups, and they watched and took notes on a presentation on which they were assessed for factual and conceptual recall. A second short distractor video was shown, followed by a 30-minute assessment at the University of California, Irvine campus, over a single day period in August 2018. Notes were analyzed for content, supplemental drawings, and other media sources. Results: No significant difference was found in the factual or conceptual recall scores for tablet, laptop, and handwritten note-taking (P=0.61). The median word count was 131.5 for tablets, 121.0 for handwriting, and 297.0 for laptops (P=0.01). The tablet group had the highest presence of drawing, highlighting, and other media/tools. Conclusion: In light of conflicting research regarding the best note-taking method, our study showed that longhand note-taking is not superior to tablet or laptop note-taking. This suggests students should be encouraged to pick the note-taking method that appeals most to them. In the future, traditional note-taking may be replaced or supplemented with digital technologies that provide similar efficacy with more convenience.
Article
After a decade of adding technology to the classroom, students asking for a laptop ban sent me on a journey of discovery. After a literature review of existing research and a semester of a no-tech policy, I found that less tech, not more, increases student engagement and learning. Despite more than a dozen studies over the last decade detailing the negative learning effects of laptops in the classroom, the majority of faculty believe that in-class laptop use increases learning. I highlight the research findings, explain my experience with the new policy, and provide suggestions on how to attempt your own.
Chapter
Due to pandemic-induced mobility restrictions, time spent in front of a device, videoconferencing, and, in particular, e-learning increased significantly. Zoom and other video conferencing applications will continue to be part of our everyday life as organizations decide to keep working remotely in the years to come. Multitasking has long been researched; however, the pandemic has prompted the need to investigate multitasking in contexts where the primary task is attending an e-learning video conference session. In general, there has been limited research on the effects of multitasking in remote settings. Moreover, most studies on multitasking have relied on diaries or self-reported testimonies for data collection. To fill this theoretical void and methodological limitation, we collect physiological data to better understand how task volume and visual attention in a multitasking setting affect learning performance during an e-learning video conference session. Results suggest that multitasking does not affect information retention, but it does affect users' perceived information retention. The practical implications discussed are important for the education sector and, more broadly, for organizations using video conferencing to communicate and collaborate, as they help explain how multitasking behaviors may affect work productivity and performance.
Article
This study analyzes the effect of local Internet speed infrastructure (backhaul) on educational outcomes. In 2008, the Brazilian government implemented an Internet expansion policy that brought broadband to more than 3,000 municipalities. The policy was designed with implementation criteria that make it a natural experiment that can be investigated through a regression discontinuity design (RDD). The results suggest worsening proficiency, higher dropout and retention among students from municipalities served by more powerful backhauls, i.e., capable of supporting higher connection speeds over fiberoptic lines. These results are paralleled in the empirical literature, which predominantly indicates negative or neutral effects of Internet access on education. This study demonstrates the need for a deeper reflection on the domestic use of the Internet and its consequences on educational outcomes of school-age children and adolescents.
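As a rough illustration of the regression discontinuity logic invoked above, the sketch below fits a local linear model around an eligibility cutoff; the simulated data, bandwidth, and variable names are all hypothetical and do not reproduce the study's actual specification:

```python
# Illustrative regression discontinuity sketch: outcome regressed on a treatment
# indicator and the running variable, restricted to observations near the cutoff.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
running = rng.uniform(-10, 10, size=500)    # e.g., distance from the policy eligibility cutoff
treated = (running >= 0).astype(float)      # municipalities above the cutoff receive backhaul
outcome = 50 + 0.3 * running - 2.0 * treated + rng.normal(0, 3, size=500)

bandwidth = 5
near = np.abs(running) <= bandwidth         # local window around the cutoff
X = sm.add_constant(np.column_stack([treated[near], running[near],
                                     treated[near] * running[near]]))
fit = sm.OLS(outcome[near], X).fit()
print(fit.params[1])                        # local estimate of the treatment effect at the cutoff
```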
Article
Full-text available
This paper aims to answer the following question: do the qualities attributed to the generation known as "digital natives" (among other analogous terms) really exist? It is a qualitative study, characterized as a bibliographic review. The analytical corpus consisted of works that support the idea of the emergence of "digital natives" and their characterization, as well as studies that contest the characteristics attributed to this generation. Searches covered texts by researchers focused on characterizing "digital natives" and by others who demystify the qualities ascribed to this group. The corpus was analyzed through Content Analysis, which resulted in two constructions based on the emergent categories: Digital Natives; and Urban Legend? The results indicate that many of the qualities attributed to "digital natives" can be considered nonexistent. Thus, it is understood that people born immersed in the world of digital information and communication technologies may indeed show a greater intuitive capacity for interacting with modern artifacts, but, when it comes to applying this capacity to learning, they still need guidance.
Conference Paper
Full-text available
We have recently cast doubt (Craik, Govoni, Naveh-Benjamin, & Anderson, 1996; Naveh-Benjamin, Craik, Guez, & Dori, 1998) on the view that encoding and retrieval processes in human memory are similar. Divided attention at encoding was shown to reduce memory performance significantly, whereas divided attention at retrieval affected memory performance only minimally. In this article we examined this asymmetry further by using more difficult retrieval tasks, which require substantial effort. In one experiment, subjects had to encode and retrieve lists of unfamiliar name–noun combinations attached to people's photographs; in the other, subjects had to encode words that were either strong or weak associates of the cues presented with them and then retrieve those words with either intra- or extra-list cues. The results of both experiments showed that, unlike division of attention at encoding, which reduces memory performance markedly, division of attention at retrieval has almost no effect on memory performance, but it was accompanied by an increase in secondary-task cost. Such findings again illustrate the resilience of retrieval processes to manipulations involving the withdrawal of attention. We contend that retrieval processes are obligatory or protected, but that they require attentional resources for their execution.
Article
Full-text available
This paper reports on a study that examined the use of wireless laptops for promoting active learning in lecture halls. The study examined students’ behavior in class and their perceptions of the new learning environment throughout three consecutive semesters. An online survey revealed that students have highly positive perceptions about the use of wireless laptops, but less positive perceptions about being active in class. Class observations showed that the use of wireless laptops enhances student-centered, hands-on, and exploratory learning as well as meaningful student-to-student and student-to-instructor interactions. However, findings also show that wireless laptops can become a source of distraction, if used for non-learning purposes.
Article
Full-text available
In 3 empirical studies we examined the computer technology needs and concerns of close to 800 college and university students with various disabilities. Findings indicate that the overwhelming majority of these students used computers, but that almost half needed some type of adaptation to use computers effectively. Data provided by the students and by a small sample of professors underscore the importance of universal design in a variety of areas: courseware development, electronic teaching and learning materials, and campus information technology infrastructure. Sex and age of students were only minimally related to attitudes toward computers or their use in our samples. Key findings summarize the problems faced by students with different disabilities as well as the computer-related adaptations that are seen as helpful. These are used to formulate concrete, practical recommendations for faculty to help them ensure full access to their courses.
Book
Contents: early work on attention; selective report and interference effects in visual attention; the nature of visual attention; combining the attributes of objects and visual search; selection for action; task combination and divided attention; automaticity, skill and expertise; intentional control and willed behaviour; the problem of consciousness; deficits of attention; consciousness and control.