FIGURE 1 | Violin plot of children's raw Raven's Colored Progressive Matrices (RCPM) scores assessed online and face-to-face.
Source publication
There has been an increase in cognitive assessment via the Internet, especially since the coronavirus disease 2019 (COVID-19) pandemic increased the need for remote psychological assessment. This is the first study to investigate the appropriateness of conducting cognitive assessments online with children with a neurodevelopmental condition and intellectual disability, n...
Contexts in source publication
Context 1
... children assessed face-to-face. Descriptive violin-plots (similar to boxplots, but with the envelope width indicating frequency in different value ranges) show the distribution and variability of children's raw RCPM (Figure 1) and BPVS (Figure 2). For the RCPM assessments, the violin-plots show similar variability of scores. ...
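As an illustration of the plot type described in this excerpt, a violin plot of this kind can be produced with standard plotting libraries. The following is a minimal sketch in Python/Matplotlib using randomly generated placeholder scores, not the study's data; the variable names and value ranges are assumptions made for the example.

```python
# Minimal sketch of a violin plot comparing two assessment modes.
# Uses hypothetical RCPM raw-score arrays, not the study's data.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
online_scores = rng.integers(5, 36, size=30)        # placeholder raw scores
face_to_face_scores = rng.integers(5, 36, size=30)  # placeholder raw scores

fig, ax = plt.subplots()
# Envelope width reflects how frequently scores fall in each value range
ax.violinplot([online_scores, face_to_face_scores], showmedians=True)
ax.set_xticks([1, 2])
ax.set_xticklabels(["Online", "Face-to-face"])
ax.set_ylabel("Raw RCPM score")
plt.show()
```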
Citations
... The majority of studies, with a few exceptions (e.g. Ashworth et al., 2021; Elmehdi and Ibrahem, 2019; Sánchez-Cabrero et al., 2021; Spivey and McMillan, 2014), are focussed on adoption and perception (e.g. Nikou and Economides, 2016; Or and Chapman, 2022), students' experiences (e.g. ...
... Recent studies also remain inconclusive. Sánchez-Cabrero et al. (2021) show that online assessments led to a 10% increase in performance, while Ashworth et al. (2021) find no performance differences between online and paper-based assessments. Given the contradictory findings of the existing literature, there is currently inconclusive evidence on the impact of online assessment on students' academic performance. ...
... Moreover, studies (e.g. Ashworth et al., 2021; Sánchez-Cabrero et al., 2021; Spivey and McMillan, 2014) that evaluate the impact of online assessment have largely relied on assessment results without complementing these with the views of students. Given that students' opinions matter regarding how they are assessed, it is important to evaluate whether there is divergence between students' views and their performance to advance the ongoing debate on online and paper-based assessments. ...
Purpose
With the outbreak of the COVID-19 pandemic, online assessment has become the dominant mode of examination in higher education institutions. However, there are contradictory findings on how students perceive online assessment and its impact on their academic performance. Thus, the purpose of this study is to evaluate the potential impact of online assessment on students' academic performance.
Design/methodology/approach
This study proposes a research model based on task–technology fit theory and empirically validates the model using a survey of students in the UK. In addition, the study conducted four experiments comparing paper-based and online assessments and analysed the data using paired-sample t tests and structural equation modelling.
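For readers less familiar with the first analysis named above, a paired-sample t test compares two marks obtained from the same student (paper-based vs. online). The snippet below is a minimal sketch using SciPy on hypothetical paired marks; it is not the authors' analysis code, and all score values are invented for illustration.

```python
# Illustrative paired-sample t test on hypothetical paired marks
# (one paper-based and one online mark per student); not the study's data.
import numpy as np
from scipy import stats

paper_scores = np.array([62, 55, 71, 48, 66, 59, 73, 50, 64, 58], dtype=float)
online_scores = np.array([65, 60, 70, 55, 69, 63, 75, 54, 68, 61], dtype=float)

# ttest_rel tests whether the mean within-student difference is zero
t_stat, p_value = stats.ttest_rel(online_scores, paper_scores)
mean_diff = np.mean(online_scores - paper_scores)
print(f"mean difference = {mean_diff:.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```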
Findings
The findings show that the use of online assessment has a positive impact on students' academic performance. Similarly, the results from the experiment also indicate that students perform better using online assessments than paper-based assessments.
Practical implications
The findings provide crucial evidence needed to shape policy towards institutionalising online assessment. In addition, the findings provide assurance to students, academics, administrators and policymakers that carefully designed online assessments can improve students' academic performance. Moreover, the study also provides important insights for curriculum redesign towards transitioning to online assessment in higher education institutions.
Originality/value
This study advances research by offering a more nuanced understanding of the impact of online assessment on students' academic performance, since the majority of previous studies have offered contradictory findings. First, the study moves beyond existing research by complementing assessment results with the views of students in evaluating the impact of online assessment on their academic performance. Second, the study develops and validates a research model that explains how the fit between technology and assessment tasks influences students' academic performance. Lastly, the study provides evidence to support the wide use of online assessment in higher education.
... Findings indicate that speech and language characteristics (e.g., mean length of utterance, number of different words) among toddlers during play with a parent (Manning et al., 2020) and performance on a standardized language assessment among school-age children with language impairment (Sutherland et al., 2017) showed good feasibility, and reliability and/or validity of assessments did not differ significantly from data collected during face-to-face sessions. Ashworth et al. (2021), however, reported significantly higher verbal performance (assessed via the British Picture Vocabulary Scale, Third Edition [BPVS-3]) during online virtual visits vs. laboratory visits among school-aged children with Williams syndrome. Although these past studies indicate the utility of conducting language assessments among toddlers and school-aged children via a virtual visit platform, we are unaware of prior work that has compared infant language assessments conducted via a synchronous virtual visit vs. an in-person laboratory format. ...
The COVID-19 pandemic has necessitated innovations in data collection protocols, including the use of virtual or remote visits. Although developmental scientists used virtual visits prior to COVID-19, validation of virtual assessments of infant socioemotional and language development is lacking. We aimed to fill this gap by validating a virtual visit protocol that assesses mother and infant behavior during the Still Face Paradigm (SFP) and infant receptive and expressive communication using the Bayley-III Screening Test. Validation was accomplished through comparisons of data (i.e., proportions of missing data for a given task; observed infant and maternal behaviors) collected during in-person laboratory visits and virtual visits conducted via Zoom. Of the 119 mother-infant dyads who participated, 73 participated in lab visits only, 13 participated in virtual visits only, and 33 dyads participated in a combination of lab and virtual visits across four time points (3, 6, 9, and 12 months). Maternal perspectives of, and preferences for, virtual visits were also assessed. Proportions of missing data were higher during virtual visits, particularly for assessments of infant receptive communication. Nonetheless, comparisons of virtual and laboratory visits within a given time point (3, 6, or 9 months) indicated that mothers and infants showed similar proportions of facial expressions, vocalizations and directions of gaze during the SFP, and infants showed similar and expected patterns of behavioral change across SFP episodes. Infants also demonstrated comparable expressive and receptive communicative abilities across virtual and laboratory assessments. Maternal reports of ease and preference for virtual visits varied by infant age, with mothers of 12-month-old infants reporting, on average, less ease of virtual visits and a preference for in-person visits. Results are discussed in terms of feasibility and validity of virtual visits for assessing infant socioemotional and language development, and broader advantages and disadvantages of virtual visits are also considered.
Objectives
We explored whether adapting neuropsychological tests for online administration during the COVID-19 pandemic was feasible for dementia research.
Design
We used a longitudinal design for healthy controls, who completed face-to-face assessments 3–4 years before remote assessments. For patients, we used a cross-sectional design, contrasting a prospective remote cohort with a retrospective face-to-face cohort matched for age/education/severity.
Setting
Remote assessments were conducted using video-conferencing/online testing platforms, with participants using a personal computer/tablet at home. Face-to-face assessments were conducted in testing rooms at our research centre.
Participants
The remote cohort comprised 25 patients (n=8 Alzheimer’s disease (AD); n=3 behavioural variant frontotemporal dementia (bvFTD); n=4 semantic dementia (SD); n=5 progressive non-fluent aphasia (PNFA); n=5 logopenic aphasia (LPA)). The face-to-face patient cohort comprised 64 patients (n=25 AD; n=12 bvFTD; n=9 SD; n=12 PNFA; n=6 LPA). Ten controls who previously participated in face-to-face research also took part remotely.
Outcome measures
The outcome measures comprised the strength of evidence under a Bayesian framework for differences in performance between testing environments on general neuropsychological and neurolinguistic measures.
Results
There was substantial evidence suggesting no difference across environments in both the healthy control and combined patient cohorts (including measures of working memory, single-word comprehension, arithmetic and naming; Bayes factors (BF01) > 3), in the healthy control group alone (including measures of letter/category fluency, semantic knowledge and bisyllabic word repetition; all BF01 > 3), and in the combined patient cohort alone (including measures of working memory, episodic memory, short-term verbal memory, visual perception, non-word reading, sentence comprehension and bisyllabic/trisyllabic word repetition; all BF01 > 3). In the control cohort alone, there was substantial evidence in support of a difference across environments for tests of visual perception (BF01 = 0.0404) and monosyllabic word repetition (BF01 = 0.0487).
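As a rough illustration of how such Bayes factors are read: BF01 quantifies evidence for the null (no difference between testing environments) relative to the alternative, with BF01 = 1/BF10, and values above 3 are conventionally taken as substantial evidence for the null. The sketch below uses the pingouin library's default JZS Bayes factor on simulated placeholder scores; it is not the study's analysis, and the group sizes and score distributions are assumptions made for the example.

```python
# Illustrative two-sample comparison with a default JZS Bayes factor,
# using the pingouin library on hypothetical scores (not the study's data).
# BF01 = 1 / BF10; BF01 > 3 is conventionally read as substantial evidence
# for no difference between testing environments.
import numpy as np
import pingouin as pg

rng = np.random.default_rng(1)
remote = rng.normal(20, 4, size=25)        # placeholder remote test scores
face_to_face = rng.normal(20, 4, size=25)  # placeholder in-person test scores

res = pg.ttest(remote, face_to_face)       # returns a one-row DataFrame
bf10 = float(res["BF10"].iloc[0])          # evidence for a difference
bf01 = 1.0 / bf10                          # evidence for no difference
print(f"BF10 = {bf10:.3f}, BF01 = {bf01:.3f}")
```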
Conclusions
Our findings suggest that remote delivery of neuropsychological tests for dementia research is feasible.
Objectives
We explored whether adapting traditional neuropsychological tests for online administration against the backdrop of COVID-19 was feasible for people with diverse forms of dementia and healthy older controls. We compared face-to-face and remote settings to ascertain whether remote administration affected performance.
Design
We used a longitudinal design for healthy older controls who completed face-to-face neuropsychological assessments between three and four years before taking part remotely. For patients, we used a cross-sectional design, contrasting a prospective remote cohort with a retrospective face-to-face cohort matched in age, education, and disease duration.
Setting
Remote assessments were performed using video-conferencing and online testing platforms, with participants using a personal computer or tablet and situated in a quiet room in their own home. Face-to-face assessments were carried out in dedicated testing rooms in our research centre.
Participants
The remote cohort comprised ten healthy older controls (also seen face-to-face 3-4 years previously) and 25 patients (n=8 Alzheimer’s disease (AD); n=3 behavioural variant frontotemporal dementia (bvFTD); n=4 semantic dementia (SD); n=5 progressive nonfluent aphasia (PNFA); n=5 logopenic aphasia (LPA)). The face-to-face patient cohort comprised 64 patients (n=25 AD; n=12 bvFTD; n=9 SD; n=12 PNFA; n=6 LPA).
Primary and secondary outcome measures
The outcome measures comprised the strength of evidence under a Bayesian analytic framework for differences in performance between face-to-face and remote testing environments on a general neuropsychological battery (primary outcomes) and a neurolinguistic battery (secondary outcomes).
Results
There was evidence to suggest comparable performance across testing environments for all participant groups, for a range of neuropsychological tasks across both batteries.
Conclusions
Our findings suggest that remote delivery of neuropsychological tests for dementia research is feasible.
Strengths and limitations of this study
Methodological strengths of this study include
Diverse patient cohorts representing rare dementias with specific communication difficulties
Sampling of diverse and relevant neuropsychological domains
Use of Bayesian statistics to quantify the strength of evidence for the putative null hypothesis (no effect between remote and face-to-face testing)
Limitations include
Relatively small cohort sizes
Lack of direct head-to-head comparisons of test environment in the same patients