
Gavin F Revie, PhD
- Higher education consultant at Bezalel Education
Higher education consultant with Bezalel Education. I provide training on research methods and statistics.
About
Publications: 18 · Reads: 48,312
Citations: 274
Introduction
Current institution: Bezalel Education
Current position: Higher education consultant
Additional affiliations: February 2015 - present
Publications (18)
Objective
To determine if re-establishment of occlusal contact was achieved within 6 months after insertion of a fixed anterior bite plane in individuals with Class II division 2 incisor relationship, and to evaluate the occlusal and vertical skeletal changes and acceptance of the intra-oral scanner and bite plane.
Design
Single-centre two-arm par...
Learning suturing skills is an important area of the undergraduate curriculum and ideally requires realistic and anatomically accurate surgical training models to prepare students for treating patients. Little is currently understood regarding which model might be perceived by students to be the best or which might most effectively facilitate their...
Background:
Cleft lip with cleft palate (CLP) is a congenital condition that affects both the oral cavity and the lips. This study estimated the prevalence and mortality of CLP using surveillance data collected from birth defect registries around the world.
Methods:
Data from 22 population- and hospital-based surveillance programs affiliated wit...
Background:
The Wikipedia Collaboration of Dental Schools (WCODS) is a student-led initiative that aims to publish high quality scientific, evidence-based dental content on the Wikipedia online encyclopaedia by equipping its members to use research, critical appraisal and writing skills to create accurate content. In 2019, the Collaboration launch...
Radiographic methods using pulp-tooth volume ratio (PTVR) are important for dental age estimation. According to previous studies, PTVR has different relationships with age in males and females, but none of the studies have used a homogeneous (approximately equal numbers of individuals in each age range) age distribution to assess this re...
Photographs of a person smiling may provide valuable information about their anterior dentition. These images can be an alternative ante-mortem (AM) dental source in cases with no dental records, which gives the forensic odontologist a significant opportunity for comparative dental analysis. There are no reported studies that have investigated the...
The properties of the skin and the posture of the body during photographic recording are factors that cause distortion in the bite mark injury. This study aimed to explore the degree of distortion between a ‘touch mark’ (method 1) and a ‘bite mark’ (method 2) on the left upper arm at three different positions (arm relaxed; arm flexed in two differe...
Human identification is an important part of criminal investigation, and a way to express respect for the legal rights of the dead. Sex estimation is the keystone of profiling. Skeletally, the skull is the best sex indicator after the pelvis. Some metric studies of sexing the mandible explore linear and areal measurements; however, these measureme...
There has been a significant expansion in the use of 3-dimensional (3D) dental images in recent years. In the field of forensic odontology, an automated 3D dental identification system could enhance the identification process. This study presents a novel method for automated human dental identification using 3D digital dental data by utilising a de...
Secondary dentine deposition is responsible for the decrease in the volume of the pulp cavity with age. Therefore, the volume of the pulp cavity can be considered as a predictor for estimating age. The aims of this study were to investigate the relationship strength between canine pulp volumes and chronological age from a homogeneous (approximately eq...
The identification of skeletonized remains requires sex estimation. After the pelvis, the skull is considered the best sex predictor in the human skeleton. Lateral cephalograms provide details of the skull’s morphology, and previous studies have investigated sex by analysing calibrated lateral cephalograms of adults aged 20–55 years. Due to the lack of s...
Over the years forensic odontology has evolved from outdated methods to the techniques used today, which reflect advances in medical science. This study is a retrospective documentary analysis of 162 forensic dental cases from the late 1960s to early 2000s in Scotland. The main objective was to collect, catalogue an...
Introduction
Discrepancy of the upper dental midline to the facial midline plays an important role in smile aesthetic assessment. This study presents different reference points to quantify the deviation of upper dental midline to the facial midline in 2D frontal photographs. The aim was to find the most accurate, precise, and practical reference po...
Summary
Objective
To compare treatment duration between 0.018-inch and 0.022-inch slot systems and determine factors influencing treatment duration.
Subjects and methods
Eligible participants aged 12 years or over were allocated to the 0.018-inch or 0.022-inch slot MBT appliance (3M-Unitek, Monrovia, California, USA) using block randomization i...
Summary
Objective
To compare the quality of orthodontic treatment between 0.018-inch and 0.022-inch slot bracket systems.
Subjects and methods
Eligible participants aged 12 years or over were allocated to the 0.018-inch or 0.022-inch slot MBT appliance (3M-Unitek, Monrovia, California, USA) using block randomization in groups of 10. Outcome mea...
Summary
Objective
To compare orthodontically induced inflammatory root resorption (OIIRR) and patient perception of pain during orthodontic treatment between 0.018-inch and 0.022-inch slot bracket systems.
Subjects and methods
Eligible participants aged 12 years or above were allocated to treatment with the 0.018-inch or 0.022-inch slot MBT app...
This experiment sought to explore the theory that familiar English words are processed similarly to objects. To do this, we looked for object-based attentional facilitation where cues in a different location to the target still facilitate target detection as long as they are inside the same object. Participants were shown two English words in an ar...
We investigated the mental rehearsal of complex action instructions by recording spontaneous eye movements of healthy adults as they looked at objects on a monitor. Participants heard consecutive instructions, each of the form "move [object] to [location]". Instructions were only to be executed after a go signal, by manipulating all objects success...
Questions
Questions (5)
I have been analyzing examination data from students and have been tasked with identifying whether any of the examiners were unduly harsh or lenient. The problem I've been having is that the design of the exam is poorly suited for identifying harsh examiners. Students proceed round 10 stations, each of which has a different examiner providing the only grade for that station. Furthermore, no one examiner saw all of the students. Different examiners were assigned to the same station at different points during the exam. This leaves us in the situation where the variation in the students' performance was due to:
- Changes in student ability
- Changes in the station they're on
- Changes in the examiner they're being assessed by
These different types of variance are all intermixed and I don't see an easy way of separating them. Someone mentioned Rasch analysis to me, although I struggled to find a user-friendly tutorial for it. As far as I can tell it is about modelling the student's ability as a latent trait, and I'm not sure how that would help me with the problem of identifying harsh examiners.
What I have done for this year's class was a compromise. I plotted the mean score across all stations awarded by each individual examiner and then converted them to z-scores so that I could identify which examiners tended to issue harsher or more lenient scores. In one case I found an examiner whose average score awarded was -4 standard deviations from the overall mean. However, on closer examination, the specific students they had assessed turned out to be quite poor in other areas as well.
For next year's exam I was considering a different approach. I was thinking of creating 10 regression models, each using one station's score as the outcome variable and all the other stations as the predictors. That is, for each station I would model what the student's score should have been based on their performance at the other stations. I would then create standardised residuals for each student at each station and see how far their performance on any one station is from what their overall performance says it should have been. Because I'll only be interested in modelling my own specific dataset, most of the usual assumptions you worry about with linear regression won't apply, since I don't care about generalisability. With this approach I would simply make a list of the examiners involved in scores that appear "extreme" relative to the rest of that student's performance. If the same examiners keep appearing on the list, it is a fair bet that they are out of step with the other examiners in how they grade. Any one extreme score could simply be the result of a student doing poorly on an individual station, but a pattern of extreme scores attributed to a single examiner would be suggestive of a problem with their marking.
I was concerned that I might be reinventing the wheel here and that there may already be codified ways of dealing with this problem (for example the aforementioned Rasch modelling or some sort of LME model). I'm fairly new to modelling approaches any more sophisticated than ordinary regression. Accordingly I wanted to pick the brains of the experts on here.
What do you think:
Is what I'm proposing a good way of dealing with this problem (without redesigning the exam of course!)?
Is there a better way or a pre-existing way that achieves the same thing?
Thank you for any help you can provide.
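The per-station regression idea can be sketched roughly as below (a minimal illustration, not a definitive implementation: it assumes a data frame `scores` with one row per student and columns `station1` … `station10`; the function name `flag_extreme` and the ±2 cutoff are likewise made up for the example).

```r
# Minimal sketch of the proposed approach (all names hypothetical):
# for each station, predict its score from the other nine stations,
# then flag students whose standardised residual exceeds a threshold.
flag_extreme <- function(scores, threshold = 2) {
  stations <- names(scores)
  flags <- list()
  for (st in stations) {
    fit <- lm(reformulate(setdiff(stations, st), response = st), data = scores)
    z <- rstandard(fit)                       # standardised residuals
    flags[[st]] <- which(abs(z) > threshold)  # rows (students) that look extreme
  }
  flags
}
```

A codified alternative worth knowing about: with the data in long format (one row per student-station score), a crossed random-effects model such as `lme4::lmer(score ~ (1|student) + (1|station) + (1|examiner))` separates the three sources of variance directly, and the examiner random effects (via `ranef`) then index harshness or leniency.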
I am currently working through Andy Field's "Discovering Statistics Using R". I've just got to chapter 14 "Mixed designs". He's talking about how you can use an LME in much the same way as an ANOVA.
It's all pretty clear but there's a bit where he's talking about applying contrast weightings (pages 617 and 618) where he's lost me. All throughout this book so far contrast weightings have been described in what I would consider a pretty standard way: 0 means an item is not included in a particular contrast, and positive items are compared with negative items. So if you'd wanted to compare group 1 with group 3 while ignoring group 2, you'd use weightings like (-1, 0, 1).
However here Field talks about if an item is always coded as a 0, it acts as a "baseline condition". I don't get this. If it's coded as a 0, surely it won't be included in any analysis at all?
Essentially we have a situation where we wish to compare the first group against the second, and another situation where we wish to compare the 3rd group against the second. I enclose some of my code with my rather confused annotations as well.
# Mixed design as a GLM
# Setting contrasts
# Here they don't have to be orthogonal and you can see the output of the contrast
# I'm a bit unclear on Field's logic at this point since usually setting a contrast
# weight of 0 indicates that a particular condition is not included in this particular
# analysis, whereas here he seems to suggest that setting a weight of 0 indicates something
# forms part of the baseline condition.
# Something specific to LMEs perhaps?
# Contrasts for looks
AttractivevsAv <- c(1,0,0) # why isn't this c(1,-1,0)???
UglyvsAv <- c(0,0,1) # why isn't this c(0,-1, 1)??
contrasts(speedData$looks) <- cbind(AttractivevsAv, UglyvsAv)
attr(speedData$looks, "contrasts")
# Contrasts for personality
HighvsAv <- c(1,0,0)
DullvsAv <- c(0,0,1)
contrasts(speedData$personality) <- cbind(HighvsAv, DullvsAv)
attr(speedData$personality, "contrasts")
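One way to see what Field may mean (a toy illustration, not from the book): with three levels and only two contrast columns, the level coded 0 in every column is not excluded from the analysis — it is the level that the intercept alone reproduces in the design matrix, so it becomes the baseline the other coefficients are compared against.

```r
# Toy 3-level factor (labels hypothetical) with Field-style contrasts:
f <- factor(c("attractive", "average", "ugly"))
contrasts(f) <- cbind(AttractivevsAv = c(1, 0, 0),  # rows follow levels(f)
                      UglyvsAv       = c(0, 0, 1))
model.matrix(~ f)
# The "average" row comes out as (Intercept)=1, 0, 0: it is represented by
# the intercept alone, so each other coefficient is a comparison against it.
```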
I am assisting a student with a measure of agreement for test-retest data using an ICC (2,1) for absolute agreement. The data are ordinal with a 5-point scale, and according to Streiner et al.'s 2015 book, the ICC is superior in these situations to (for example) kappa or weighted kappa.
We've encountered a situation where despite nearly all scores being identical between test 1 and test 2, we have an ICC of .000. As far as I can tell this is because there is almost no variation in the scores. For both test 1 and test 2, nearly all of the answers given are "2". I've queried with the student whether this suggests that there was something wrong with the question in the first instance, since if you ask a question and everyone answers identically, was it really worth asking in the first place?
However what I'd like to know is WHY the ICC is 0 despite nearly all responses agreeing. My understanding is that ICC is a version of correlation adjusted for the fact that the different variables are measuring the same thing. Correlation looks at associations between two variables. If when plotted on a scatter plot scores on both variables are, with very few exceptions, all "2", then it is not possible to model a relationship between the two variables. They're not associated with one another, they're functionally identical.
Is this correct? Thank you for any help you can provide.
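The reasoning above can be checked numerically (ratings invented for the example; ICC(2,1) computed from the two-way ANOVA mean squares). ICC(2,1) is a ratio of variance components with between-subject variance in the numerator, so if virtually every rating is "2", that component is essentially zero and the ratio collapses regardless of how well the two occasions agree.

```r
# Made-up test-retest ratings: nearly everyone answers "2" on both occasions.
t1 <- c(2, 2, 2, 2, 2, 2, 2, 2, 2, 3)
t2 <- c(2, 2, 2, 2, 2, 2, 2, 2, 2, 2)
x  <- cbind(t1, t2)
n  <- nrow(x); k <- ncol(x); gm <- mean(x)
MSR <- k * sum((rowMeans(x) - gm)^2) / (n - 1)           # between subjects
MSC <- n * sum((colMeans(x) - gm)^2) / (k - 1)           # between occasions
MSE <- (sum((x - gm)^2) - (n - 1) * MSR - (k - 1) * MSC) /
       ((n - 1) * (k - 1))                               # residual
icc21 <- (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)
icc21  # ~0: almost no between-subject variance, despite near-perfect agreement
```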
I have a student who has come to me with a problem. He is creating a Gaussian GLM (which I believe is the same thing as running a standard LM?). He needs to be able to generate predictions using this model and wishes to be able to present the actual regression equation used.
Since it is a Gaussian model I was pretty sure it was using the standard Y = Intercept + X*Slope equation, but we're having difficulties confirming this. Using the predict() function we generated some predictions for different participants in his dataset and we're struggling to manually compute the same results. The difficulty is that he has some predictors which are coded as ordinal factors which I believe R automatically dummy codes, but I'm unsure about what level of each factor is represented by each coefficient in the summary output table.
I'm not at all familiar with the glm() function outside of logistic regression. Is there a way to get R to categorically show you what regression equation it is using and which coefficients go with which predictor variables?
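A sketch of how this can be checked (toy data, all names invented): `coef()` gives the fitted equation's coefficients, `model.matrix()` shows exactly how each factor level was dummy coded, and multiplying the two reproduces `predict()`.

```r
# Toy Gaussian GLM with one factor predictor (all data invented):
d <- data.frame(y = c(3.1, 4.0, 5.2, 6.1),
                grade = factor(c("low", "mid", "mid", "high"),
                               levels = c("low", "mid", "high")))
fit <- glm(y ~ grade, family = gaussian, data = d)
coef(fit)               # intercept = baseline level ("low"); one term per other level
X <- model.matrix(fit)  # columns line up one-to-one with names(coef(fit))
manual <- drop(X %*% coef(fit))                  # Y = intercept + sum(coef * column)
all.equal(unname(manual), unname(predict(fit)))  # same equation predict() uses
```

One caveat: if the predictors really are ordered factors (created with `ordered = TRUE`), R's default contrasts for ordered factors are polynomial (`.L`, `.Q`, …) rather than treatment dummies, which is a common source of exactly this confusion — `model.matrix()` reveals which coding was used.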
Traditionally in linear regression your predictors must either be continuous or binary. Ordinal variables are often inserted using a dummy coding scheme. This is equivalent to conducting an ANOVA and the baseline ordinal level will be represented by the intercept.
However what if you want to plug further predictors into your model, some continuous and some also dummy coded ordinal? The rationale for the dummy coding does not make sense to me if there are other predictors also present. What does the baseline category represent in this scenario?
Furthermore, if you are constructing a minimal adequate model, you may end up in the situation where you are dropping some levels of your ordinal predictor but not others. This seems tantamount to saying that being labelled a "5" is informative but being labelled a "7" is not. Where you're only using the ordinal predictor I'd say this probably makes sense, but again when it is mixed in with many other predictors I'm uncertain of the rationale.
In cases where you have many levels to your ordinal predictor (in my case 18) some have advised me to treat it as "pseudo-continuous", which I'm wary of.
Can anyone offer me some better advice?
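For what it's worth, the two codings can be put side by side on toy data (numbers invented) to see what each assumes: the factor version spends one coefficient per non-baseline level, while the "pseudo-continuous" version spends a single slope and assumes equal spacing between adjacent levels.

```r
# Invented data: one ordinal predictor with 3 levels, entered two ways.
d <- data.frame(y     = c(2.0, 2.4, 3.1, 3.0, 4.2, 4.1),
                grade = c(1, 1, 2, 2, 3, 3))
fit_factor  <- lm(y ~ factor(grade), data = d)  # one coefficient per level above baseline
fit_numeric <- lm(y ~ grade, data = d)          # single slope: assumes equal level spacing
coef(fit_factor)
coef(fit_numeric)
# With other predictors in the model, the intercept/baseline still means
# "level 1, with every other predictor at 0 (or at its own baseline level)".
```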