The Number Sets Test was developed to assess the speed and accuracy with which children can identify and process quantities represented by Arabic numerals and object sets. The utility of this test for predicting mathematics achievement and risk for mathematical learning disability (MLD) was assessed for a sample of 223 children. A signal detection analysis of first grade Number Sets Test scores provided measures of children's sensitivity to number and their response bias. The sensitivity measure, d', but not the response bias measure was predictive of third grade mathematics achievement scores, above and beyond the influence of intelligence, working memory, and first grade achievement scores. Further analyses assessed the sensitivity and specificity of the test and revealed that first grade d' scores identified 2 out of 3 children diagnosed as MLD in third grade and correctly identified about 9 out of 10 children who were not at risk for MLD.
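The d′ and response-bias measures described above come from a standard equal-variance signal detection analysis. As a minimal illustrative sketch (the function name and example rates are hypothetical, not the study's data), both can be computed from hit and false-alarm proportions:

```python
from statistics import NormalDist

# Hedged sketch (not the study's code) of equal-variance signal detection:
#   d' = z(H) - z(F)            sensitivity
#   c  = -(z(H) + z(F)) / 2     criterion (response bias)
# where z is the inverse standard-normal CDF applied to the hit rate H
# and false-alarm rate F.

def d_prime_and_bias(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Return (d', c) from hit and false-alarm proportions (0 < p < 1)."""
    z = NormalDist().inv_cdf  # probit (inverse standard-normal CDF)
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c

# Example: a child who correctly marks 90% of target sets but also 20% of foils
dp, c = d_prime_and_bias(0.90, 0.20)  # dp ≈ 2.12, c ≈ -0.22
```

Higher d′ reflects better discrimination of target quantities from foils, independently of the child's overall tendency to respond (captured separately by c).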
The current study used confirmatory factor analysis techniques to investigate the construct validity of the child version of the School Refusal Assessment Scale - Revised (SRAS-R) in a community sample of low socioeconomic status, urban, African American fifth and sixth graders (n = 174). The SRAS-R is the best-researched measure of school refusal behavior in youth and typically yields four functional dimensions. Results of the investigation suggested that a modified version of the four-factor model, in which three items from the tangible reinforcement dimension are removed, may have construct validity in the current sample of youth. In addition, youth endorsement of the dimension measuring avoidance of social and/or evaluative situations was positively associated with unexcused absences. Implications for further psychometric research and early identification and prevention of problematic absenteeism in low-SES, ethnic minority community samples are highlighted.
Mathematical learning disabilities (MLDs) are often associated with math anxiety, yet little is known about the causal relations between calculation ability and math anxiety during the early primary school years. The main aim of this study was to longitudinally investigate the relationship between calculation ability, self-reported evaluation of mathematics, and math anxiety in 140 primary school children between the end of first grade and the middle of third grade. Structural equation modeling revealed a strong influence of calculation ability and math anxiety on the evaluation of mathematics, but no effect of math anxiety on calculation ability or vice versa, contrasting with the frequent clinical reports of math anxiety even in very young MLD children. To summarize, our study is a first step toward a better understanding of the link between math anxiety and math performance in the early primary school years during typical and atypical courses of development.
Tested the hypothesis that severely language-impaired students could be affected disproportionately by the revised language of the Wechsler Intelligence Scale for Children-Third Edition (WISC-III). The performance of severely language-impaired students (2nd–6th grade, aged 8–13 yrs) on the WISC-III was examined through comparison of differences among Verbal, Performance, and Full Scale IQs and scaled subtest score ranges for the WISC-III and the Wechsler Intelligence Scale for Children-Revised (WISC-R). Scores from both Wechsler scales and measures of language and academic skills were compared. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Argues that the revised edition of the Wide Range Achievement Test (WRAT-R) is not very different from its predecessors and continues to embody some of the worst characteristics of the testing industry. A major improvement in the standardization sample, along with minor changes in format and item content, is reported. It is concluded that unless a "quick and dirty" assessment of achievement is desired, there is no reason to use the WRAT-R.
The percentages of children (6–16 yr olds) with various WISC-III (Wechsler Intelligence Scale for Children-III) subtest patterns were determined for a learning-disabled sample, a separate attention deficit hyperactivity disorder (ADHD) sample, and the WISC-III standardization sample. The subtest patterns included the ACID (arithmetic, coding, information, and digit span) profile, the Bannatyne profile, and other related patterns suggested by the research literature. The utility of the subtest patterns in diagnostic decisions is discussed.
In this article, the author reviews the Reynolds Intellectual Assessment Scales (RIAS), an individually administered test of intelligence appropriate for ages 3 through 94 years, with a conormed, supplemental measure of memory. The RIAS should be administered by examiners who have formal training in assessment. In this regard, the RIAS is a restricted purchase product, and proof of training is required for its purchase. The RIAS can be used, for example, as a comprehensive measure of verbal and nonverbal intelligence and of global intelligence; as a measure of memory for meaningful verbal material and visual memory; and for the diagnosis of various childhood conditions (e.g., learning disabilities, mental retardation, and giftedness). Moreover, the RIAS can be useful for researchers who need a measure of intelligence in their research design. The author finds the RIAS to be a reliable and valid measure of intelligence for children and adults and a welcome addition to the measures of intelligence presently available. Nevertheless, additional research in clinical settings is needed. The RIAS should be well received in the United States because of its sound psychometric properties; clinical and empirical utility; ease of administration, scoring, and interpretation; and nondependence on motor coordination, visual-motor speed, and reading. The RIAS should also be normed in other countries (e.g., Canada) to increase its applicability and appeal.
In this article, the author reviews the Battelle Developmental Inventory, 2nd edition (BDI-2), a criterion-referenced, individually administered, standardized assessment used to measure developmental skills in children aged birth through 7 years, 11 months. The BDI-2 is composed of 450 items grouped into five domains (Adaptive, Personal/Social, Communication, Motor, and Cognitive) and a 100-item Screening Test. It can be administered in either English or Spanish. The BDI-2 is a revision of the Battelle Developmental Inventory (BDI). Changes from the BDI include a reduced number of subdomains (with items from removed subdomains repositioned), an expanded normative sample and normative tables, the inclusion of a student workbook for all items requiring a response from the child, revised basal and ceiling rules (a uniform three consecutive 2-point responses for a basal and three consecutive 0-point responses for a ceiling), and improved visual stimulus materials. The BDI-2 is primarily designed for use by preschool, kindergarten, and primary school teachers, although many other professionals may also find the BDI-2 useful in measuring the functional abilities of young children. The author finds that reliability data are strong, and validity data indicate moderate correlations with other established tests. Although the Screening Test consists of the items that had the highest correlations with total test score, separate reliability and validity studies would have been helpful to fully determine its adequacy and appropriate use. Strengths of the BDI-2 include the adaptive method of administration, coverage of a wide range of skills, and child-friendly materials. The manipulatives are inviting for children and make it quite easy to engage them in the tasks. In general, the BDI-2 is an acceptable and useful assessment to administer.
In this article, the authors review the Wechsler Intelligence Scale for Children-Fourth Edition Spanish (WISC-IV Spanish), a Spanish translation and adaptation of the WISC-IV. The test was developed to measure the intellectual ability of Spanish-speaking children in the United States ages 6 years, 0 months, through 16 years, 11 months. These children are presumed to be learning English as a second language, as well as acculturating to the U.S. educational system. It is intended for testing children with no more than five consecutive years in the U.S. educational system (in which case the English-language version is recommended). The WISC-IV Spanish was designed to be individually administered by school psychologists, educational diagnosticians, clinical psychologists, or neuropsychologists who speak both English and Spanish. In addition, it is recommended for examiners who have experience in assessing Spanish-speaking children. The WISC-IV Spanish presents three essential changes to the WISC-IV to make the test more useful for examiners who serve Spanish-speaking children in the United States: (a) a verified Spanish translation of directions; (b) item modifications, ranging from exact item retention to completely new replacement items; and (c) special norms that provide percentiles (but not standard scores) drawn from Spanish-speaking U.S. children, adjusted for parental SES and/or years of schooling in the United States. The authors, although disappointed with some features, also appreciate that the WISC-IV Spanish is a significant advance over other, undocumented methods of assessment (e.g., informal translations, elimination of verbal items). Therefore, they conclude that as the school-age population of the United States is increasingly immigrant and Spanish speaking, examiners who serve this population will find the availability of a formally developed and clearly articulated Spanish language test of intellectual abilities to be a welcome tool. 
(Contains 1 figure.)
This article provides a review of the Wechsler Nonverbal Scale of Ability (WNV), a general cognitive ability assessment tool for individuals aged 4 years 0 months through 21 years 11 months with English-language and/or communicative limitations. The test targets a population whose performance on intelligence batteries might be compromised by standard verbal requirements. Specifically, it is intended for use with individuals from diverse backgrounds as well as those who have any type of language limitation due to, but not limited to, the presence of autistic disorders, developmental delays, instructional challenges, environmental challenges (i.e., lack of education), being deaf or hard of hearing, and language-based learning disabilities including speech impairment and selective mutism (Wechsler & Naglieri, 2006). The theoretical foundation of the Wechsler scale is based on the premise that there is a single factor called "g" that can and should be measured using multiple domains to yield a single measurement of general cognitive ability; these are not differing types of intelligence, but rather different ways of measuring "g" (Wechsler, 1958, as cited by Wechsler & Naglieri, 2006). Understanding this theoretical framework is important because Wechsler and Naglieri (2006) argued that, given that both verbal and performance tasks are equally valid indicators of general cognitive ability, the WNV is by default as legitimate and valid a means of measuring the "g" factor as any other test that chooses different indicators (e.g., verbal abilities). The WNV aligns with specific components or domains of Cattell-Horn-Carroll (CHC; Carroll, 1993) theory, but does not try to cover all CHC components.
This article presents a review of the Differential Ability Scales-Second Edition (DAS-II), an individually administered cognitive test battery designed to evaluate children ages 2 years 6 months to 17 years 11 months. It purports to measure a hierarchy of cognitive abilities, including broad abilities contributing to a single cognitive factor (g), clusters of skills (i.e., verbal ability, nonverbal reasoning, and spatial ability), and a variety of homogeneous diagnostic subtests. Designed both for classification and for identifying within-person strengths and needs, the DAS-II is theoretically based on a hierarchical view of mental abilities, representing a range of cognitive theories but with clear reference to the Cattell-Horn-Carroll theory. The DAS-II measures a range of types of ability, as opposed to reflecting one specific theory of human cognition. The test is designed to measure an individual's general conceptual and reasoning ability, along with specific and diverse abilities, to determine strengths and weaknesses in cognitive functioning. The DAS-II assesses the populations intended by the authors and has been extended to include younger and older children (based on ability) and children who have speech/language impairments or are deaf or hard of hearing. The DAS-II allows for quick administration and includes engaging materials that make it especially appealing to young children. Examiners may find the scoring procedures to be tedious; however, the computerized scoring assistant may help with this issue. Overall, the DAS-II may provide a user-friendly, time-efficient measure of general cognitive ability that proves useful in the context of a full psychoeducational battery of assessment measures.
"Conners 3rd Edition" is the most updated version of a series of measures for assessing attention deficit hyperactivity disorder (ADHD) and common comorbid problems/disorders in children and adolescents ranging from 6 to 18 years of age. Related problems that the test helps assess include executive dysfunction, learning problems, aggression, and problems with peer/family relations. Diagnostic criteria for the disruptive behavior disorders are also part of the scales. The test consists of self-report, parent, and teacher questionnaires and items that are based largely on the American Psychiatric Association's "Diagnostic and Statistical Manual, 4th Edition, Text Revision (DSM-IV-TR)" and principles of the "International Statistical Classification of Diseases and Health-Related Problems" (ICD). Theoretical foundations based on specific features of ADHD include emotional, social, cognitive, behavioral, sensorimotor, adaptive functioning, and treatment aspects. The test authors focused on clear identification of these features and especially noted positive impacts of ADHD, such as enthusiasm and creativity. (Contains 1 table.)
The Brief Academic Competence Evaluation Screening System (BACESS) is a multiphase universal screening measure designed to assist educators in the identification of students who are likely to experience learning difficulties in elementary school. This study evaluated the reliability and validity of the measure for this purpose. The BACESS was used in 25 elementary classrooms in Wisconsin, and the entire sample included 285 students. The phases of the BACESS were each found to be highly reliable for their respective numbers of items. Internal structure evidence indicated that the phases functioned well together. The BACESS was found to share good concurrent validity with achievement test proficiency, approaching 0.70 on Bayesian conditional probability analysis. Receiver Operating Characteristic (ROC) analysis supported the use of the BACESS, incorporating different cutoff rules in different academic environments. Feedback via an evaluation survey indicated teacher opinion that the information gained from using the BACESS was valuable. (Contains 1 figure and 5 tables.)
The Young Adult Social Behavior Scale was developed for the purpose of measuring self-reported relational and social aggression and behaviors of interpersonal maturity in adolescents and young adults (the sample included 629 university students; 66% female; 91.6% White). Despite previous research suggesting that relational and social aggression comprise a single construct, there is emerging evidence that indirect, social, and relational aggression are, in fact, separate constructs. In accordance with this more recent research, in this study, confirmatory factor analysis supports that the Young Adult Social Behavior Scale measures three internally consistent constructs: relationally aggressive behaviors, socially aggressive behaviors, and interpersonally mature behaviors. (Contains 5 tables and 1 figure.)
Problem solving is a key component of weight loss programs. The Social Problem Solving Inventory-Revised (SPSI-R) has not been evaluated in weight loss studies. The purpose of this study was to evaluate the psychometrics of the SPSI-R. Cronbach's alpha (.95 for total score; .67 to .92 for subscales) confirmed internal consistency reliability. The SPSI-R score was significantly associated (ps < .05) with decreased eating barriers and binge eating, increased self-efficacy in following a cholesterol-lowering diet, consumption of fewer calories and fat grams, more frequent exercise, lower psychological distress, and higher mental quality of life, all suggesting concurrent validity with other instruments used in weight loss studies. However, confirmatory factor analysis indicated that the hypothesized 5-factor structure did not fit the data well (χ² = 350, p < .001).
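Cronbach's alpha, the internal consistency coefficient reported above, can be computed directly from raw item scores. The following is a minimal illustrative sketch (the function name and data are hypothetical, not taken from the study):

```python
from statistics import pvariance

# Hedged sketch of Cronbach's alpha for k items:
#   alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)

def cronbach_alpha(items: list[list[float]]) -> float:
    """items: one list of respondent scores per item (all the same length)."""
    k = len(items)
    item_var_sum = sum(pvariance(scores) for scores in items)
    totals = [sum(resp) for resp in zip(*items)]  # each respondent's total
    return k / (k - 1) * (1 - item_var_sum / pvariance(totals))

# Three perfectly consistent items yield alpha = 1.0
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```

Values near 1 indicate that the items vary together, which is the sense in which the abstract's .95 total-score alpha supports internal consistency reliability.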
The construct validity of the Australian version of the Multidimensional School Anger Inventory–Revised (MSAI-R) was examined using exploratory factor analysis (EFA), Rasch analysis, and confirmatory factor analysis (CFA) on a sample of 1,400 Australian students enrolled in Years 8 through 12. The EFA revealed a strong replication of the MSAI-R's internal consistency and structure with the original four MSAI-R factors identified: Anger Experience, Hostility, Destructive Expression, and Positive Coping. However, the Rasch analysis revealed that some additional items could be added to the destructive anger expression and positive coping subscales to enhance their reliability. A CFA was also conducted to compare the four-factor and a one-factor model. The results indicated a good model fit for the four-factor solution apart from one index. It is concluded that the results lend further support to the MSAI-R's construct validity and support its use as a general survey and evaluation research instrument.
This article presents a review of the "Process Assessment of the Learner-Second Edition" (PAL-II), an individual- or group-administered instrument designed to assess the cognitive processes involved in academic tasks in kindergarten through sixth grade. The instrument allows the examiner to identify reasons for underachievement and connect these deficiencies to interventions. The PAL-II contains two separate assessments: (1) one for reading and writing (PAL-II RW); and (2) one for math (PAL-II M). The PAL-II is based on the research of the author, Dr. Virginia Wise Berninger, and is an extension of the PAL, which was published in 2001 (Berninger, 2001). The PAL-II is an expanded version of the PAL and shows the growth of Berninger's research on the individual cognitive processes that interact with both instructional materials and instructional pedagogy. Improvements to the PAL-II include the creation of additional subtests, modification of previous subtests, and the addition of a User's Guide in CD format. It also has been made easier to score, includes composite and scaled scores, has a reformatted Stimulus Book, and has improved psychometrics. Overall, the PAL-II is a useful tool for both clinical and research purposes. The PAL-II was designed to be used as part of the three-tier model for intervention and assessment. School professionals are encouraged to use the PAL-II as a screener, as an assessment to help diagnose a specific learning disability, and as a resource for interventions, particularly in reading. Although the whole reading or math test may be too long for many research studies, subtests could be valuable in gaining a better understanding of learning processes in reading and math.
In future editions of the PAL-II, improvements to the development of test items and subtests are recommended to improve the reliability of weaker subtests, and further development of the math test, both in the research base and intervention materials, would be valuable to both researchers and school practitioners.
The level of object permanence was assessed for 16 infants (age 7 months) using three types of stimuli. A familiar object (a toy brought from home), novel objects, and a significant object (the baby's bottle) were used to examine how each object type would affect the infants' levels of search. Results indicated that when presented with a novel object, 7-month-old infants, as a group, would exhibit significantly higher levels of search behavior than they would for either the significant or the familiar stimuli. The babies' bottles elicited significantly higher levels of search from all infants than did a familiar object. In three cases it was observed that the significant object elicited a higher level of search than did novel objects. Definitions of novelty and significance are examined. Implications of the findings in terms of assessment are discussed.
Explanatory style is a cognitive personality variable with diverse correlates reflecting good versus bad adaptation. It is usually measured with the Attributional Style Questionnaire (ASQ), but existing versions of this instrument can be difficult for research participants to complete without close supervision. We describe a new version of the ASQ and its use in a mail survey of 146 college students. Results support its efficiency, reliability, and validity. A satisfactory response rate of 70% was attained. Very few items were omitted among the questionnaires returned (1.3%). Subscale reliabilities were satisfactory (alphas > .70), and the new ASQ correlated with reports of depressive symptoms (rs > .28), suggesting its appropriateness for general use with adults, including survey research.
The purpose of this study was to explore the relationship of three subskills associated with word decoding. The skills utilized for this study were phonological, rapid automatized naming (RAN), and orthographic processing. To do this, six separate models were utilized to define different ways that these three subskills (represented as factors) related to one another, with the goal of finding which model provided the best prediction of word decoding. A sample of 100 subjects from the PAIRW normative sample was used for this study. Results of structural equation modeling, utilizing the AMOS 4.0 program, revealed that using all three subskills concurrently provided the best-fitting model. Contrary to previous research, orthographic, rather than phonological, processing skills were found to be the best predictor of word decoding. RAN was found to be the second best predictor, but only indirectly through the Phonological and Orthographic factors. Moreover, when RAN was utilized as a predictor of orthographic and phonological processing, it provided a better-fitting model than when orthographic and phonological processing were used as predictors of RAN. Utilizing RAN as a predictor of both phonological and orthographic processing was found to provide a better-fitting model than when RAN was used to predict either the Phonological or Orthographic factor alone. The relevance of utilizing all three subskills in psychoeducational assessment is discussed, as well as implications for future research.
Self and others' perceptions of victimization, bullying, and academic competence were examined in relation to self-reported anxiety, depression, anger, and global self-worth in a non-clinical sample of second- and third-grade children. Previous studies document links between negative emotions and self-perceptions that are less favorable than others' perceptions. However, the current study suggests that the impact of discrepant self-other-perceptions (in bullying, victimization, and academic competence) on emotions is complex, sometimes involving interactions between perceptions of self and other informants. (Contains 4 tables and 3 figures.)
The Comprehensive Trail Making Test (CTMT) is designed to be used in neuropsychological assessment for the purposes of detecting effects of brain defects and deficits and in tracking progress in rehabilitation. More specific purposes include the detection of frontal lobe deficits, problems with psychomotor speed, visual search and sequencing, attention, and impairments in set-shifting. Trail-making tasks used as measures of brain function did not originate with this particular instrument. The original trail-making instrument, the Trail Making Test, Parts A and B, was developed in 1938 by Partington as a measure of divided attention (see Partington & Leiter, 1949). It has been found to be a very useful measure of brain function; however, it does have its shortcomings. Normative tables for the original are insufficient and not representative of the current U.S. population. It is also thought to be too brief and too general. The CTMT was developed to overcome these limitations. The CTMT is made up of a standardized set of five "visual search and sequencing tasks" that are influenced by attention, concentration, resistance to distraction, and cognitive flexibility. These tasks are referred to as trail making. The test is standardized for use with individuals aged 11 through 74. This article discusses the technical adequacy of CTMT.
The present study compared IQs and Verbal-Performance IQ discrepancies estimated from two seven-subtest short forms of the Wechsler Adult Intelligence Scale-Revised (WAIS-R) in a sample of 100 subjects referred for neuropsychological assessment. The short forms of Warrington, James, and Maciejewski (1986) and Ward (1990) yielded similar correlation coefficients and absolute error rates with respect to WAIS-R IQs, although the Warrington short form requires more time to administer and score. Both short forms were able to detect significant Verbal-Performance IQ discrepancies 70% of the time. However, they incorrectly yielded significant discrepancies for approximately 25% of the sample who did not have significant differences on the full WAIS-R. The results do not support reporting and interpreting significant Verbal-Performance IQ discrepancies estimated from these short forms.
This study asked how basic cognitive processes are related to math abilities and how children at risk for developing math learning difficulties can best be identified. The role of four distinct basic processes in the development of early mathematics was investigated: executive functions, fluid intelligence, subitizing, and language. The counting skills of 115 five- and six-year-old children were also assessed. The results showed that both executive functions and number sense were important factors in children's development of counting skills. Both executive functions and subitizing explained a significant part of the variance in children's counting skills. IQ scores did not explain additional variance in early math. The implication of this study is that it seems promising to use the concept of executive functions for the early identification of children at risk for math learning difficulties.
The Behavior Rating Inventory of Executive Function-Self-Report version (BRIEF-SR) is the first self-report measure of executive functioning for adolescents. With the Individuals With Disabilities Education Improvement Act authorization, there is a greater need for appropriate assessment of severely impaired children. Recent studies have demonstrated the importance of executive functioning as a component of a complete evaluation (D'Amato, Fletcher-Janzen, & Reynolds, 2005). The BRIEF-SR aids in the diagnosis and treatment of problems related to executive functioning. Due to the brief nature of the form, it can be administered without adding significant time to the assessment process and should take about 15 minutes to complete. In fact, the self-report nature of this measure allows for the form to be completed away from a typical testing setting. The structure of the test allows for the collection of valuable information in a short period of time. This article provides a general description of BRIEF-SR, its technical adequacy, and an evaluation of the test.
The psychometric properties of the Dutch Teacher's Report Form (TRF) for teachers of Unaccompanied Refugee Minors (URM) were evaluated in this study. The teachers (n = 486) who participated received a Dutch TRF to report on the mental health of the unaccompanied minor. Hierarchical confirmatory factor analysis and individual confirmatory factor analyses support the a priori structure of the Dutch TRF. However, the Thought Problems subscale could not be verified in this study, suggesting that some of the problem behavior reported by teachers of URM differs from that of parent reports, or that the item constellation of the Dutch TRF is different for teachers of URM. The total Internalizing and Externalizing scales show good internal consistency. The construct and concurrent validity of the Dutch TRF were found to be acceptable. The results suggest that the Dutch TRF is a reliable and valid instrument to assess emotional and behavior problems of URM.
The Rosenberg Self-Esteem Scale was administered with a 1–4, 1–5, or 0–100 response scale to 819 participants to compare score interpretations across the different versions. A rating scale utility analysis revealed that the categories of the 101-point scale were used inconsistently; based on this analysis, adjacent categories were collapsed, resulting in a 7-point scale with psychometric properties almost identical to those of the original. Interpretations based on the 101-point scale could lead to misinterpretations when compared with the 4- and 5-point versions.
According to (inter)national policy and curriculum documents, the acquisition of research skills is an important objective of secondary education. However, the conceptualization, and hence the operationalization, of this concept seems ambiguous. Furthermore, no test exists to assess students' proficiency in (a broad range of) research skills in an 11th- and 12th-grade behavioral sciences classroom context. This article first elaborates on what constitutes research skills in this educational context. Second, the development and testing process of the Leuven Research Skills Test (LRST) is described. Third, the psychometric properties and the dimensional structure of the LRST are presented, based on a large-scale sample (n = 405) of Belgian students in 11th and 12th grade. The results revealed that (a) the LRST is an internally consistent instrument and that (b) a hierarchical model with eight subordinate factors and a single uniting upper-level factor appears to be the best fit to the data (in comparison with a unidimensional model and an eight-factor multidimensional model). It is argued that the LRST can be used to assess (individual differences in) overall research skills proficiency and to investigate the effect of particular interventions to foster research skills in future studies.
This study validated the four mathematics tests of the Spanish version of the Woodcock-Johnson III (WJ-III) Achievement (ACH) battery for use in the first six grades of school in Spain. Developmental effects and gender differences were also examined. Participants were a normal population sample of 424 (216 boys) children aged 6 to 13 years. Results showed that the tests have good test-retest and internal reliability and good construct and criterion-related validity. Significant main effects of schooling were obtained with scores increasing across the six school grades, but scores between fourth and fifth graders did not differ significantly. Overall, boys scored higher than girls on all tests but the effect sizes of these gender differences were small (d <= .12).
In the Response to Intervention framework, a psychometrically sound screening tool is essential for identifying children at emotional and behavioral risk. The purpose of this study was to examine the validity of the Pediatric Symptom Checklist–17 (PSC-17) screener in school-based settings. Forty-four teachers rated 738 preschoolers using the PSC-17; the children were later assessed with the long-form Behavior Assessment System for Children, Second Edition (BASC-2) Preschool form or the Achenbach System of Empirically Based Assessment (ASEBA) Caregiver–Teacher Report Form to identify emotional and behavioral disorders. Validity evidence, including examinations of a multilevel factor structure, internal consistency, and criterion-related validity, supported the conclusion that the PSC-17 is a high-quality universal screening tool in school-based settings. Finally, for identifying emotional and behavioral risk in young children, receiver operating characteristic curve analyses with the PSC-17 yielded a lower cutoff score (i.e., 7) than the original cutoff score (i.e., 15), which was based on a clinical sample.
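The kind of ROC-based cutoff selection described above can be sketched as follows. This is an illustrative example only: the scores and diagnoses are synthetic, and Youden's J is one common (assumed, not article-specified) rule for picking the cutoff that balances sensitivity and specificity.

```python
# Illustrative sketch (synthetic data, not the study's): deriving a
# screening cutoff from a ROC analysis, as is commonly done for
# instruments like the PSC-17.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)

# Hypothetical PSC-17-style total scores (0-34) for 700 children,
# ~10% of whom have a criterion diagnosis and tend to score higher.
diagnosis = rng.random(700) < 0.10
scores = np.where(
    diagnosis,
    rng.normal(14, 5, 700),   # diagnosed children score higher on average
    rng.normal(5, 4, 700),
).clip(0, 34).round()

# Each candidate cutoff trades sensitivity (tpr) against specificity (1 - fpr).
fpr, tpr, thresholds = roc_curve(diagnosis, scores)

# Youden's J = sensitivity + specificity - 1; its maximum marks the cutoff
# that best balances the two error rates.
j = tpr - fpr
best_idx = np.argmax(j)
print(f"cutoff={thresholds[best_idx]:.0f} "
      f"sensitivity={tpr[best_idx]:.2f} specificity={1 - fpr[best_idx]:.2f}")
```

With a clinical (high-prevalence, high-severity) sample, the same procedure tends to land on a higher cutoff than in a community sample, which is consistent with the lower school-based cutoff the study reports.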
This study investigated the effect of item position on the descriptive statistics, psychometric information, and factor structure of the Pediatric Symptom Checklist, a 17-item social-emotional screening instrument (PSC-17). The goal was to determine whether item position, either grouped by factor or mixed across constructs, produced similar results. Descriptive statistics, reliability estimates, and model-data fit were similar across the two versions of the screener. Factorial invariance tests supported strict invariance across the two versions, and differences between the latent means of the three factors measured by the PSC-17 were very small. Both forms are therefore equivalent for use in screening activities.
This study analyzes the dimensionality, reliability, metric invariance, and convergent validity of the Schoolwork Engagement Inventory (SEI) in secondary education. Participants were 679 students in compulsory and post-compulsory secondary education in a large city in eastern Spain during the 2014-2015 academic year. Confirmatory factor analysis (CFA) showed that the one-factor model of the SEI is superior to the alternative models considered, providing an overall schoolwork engagement score for this educational stage. Reliability was adequate, and the instrument was found to be invariant across gender and educational level. Latent mean comparisons revealed a significant decline in schoolwork engagement throughout compulsory secondary education, with a return to higher levels in post-compulsory secondary education. A further CFA supported the SEI's convergent validity with self-regulated learning, given its direct relationship with metacognitive strategies and self-efficacy for learning and its inverse relationship with test anxiety.
We tested the reliability and construct validity of a Chinese translation of the adolescent Self-Report of Personality form of the Behavior Assessment System for Children, Third Edition, intended for 11- to 21-year-olds (BASC-3-SRP-A). The BASC-3-SRP-A yields 16 subscales that form four composite scales. Data were obtained from 444 12- to 14-year-olds (Group 1; girls: 59.0%) and 759 15- to 18-year-olds (Group 2; girls: 69.2%). For both groups, Cronbach's alphas were higher than .70 for most of the 16 subscales and higher than .83 for the four composite scales. Confirmatory factor analysis (CFA) using Mplus 8 on the combined sample supported the construct validity of the Chinese translation of the BASC-3-SRP-A. The findings support the use of the BASC-3-SRP-A among Chinese youth.
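The internal-consistency index reported above, Cronbach's alpha, can be computed directly from an item-score matrix. The sketch below uses synthetic data (not the study's) to show the formula: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).

```python
# Illustrative sketch with synthetic data: Cronbach's alpha, the
# internal-consistency coefficient reported for the BASC-3-SRP-A scales.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 8-item subscale: responses driven by one common factor
# plus noise, so the items hang together and alpha comes out high.
rng = np.random.default_rng(1)
factor = rng.normal(size=(500, 1))
items = factor + 0.8 * rng.normal(size=(500, 8))
print(round(cronbach_alpha(items), 2))
```

Scales with alphas above .70 (subscales) and .83 (composites), as reported for both age groups, are conventionally considered acceptable to good for research use.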
A cross-national study was conducted on a new test anxiety measure, the Test Anxiety Measure for College Students-Short Form (TAMC-SF), in a sample of 1,023 Singapore and U.S. students aged 18-26. The TAMC-SF consists of one facilitating anxiety scale and five test anxiety scales (Worry, Cognitive Interference, Social Concerns, Physiological Hyperarousal, and Task Irrelevant Behaviors). The measure was administered to the sample of higher education students online. Single-group confirmatory factor analyses supported the six-factor TAMC-SF model for Singapore students, U.S. students, male students, and female students. In addition, multi-group mean and covariance structure analyses supported the construct equivalence of TAMC-SF scores across country and gender. Subsequent latent mean analyses indicated that Singapore students had significantly higher levels of social concerns and significantly lower levels of cognitive interference and worry than U.S. students, and that females scored significantly higher than males on all five TAMC-SF test anxiety scales. Evidence supporting the construct validity of TAMC-SF scores against measures of math anxiety, social phobia, and self-critical perfectionism was also reported. Implications of the study's findings for researchers and clinicians are discussed.
We examined students' perceptions of mattering during the pandemic in relation to in-person versus online learning in a sample of 6,578 Canadian students in Grades 4-12. Elementary school students who attended school in person reported mattering the most, followed by secondary school students who learned part-time in person and the rest of the time online (the blended learning group). The students who felt that they mattered the least were those who learned online full-time during the pandemic (both elementary and secondary students). These results were not driven by a selection effect for school choice during the pandemic: our experimental design showed that students' perceptions of mattering did not differ by current learning modality when they were asked to reflect on their experiences before the pandemic, even though some were learning online full-time at the time they responded to our questions. No gender differences were found. As a validity check, we examined whether mattering was correlated with school climate, as it has been in past research. Consistent with that work, a modest association between mattering and positive school climate was found in both experimental conditions. The results of this brief study suggest that in-person learning helps convey to students that they matter. This is important to know because students who feel that they matter are more protected, resilient, and engaged. Accordingly, mattering is a key educational indicator that ought to be considered when weighing the merits of remote learning.
The factor structure of the Teacher–Child Rating Scale (T-CRS 2.1) was examined using confirmatory factor analysis (CFA). A cross-sectional study was carried out on 68,497 children in prekindergarten through Grade 10. Items were reduced on the basis of modification indices, standardized residual covariances, and standardized factor loadings. A higher-order model with a general superordinate factor fit the data well and is consistent with the notion of a unidimensional, non-cognitive set of learning-related skills. Model-based reliability estimates are provided.