Article

Computer usage questionnaire: Structure, correlates, and gender differences

Abstract

Computer usage, computer experience, computer familiarity, and computer anxiety are often discussed as constructs potentially compromising computer-based ability assessment. After presenting and discussing these constructs and associated measures, we introduce a brief new questionnaire assessing computer usage. The self-report measure consists of 18 questions asking about the frequency of different computer activities and software use. Participants were N = 976 high school students who completed the questionnaire and several covariate measures. Based on theoretical considerations and data-driven adjustments, a model with a general computer usage factor and three nested content factors (Office, Internet, and Games) was established in a subsample (n = 379) and cross-validated with the remaining sample (n = 597). Weak measurement invariance across gender groups could be established using multi-group confirmatory factor analysis. Differential relations between the questionnaire factors and self-report scales of computer usage, self-concept, and evaluation are reported separately for females and males. It is concluded that computer usage is distinct from other behavior-oriented measurement approaches and that it shows a diverging, gender-specific pattern of relations with fluid and crystallized intelligence.
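The nested-factor model mentioned in the abstract can be written generically as a bifactor-style measurement equation; this is a sketch of the standard parameterization, not necessarily the paper's exact specification:

```latex
x_{ij} = \lambda^{g}_{j}\, g_i + \lambda^{s}_{j}\, s_{c(j),i} + \varepsilon_{ij},
\qquad
\operatorname{Cov}\!\left(g, s_{c}\right) = 0
\quad \text{for } c \in \{\text{Office}, \text{Internet}, \text{Games}\},
```

where x_ij is person i's response to item j, g is the general computer usage factor, and s_c(j) is the nested content factor of item j's domain. Weak (metric) measurement invariance then corresponds to constraining the loadings λ to equality across gender groups while leaving intercepts and residual variances free.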

... Therefore, the assumptions of SCT seem to offer a suitable framework in the ICT domain, suggesting that motivational ICT characteristics are important for the acquisition and development of ICT literacy. In contrast to related constructs such as computer usage (e.g., computer usage questionnaire: Schroeders & Wilhelm, 2011; computer use for leisure: OECD, 2011), which are based on inductively derived scales, the ICT motivation scale is based on a psychological model that differentiates between specific usage motives of digital media use in the sense of stable dispositions (LaRose & Eastin, 2004). Thus, it can be hypothesized that the theoretically derived items and subscales are less influenced by constantly evolving digital technologies and software products (with more and more activities being carried out using computers) than constructs such as computer usage (see Schroeders & Wilhelm, 2011). ...
... For example, in studies involving older samples (adults ranging in age from 20 to over 70 years), only two ICT usage motive factors were identified because the hedonic and social interaction motive factors could not be separated empirically (e.g., Kalmus et al., 2011;Senkbeil & Ihme, 2017). Thus, it cannot be ruled out that younger or older persons use digital media in different ways and for other purposes (Schroeders & Wilhelm, 2011). Similarly, since the ICT motivation scale was administered only in the German national extension study, the results cannot be generalized to young adolescents in other countries. ...
Article
Full-text available
Although motivational factors play an important role in the development of information and communication technologies (ICT) literacy, only a few of the studies that have assessed ICT literacy have also presented a theoretically derived conceptualization of ICT motivation. To address this issue, we examined the psychometric properties of a newly developed ICT motivation scale that distinguishes between several incentives to use ICT. Using data from the International Computer and Information Literacy Study (ICILS) 2013, confirmatory factor analysis confirmed the hypothesized higher-order factor structure and at least metric measurement invariance with regard to gender and social background. The scale showed convergent and discriminant validity with ICT behavior, ICT literacy, and social background variables. Furthermore, it incrementally predicted ICT literacy over and above intelligence and general interest in ICT.
... This result supports the assumption of the U&G approach that ICT-related activities based on instrumental versus hedonic or ritualistic (such as social interaction) usage motivations differ in their cognitive involvement, that is, in mental processes of attention, recognition, and elaboration. Additionally, the moderate to large correlations between the ICT motivation dimensions and NFC (|0.23| ≤ r ≤ |0.51|) indicate that personality variables should be considered more closely in future research, in particular traits related to mental effort and achievement (e.g., need for achievement; Schroeders & Wilhelm, 2011). ...
... For example, in studies involving older samples (adults ranging in age from 20 to over 70 years) only two ICT usage motive factors were identified because the hedonic and social interaction motive factors could not be separated empirically (e.g., Kalmus et al., 2011; Senkbeil & Ihme, online first). Thus, it cannot be ruled out that younger or older persons will use digital media in different ways and for other purposes (Schroeders & Wilhelm, 2011). Second, a limitation can be seen in the cross-sectional character of this study, limiting any causal interpretations. ...
... Third, in this study only desktop computers (including laptops and notebooks) were considered. Because they can influence or change ICT usage motivations, future studies should also consider mobile devices (e.g., smartphones) which are increasingly used for online activities (e.g., reading newspaper, social interaction; Schroeders & Wilhelm, 2011). ...
Article
The ability to use information and communication technologies (ICT) is essential for private and vocational participation in society. Although motivational facets play an important role in developing ICT knowledge and skills, only a few studies assessing computer-related knowledge and skills present a comprehensive concept of these motivational facets. This article addresses this issue by presenting the construction and first validation of an ICT motivation inventory based on social cognitive theory. The focus of this newly developed concept is to predict computer-related knowledge and skills. Its theoretically deduced dimensions of three ICT-related usage motives (instrumental, hedonic, and social interaction), ICT-related self-efficacy, and ICT-related self-regulation were analyzed in a study assessing N = 323 German students between 16 and 27 years of age. Confirmatory factor analyses supported the assumed dimensional structure in the form of a hierarchical five-factor model. The reliability and construct validity of the measure were explored. As expected, the ICT motivation inventory dimensions were related to individual differences in ICT literacy and relevant person characteristics (social background, need for cognition).
... Compared with the U&G approach or related constructs such as computer usage (e.g., Computer Usage Questionnaire, CUQ; Schroeders & Wilhelm, 2011), whose scales are largely developed inductively, the questionnaire also offers the advantage of a psychological grounding. Accordingly, the incentive factors derived from SCT should ...
... the questionnaire also offers the advantage of a psychological grounding. Accordingly, the incentive factors and questionnaire items derived from SCT should be comparatively invariant to technological developments, or at least less prone than, for example, the CUQ to becoming outdated relatively quickly (cf. Schroeders & Wilhelm, 2011). Possible limitations of the present study concern two aspects: First, future studies should take greater account of the increasing use of mobile devices (e.g., tablets, smartphones), since they can influence or change both usage situations and usage motives (van Eimeren & Frees, 2012). Second ...
... Against the background of hedonic and instrumental incentive factors, it therefore seems advisable to consider, more than before, personality traits related to effort and achievement motivation, such as the need for achievement (cf. Schroeders & Wilhelm, 2011). ...
Article
Abstract. This article presents the construction and first validation of a short questionnaire that assesses computer-related incentive factors in adults on the basis of the social cognitive theory of Internet use. The questionnaire is designed for use in large-scale studies as an outcome variable and for predicting computer usage and computer-related skills. The results of a study conducted within the German National Educational Panel Study (N = 462) show that the proposed model for assessing computer-related incentive factors is empirically supported and that the questionnaire has good psychometric properties. Moreover, partial measurement invariance across gender and age could be established. Aspects of construct validity were examined via relations with computer-related person characteristics (e.g., skills) and personality traits (e.g., need for cognition).
... Computer usage. Participants worked on a modified version of the Computer Usage Questionnaire (Schroeders & Wilhelm, 2011) to assess the frequency of everyday computer activities and the use of certain programs (e.g., "How often do you use an Internet browser?"). All questions were rated on a five-point scale (1 = "never" to 5 = "very often"). ...
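Scoring a frequency questionnaire like the one described above is straightforward: map each verbal response to its scale value and average within facets. The sketch below is illustrative only; the item names and the item-to-facet key are hypothetical, not the published CUQ scoring key.

```python
# Verbal frequency labels mapped to the 1-5 response scale.
SCALE = {"never": 1, "rarely": 2, "sometimes": 3, "often": 4, "very often": 5}

# Hypothetical item-to-facet key (NOT the published CUQ key).
FACETS = {
    "office": ["word_processor", "spreadsheet", "presentation"],
    "internet": ["browser", "email"],
    "games": ["action_games", "puzzle_games"],
}

def facet_means(responses):
    """responses: dict mapping item name -> verbal frequency label.

    Returns the mean score per facet, or None if no facet item was answered.
    """
    means = {}
    for facet, items in FACETS.items():
        scores = [SCALE[responses[item]] for item in items if item in responses]
        means[facet] = sum(scores) / len(scores) if scores else None
    return means
```

For example, a respondent who uses a browser "very often" and email "often" gets an Internet facet mean of 4.5, while unanswered facets stay None.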
... Information for the health-related questions was derived from several text books and exams regarding the vocational education and training of medical assistants. We again used the revised version of the CUQ (Schroeders & Wilhelm, 2011) as a measure of computer usage. ...
Article
The ability to comprehend new information is closely related to the successful acquisition of new knowledge. With the ubiquitous availability of the Internet, the procurement of information online constitutes a key aspect in education, work, and our leisure time. In order to investigate individual differences in digital literacy, test-takers were presented with health-related comprehension problems with task-specific time restrictions. Instead of reading a given text, they were instructed to search the Internet for the information required to answer the questions. We investigated the relationship between this newly developed test and fluid and crystallized intelligence, while controlling for computer usage, in two studies with adults (n1 = 120) and vocational high school students (n2 = 171). Structural equation modeling was used to investigate the amount of unique variance explained by each predictor. In both studies, about 85% of the variance in the digital literacy factor could be explained by reasoning and knowledge, while computer usage did not add to the variance explained. In Study 2, prior health-related knowledge was included as a predictor instead of general knowledge. While the influence of fluid intelligence remained significant, prior knowledge strongly influenced digital literacy (β = .81). Together both predictor variables explained digital literacy exhaustively. These findings are in line with the view that knowledge is a major determinant of higher-level cognition. Further implications about the influence of the restrictiveness of the testing environment are discussed.
... From a conceptual point of view, the unidimensional conceptualization is critical, as it does not account for the specific goals of teachers' job performance facilitating teaching and students' learning (Niederhauser & Perkmen, 2010). Since the specific purposes of using ICT for teaching and the specific goals to foster students' learning are multifaceted (e.g., using ICT for assessment, collaboration, feedback, skill development; Proctor & Marks, 2013;Schroeders & Wilhelm, 2011;Terzis & Economides, 2011), one may also conceptualize perceived usefulness as multidimensional. In other words, teachers' perceptions on whether ICT could improve their job performance may not only refer to the use of ICT in general (unidimensionality) but rather to the usefulness of ICT for specific teaching and learning purposes in classrooms (multidimensionality). ...
... Since the use of ICT is multifaceted (e.g., Schroeders & Wilhelm, 2011), ICILS 2013 used a measure that covered multiple aspects such as the use of ICT for assessment and feedback (2 items; ω = .65, α = .69), collaboration among students (4 items; ω = .75, α = .79), ...
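The α values in the excerpt above are Cronbach's alpha, computed as α = k/(k−1) · (1 − Σσ²_item / σ²_total). A minimal standard-library sketch of this formula (illustrative, not the ICILS analysis code):

```python
def sample_variance(xs):
    """Unbiased sample variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    items: one list of scores per item, all of equal length
    (one score per person); total scores are summed per person.
    """
    k = len(items)
    totals = [sum(person) for person in zip(*items)]
    item_var = sum(sample_variance(it) for it in items)
    return k / (k - 1) * (1.0 - item_var / sample_variance(totals))
```

Two perfectly parallel items yield α = 1.0; as item intercorrelations drop, the summed total's variance shrinks relative to the item variances and α falls.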
Article
Studies on teachers' acceptance and use of information and communication technology (ICT) have revealed perceived usefulness to be a crucial determinant for integrating ICT in classrooms. In consequence, the present study focuses on teachers' perceived usefulness of ICT for teaching and learning and is aimed at describing its structure and relations to self-efficacy, ICT use, and teachers' age. By means of Bayesian analysis, we specified confirmatory factor-analytic and structural equation models to a large-scale data set of N = 1190 Norwegian teachers. Our results supported the hypothesized four-factor structure of teachers' perceived usefulness of ICT, signifying different facets of ICT-related teaching goals in classrooms. Moreover, it was possible to disentangle general and specific components of the construct in nested factor models. In support of existing research, we found positive relations to self-efficacy and ICT use, but a negative relation to teachers' age. Our study provides evidence on a multidimensional conceptualization of teachers' perceived usefulness of ICT for teaching and learning, and verifies the relations to teacher-related characteristics. Implications for the measurement and modeling of the construct, and future research directions are discussed.
... To assess the potential effect of prior technology skills on the use of CoEd, parents and professionals completed the Computer Usage Questionnaire [56], a self-report measure consisting of 18 questions asking about the frequency (5-point scale ranging from never to very often) of different computer activities and software use. ...
Article
Full-text available
Background: An individual education plan (IEP) is a key element in the support of the schooling of children with special educational needs or disabilities. The IEP process requires effective communication and strong partnership between families, school staff, and health care practitioners. However, these stakeholders often report their collaboration as limited and difficult to maintain, leading to difficulties in implementing and monitoring the child's IEP.
Objective: This paper aims to describe the study protocol used to evaluate a technological tool (CoEd application) aimed at fostering communication and collaboration between family, school, and health care in the context of inclusive education.
Methods: This protocol describes a longitudinal, nonrandomized controlled trial with baseline, 3-month, and 6-month follow-up assessments. The intervention consisted of using the web-based CoEd application for 3 to 6 months. This application is composed of a child's file in which stakeholders of the support team can share information about the child's profile, skills, aids and adaptations, and daily events. The control group is asked to function as usual to support the child in inclusive settings. To be eligible, a support team had to be composed of at least two stakeholders, including at least one of the parents. Additionally, the pupil had to be aged between 10 and 16 years, enrolled in secondary school, taught in mainstream settings, and have an established or ongoing diagnosis of autism spectrum disorder, attention-deficit/hyperactivity disorder, or intellectual disability (IQ < 70). Primary outcome measures cover stakeholders' relationships, self-efficacy, and attitudes toward inclusive education, while secondary outcome measures relate to stakeholders' burden and quality of life, as well as children's school well-being and quality of life. We plan to analyze data using ANCOVA to investigate pre-post and group effects, with a technological skills questionnaire as the covariate.
Results: After screening for eligibility, 157 participants were recruited in 37 support teams, each composed of at least one parent and one professional (school, health care). In September 2023, after the baseline assessment, the remaining 127 participants were allocated to the CoEd intervention (13 teams; n = 82) or the control condition (11 teams; n = 45).
Conclusions: We expect that the CoEd application will improve the quality of interpersonal relationships in children's IEP teams (research question [RQ] 1), will show benefits for the child (RQ2), and will improve the well-being of the child and the stakeholders (RQ3). Thanks to the participatory design, we also expect that the CoEd application will elicit a good user experience (RQ4). The results of this study could have several implications for educational technology research, as it is the first to investigate the impacts of a technological tool on co-educational processes.
International Registered Report Identifier (IRRID): DERR1-10.2196/63378
... Since using our software requires prior computer and web knowledge, we hypothesized that participants' potential inability to use our system could result in user frustration, affecting their perception of user satisfaction and thus their SUS questionnaire results. Therefore, we used the Computer Usage Questionnaire (CUQ) [32] to capture participants' computer skills for consideration when collecting SUS results, in order to ensure their reliability. The CUQ is rated on a 5-point scale (never, rarely, sometimes, often, and very often) and asked participants about their frequency of usage of computer applications (e.g. ...
Chapter
Usability studies are a crucial part of developing user-centered designs and they can be conducted using a variety of different methods. Unmoderated usability surveys are more efficient and cost-effective and lend themselves better to larger participant pools in comparison to moderated usability surveys. However, unmoderated usability surveys could increase the collection of unreliable data due to the survey participants’ careless responding (CR). In this study, we compared the remote moderated and remote unmoderated usability testing sessions for a web-based simulation and modeling software. The usability study was conducted with 72 participants who were randomly assigned into a moderated and unmoderated groups. Our results show that moderated sessions produced more reliable data in most of the tested outcomes and that the data from unmoderated sessions needed some optimization in order to filter out unreliable data. We discuss methods to isolate unreliable data and recommend ways of managing it.
... Before the experiments started, each subject was instructed to assess his or her computer-related skills by filling out the Computer Usage Questionnaire (CUQ) (Schroeders and Wilhelm, 2011). For the statistical analysis, the Friedman test was performed to compare the patterns of computer usage between subjects. ...
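The Friedman test mentioned above ranks each subject's values across conditions and tests whether the rank sums differ. A minimal standard-library sketch of the test statistic (illustrative, not the study's analysis code; in practice one would use scipy.stats.friedmanchisquare):

```python
def average_ranks(row):
    """Rank values within one subject's row (1-based), averaging tied ranks."""
    order = sorted(range(len(row)), key=lambda i: row[i])
    ranks = [0.0] * len(row)
    i = 0
    while i < len(row):
        j = i
        # Extend j over a run of tied values.
        while j + 1 < len(row) and row[order[j + 1]] == row[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied 1-based ranks
        for t in range(i, j + 1):
            ranks[order[t]] = avg
        i = j + 1
    return ranks

def friedman_statistic(data):
    """Friedman chi-square statistic (no tie correction).

    data: one row per subject, one column per condition.
    """
    n, k = len(data), len(data[0])
    rank_sums = [0.0] * k
    for row in data:
        for j, r in enumerate(average_ranks(row)):
            rank_sums[j] += r
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)
```

When every subject orders the conditions identically, the statistic reaches its maximum of n(k−1); it is referred to a chi-square distribution with k−1 degrees of freedom.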
Article
Full-text available
Advanced man-machine interfaces (MMIs) are being developed for teleoperating robots at remote and hardly accessible places. Such MMIs make use of a virtual environment and can therefore make the operator immerse him-/herself into the environment of the robot. In this paper, we present our developed MMI for multi-robot control. Our MMI can adapt to changes in task load and task engagement online. Applying our approach of embedded Brain Reading we improve user support and efficiency of interaction. The level of task engagement was inferred from the single-trial detectability of P300-related brain activity that was naturally evoked during interaction. With our approach no secondary task is needed to measure task load. It is based on research results on the single-stimulus paradigm, distribution of brain resources and its effect on the P300 event-related component. It further considers effects of the modulation caused by a delayed reaction time on the P300 component evoked by complex responses to task-relevant messages. We prove our concept using single-trial based machine learning analysis, analysis of averaged event-related potentials and behavioral analysis. As main results we show (1) a significant improvement of runtime needed to perform the interaction tasks compared to a setting in which all subjects could easily perform the tasks. We show that (2) the single-trial detectability of the event-related potential P300 can be used to measure the changes in task load and task engagement during complex interaction while also being sensitive to the level of experience of the operator and (3) can be used to adapt the MMI individually to the different needs of users without increasing total workload. Our online adaptation of the proposed MMI is based on a continuous supervision of the operator's cognitive resources by means of embedded Brain Reading. Operators with different qualifications or capabilities receive only as many tasks as they can perform to avoid mental overload as well as mental underload.
... Towards Computers questionnaire (Jay & Willis, 1992), modified so that every reference to "computer" was replaced with "video game", was given to assess attitudes toward and experience with video games. This was followed by the Computer Usage Questionnaire, assessing a participant's use of various computer software (Schroeders & Wilhelm, 2011). The Positive and Negative Affect Schedule (PANAS) assessed levels of positive and negative affect over the participant's previous 24 hours (Watson, Clark, & Tellegen, 1988). ...
Thesis
Full-text available
For many, listening to music is an enjoyable experience pursued throughout one’s lifetime. Nearly 200 years of music psychology research has revealed the various ways music listening can impact human emotional states, as well as cognitive and motor performance. Music in video games has come a long way from the first chiptunes of 1978 to the full scores written specifically for games today. However, very little is understood about how background game music impacts game performance, behavior, and experience. Even less is known about how music variables might affect performance, behavior, and experience as a function of individual differences, such as personality type. In this study, 78 participants scoring in the top 30% for their age range on either extraversion or introversion played a cognitive-training game in four music conditions (silence, low tempo, medium tempo, and high tempo). Performance, game play behavior, and flow experience scores were analyzed for each music condition by level of extraversion. While no statistically significant differences were found in game performance scores by level of extraversion, there were statistically significant differences found for play behavior (physical mouse motions) and flow experience across the music conditions. These results suggest that music can both alter the nature of physical game inputs and also provide a more engaging game experience, while not necessarily impacting one’s ability to perform in a game.
... There has been great emphasis on students' digital competence and several frameworks have proposed this competence as being multifaceted, distinguishing between different facets such as accessing, evaluating, sharing & communicating information (Ferrari, 2013). Moreover, research has focused on teachers' ICT integration for teaching and learning in the context of technology acceptance (Davis, 1989; Teo, 2011), measuring teachers' ICT integration on the basis of frequency reports (e.g., Donnelly et al., 2011; Schroeders & Wilhelm, 2011). These frequency reports most often referred to general descriptions of the software being used (e.g., How often do you use word processing programs in the classroom?) ...
Conference Paper
Full-text available
Teachers’ integration of information and communication technology (ICT) has been widely studied, given that digital competence has become crucial in 21st century education. In this context, teachers’ ICT integration is mostly represented by quantitative measures describing the frequency of ICT use in classrooms without examining the degree to which digital information and communication skills are emphasized. Consequently, the present study investigates teachers’ emphasis on developing students’ digital skills, focusing on accessing, evaluating, sharing & communicating digital information. The aim of our study is to validate an assessment of the construct with respect to its factor structure, relations to other constructs (e.g., teachers’ ICT self-efficacy, ICT use), and the differences between gender and main subject groups. We used a representative sample of 1,072 Norwegian teachers that participated in the International Computer and Information Literacy Study (ICILS) in 2013. We show that teachers’ emphasis: (a) comprises three correlated factors which are identified by exploratory structural equation modeling; (b) is positively related to teachers’ ICT self-efficacy and the frequency of ICT use; (c) differs across teachers’ main subject but not across gender groups. Our results provide strong evidence on the construct validity and point out the importance of teachers’ emphasis on fostering students’ digital skills in 21st century classrooms.
... Furthermore, researchers focused on the analysis of the relationships between CPS and covariates. In many studies, constructs such as intelligence, prior knowledge, motivation, self-concept, and computer familiarity were predictors of CPS performance (e.g., Bühner, Kröner & Ziegler, 2008; Funke & Frensch, 2007; Hambrick, 2005; Lee et al., 1996; Schoppek & Putz-Osterloh, 2003; Schroeders & Wilhelm, 2011). Additionally, Köller et al. (2006) argued that self-concept, school grades in prior classes, and participation in advanced courses influenced performance in competence tests. ...
Article
The ability to solve complex and real-life problems is one of the key competencies in science education. Different studies analyzed the relationships between complex problem solving (CPS) and covariates such as intelligence, prior knowledge, and motivational constructs on a manifest level. Additionally, research findings indicate that intelligence and prior knowledge are substantial predictors of CPS. Due to the interconnections between covariates, the relationships between CPS and covariates are quite complex. Therefore, we propose a model which describes these relations by taking direct and indirect effects into account. All analyses are based on structural equation modeling. Results show that the proposed model represents the data with substantial goodness-of-fit statistics and explanation of variance. Intelligence, domain-specific prior knowledge, computer familiarity, and attendance in advanced chemistry courses are direct predictors of CPS, while interest and scientific self-concept show indirect effects.
... In contrast, the influence of computer familiarity might be due to the computer-based assessment procedure (e.g., Schroeders & Wilhelm, 2011a). We argue that interactivity and complexity are the key features of the virtual laboratory and require a certain level of computer usage. Additionally, a meaningful application of prior knowledge within a virtual problem-solving environment improves students' performance on problem-solving tasks. ...
Article
Full-text available
The ability to solve complex scientific problems is regarded as one of the key competencies in science education. Until now, research on problem solving focused on the relationship between analytical and complex problem solving, but rarely took into account the structure of problem-solving processes and metacognitive aspects. This paper, therefore, presents a theoretical framework which describes the relationship between the components of problem solving and strategy knowledge. In order to assess the constructs, we developed a virtual environment which allows students to solve interactive and static problems. 162 students of grade 10 and the upper secondary level completed the tests within a cross-sectional survey. In order to investigate the structure of problem-solving competency, we established measurement models representing different theoretical assumptions and evaluated model fit statistics by using confirmatory factor analyses. Results show that problem-solving competency in virtual environments comprises three correlated abilities: achieving a goal state, systematic handling of variables, and solving analytical tasks. Furthermore, our study provides empirical evidence on the distinction between analytical and complex problem solving. Additionally, we found significant differences between students of grades 10 and 12 within the problem-solving subscales, which could be explained by gaming experience and prior knowledge. These findings are discussed from a measurement perspective. Implications for assessing complex problem solving are given.
Article
Full-text available
We are moving toward the age of ambient intelligence, in which contactless devices are widely applied to recognize human states. This study aims at designing critical motion features to build artificial intelligence (AI) models for identifying user activities in front of the computer. Eight participants were recruited to perform four daily computer activities, including playing games, surfing the web, typing words, and watching videos. While performing the experimental tasks, the participants’ upper bodies were videotaped, and the recorded videos were processed to obtain four designed features, comprising (1) the eye-opening size, (2) the mouth-opening size, (3) the number of optical-fence pixels, and (4) the standard deviation of optical-fence pixels. After feature importance confirmation, these motion features were used to establish three recurrent neural network (RNN) models using a simple RNN, a gated recurrent unit (GRU), and long short-term memory (LSTM). The comparison of the model predictions showed that the GRU model had the best performance (accuracy = 76%), compared to the simple RNN model (accuracy = 59%) and the LSTM model (accuracy = 70%). This study showed that the four tested computer activities had significant effects on the four designed features, and hence the features can be applied to build AI models for recognizing activities in front of a computer. Limitations are discussed to direct future studies in extending the methodology to other applications.
Conference Paper
The purpose of this paper is to describe expert systems for mental resources assessment using different methods of self-evaluation of the hierarchical structure of individuality: nomothetic, ideographic, and ideo-dynamic diagnostics. These expert systems were designed on the basis of the INT-Test Design Software. The nomothetic method requires a large sample of participants and permits us to obtain only an averaged, statistical pattern, not the structure of an individual person's mental resources. The ideographic method is the study of a single person; in our case it is a modification of the nomothetic method, extending the rank scale with additional qualitative estimates (an open scale method). The ideo-dynamic method describes the internal organization of a single person's mental resources through a pairwise comparison procedure. The data obtained on the same person in the different expert systems revealed the highest level of test-retest reliability for the ideo-dynamic method and the lowest for the nomothetic method.
Article
Working safely and successfully in the highly automated human-machine interfaces of future aviation is not only a matter of performance, but also of personality. This study examines which personality aspects correlate with safety-critical performance in human-machine teams. The research tools HTQ (Hybrid Team Questionnaire) and HINT (Hybrid Interaction Scenario) were combined for a comprehensive exploratory study. The HTQ includes personality scales measuring broad factors of personality (Big Five) as well as more specific scales, and was supplemented with objective personality assessments to measure risk taking. The simulation tool HINT simulates relevant processes in future human-machine team interaction in aviation. In a study with 156 applicants for aviation careers, safety-critical relations were found for some facets of general personality as well as for risk taking. In particular, personality aspects concerning disinhibited, spontaneous behaviour and sensation seeking correlate with poorer performance in the HINT simulation.
Conference Paper
Research indicates that the facial expressions of animated characters and agents can influence people's perceptions and interactions with these entities. We designed an experiment to examine how an interactive animated avatar's facial expressiveness influences dyadic conversations between adults and the avatar. We animated the avatar in realtime using the tracked facial motion of a confederate. To adjust facial expressiveness, we damped and exaggerated the avatar's facial motion. We found that ratings of the avatar's extroversion were positively related to its expressiveness. However, impressions of the avatar's realism and naturalness worsened with increased expressiveness. We also found that the confederate was more influential when she appeared as the damped or exaggerated avatar. Adjusting the expressiveness of interactive animated avatars may be a simple way to influence people's social judgments and willingness to collaborate with animated avatars. These results have implications for using avatar facial expressiveness to improve the effectiveness of avatars in various contexts.
Article
Full-text available
What makes someone "good with technology," or else "technologically illiterate"? The study of implicit theories has previously demonstrated that people develop ideas about the malleability of their own abilities in a number of domains, including intelligence, athletics, gaming, and using technology. Some think of their abilities as fixed and unchangeable (entity theorists), while others think of them as adaptable through work and/or experience (incremental theorists). These beliefs influence people's goal outcomes. In this study, we examine implicit theories as a possible contributor to people's performance using modern technologies. We use an adapted instrument to measure implicit theories of technology, followed by an ecologically valid technological task. People with incremental theories showed better performance than those with entity theories. We discuss implications of this research for practitioners, as well as avenues for future research.
Chapter
Full-text available
In keeping with the goals of this volume, we explore the various uses and advantages of mean and covariance structures (MACS) models for examining the effects of ecological/contextual influences in developmental research. After addressing critical measurement and estimation issues in MACS modeling, we discuss their uses in two general ways. First, we focus primarily on discrete ecological factors as grouping factors for examining main-effect as well as moderating or interactive influences. Second, we briefly discuss the simplest case (including ecological factors as within-group direct and mediated effects) because these types of effects are covered in more detail elsewhere in this volume (see Little, Card, Bovaird, Preacher, & Crandell, chap. 9, this volume; see also McKinnon, in press). Our focus in this chapter will be on how such effects might be moderated by the discrete contextual factor(s) used to define groups. Discrete ecological factors can be conceptualized at various levels, from macrosystems such as sociocultural contexts to exosystem structures such as neighborhoods and communities. Other discrete ecological factors such as developmental level, ethnicity, and gender are particularly amenable to study using such models.
Article
Full-text available
The purpose of this study was to systematically develop an instrument to measure computer aversion, computer attitudes, and computer familiarity. The study is an extension of previous research (Schulenberg, 2002). Development involved item generation, pilot testing, and exploratory and confirmatory factor analyses. The measure was administered to psychology students drawn from two universities (N = 854; N = 400, respectively). The three hypothesized factors emerged, as well as an additional computer aversion factor. The measure possesses good content validity and factorial validity, as well as solid internal consistency reliability. Implications of this study, considerations, and directions for future research are discussed.
Article
Full-text available
Computer‐based tests administered in established commercial testing centers typically have used monitors of uniform size running at a set resolution. Web‐based delivery of tests promises to greatly expand access, but at the price of less standardization in equipment. The current study evaluated the effects of variations in screen size, screen resolution, and presentation delay on verbal and mathematics scores in a sample of 357 college‐bound high school juniors. The students were randomly assigned to one of six experimental conditions—three screen display conditions crossed with two presentation rate conditions. The three display conditions were: a 17‐inch monitor set to a resolution of 1024 × 768, a 17‐inch monitor set to a resolution of 640 × 480, and a 15‐inch monitor set to a resolution of 640 × 480. Items were presented either with no delay or with a five‐second delay between questions (to emulate a slow Internet connection). No significant effects on math scores were found. Verbal scores were higher, by about a quarter of a standard deviation (28 points on the SAT ® scale), with the high‐resolution display.
Article
Full-text available
Measurement invariance (MI) has been developed in a very technical language and manner that is generally not accessible to social and behavioral researchers and applied measurement specialists. Relying primarily on the widely known concepts of regression and linear statistical modeling, this paper decodes the concept of MI in the context of factor analysis. The paper begins by describing what MI (and a lack of MI) is and how the concept can be realized in the context of factor analysis. Next, we explain the need for modeling the mean and covariance structure (MACS), instead of the traditionally applied covariance structure, in detecting factorial invariance. Along the way, we address the related matter of statistically testing for MI using the chi-squared likelihood ratio test and fit indices in multi-group MACS confirmatory factor analysis. Bringing to bear current developments by Cheung and Rensvold (2002) and others, we provide an update on the practice of using change in fit statistics to test for MI. Throughout the paper we concretize our discussion, without loss of generality to other constructs and research settings, with an example of 21 cross-country MI comparisons of the 1999 TIMSS mathematics scores.
Article
Full-text available
Computer-based assessment (CBA) is yet to have a significant impact on high-stakes educational assessment, but the equivalence between CBA and paper-and-pencil (P&P) test scores will become a central concern in education as CBA increases. It is argued that as CBA and P&P tests provide test takers with qualitatively different experiences, the impact of individual differences on the testing experience, and so statistical equivalence of scores, needs to be considered. As studies of score equivalence have largely ignored individual differences such as computer experience, computer anxiety and computer attitudes, the purpose of this paper is to highlight the potential effects of these. It is concluded that each of these areas is of significance to the study of equivalence and that the often inconsistent findings result from the rapid changes in exposure to technology.
Article
Full-text available
In this study we report results from a meta-analysis of relationships between computer anxiety and its three correlates—age, gender, and computer experience. Only studies published between 1990 and 1996 were included in the analysis. Findings of this meta-analysis are: (1) female university undergraduates are generally more anxious than male undergraduates, but the strength of this relationship is not conclusive; (2) instruments measuring computer anxiety are generally reliable, but not compatible with one another; and (3) computer anxiety is inversely related to computer experience, but the strength of this relationship remains inconclusive. Limitations of the methodology and implications of the findings are discussed. Directions for future studies are suggested.
Article
Full-text available
A family of scaling corrections aimed to improve the chi-square approximation of goodness-of-fit test statistics in small samples, large models, and nonnormal data was proposed in Satorra and Bentler (1994). For structural equation models, Satorra-Bentler's (SB) scaling corrections are available in standard computer software. Often, however, the interest is not in the overall fit of a model, but in a test of the restrictions that a null model, say M0, imposes on a less restricted one, M1. If T0 and T1 denote the goodness-of-fit test statistics associated with M0 and M1, respectively, then typically the difference Td = T0 − T1 is used as a chi-square test statistic with degrees of freedom equal to the difference in the number of independent parameters estimated under the models M0 and M1. As in the case of the goodness-of-fit test, it is of interest to scale the statistic Td in order to improve its chi-square approximation in realistic, that is, nonasymptotic and nonnormal, applications. In a recent paper, Satorra (2000) shows that the difference between two SB scaled test statistics for overall model fit does not yield the correct SB scaled difference test statistic. Satorra developed an expression that permits scaling the difference test statistic, but his formula has some practical limitations, since it requires heavy computations that are not available in standard computer software. The purpose of the present paper is to provide an easy way to compute the scaled difference chi-square statistic from the scaled goodness-of-fit test statistics of models M0 and M1. A Monte Carlo study is provided to illustrate the performance of the competing statistics.
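The easy computation described above boils down to one scaling factor built from the two models' degrees of freedom and correction factors. A minimal sketch of the resulting Satorra-Bentler formula, with illustrative function and argument names:

```python
def scaled_diff_chi2(t0, df0, c0, t1, df1, c1):
    """Scaled difference chi-square for nested models M0 (more
    restricted) and M1 (less restricted).

    t0, t1   -- uncorrected ML chi-square statistics of M0 and M1
    c0, c1   -- scaling correction factors of M0 and M1
    df0, df1 -- degrees of freedom, with df0 > df1
    Returns the scaled difference statistic and its degrees of freedom."""
    cd = (df0 * c0 - df1 * c1) / (df0 - df1)  # scaling factor for the difference
    return (t0 - t1) / cd, df0 - df1

# With no correction (c0 = c1 = 1) the plain chi-square difference is recovered.
td, df = scaled_diff_chi2(30.0, 10, 1.0, 20.0, 8, 1.0)
```

The statistic is then referred to a chi-square distribution with `df0 - df1` degrees of freedom, just as an unscaled difference test would be.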
Article
Full-text available
The 23 factors previously identified as representing primary mental abilities and 8 factors previously defined as general personality dimensions were factored, using a sample of 297 adults, to provide evidence for hypotheses stipulating that general visualization, fluency, and speediness functions, as well as fluid and crystallized intelligence functions, are involved in the performances commonly said to indicate intelligence. 9 principal axes factors were sufficient to account for the observed, generally positive, intercorrelations among the 31 primary factors. These were rotated blindly to oblique simple structure. The resulting structure was consistent with predictions based upon refinements of the general theory of fluid and crystallized intelligence. Positive manifold for the intercorrelations among the 2nd-order factors was interpreted as indicating a social fact of interdependence between intraperson and environmental influences determining behavioral attributes. (30 ref.)
Chapter
Because of the prevalence of both nonnormal and categorical data in empirical research, this chapter focuses on issues surrounding the use of data with these characteristics. Specifically, we review the assumptions underlying NT estimators. We describe nonnormal and categorical data and review robustness studies of the most popular NT estimator, maximum likelihood (ML), in order to understand the consequences of violating these assumptions. Most importantly, we discuss three popular strategies often used to accommodate nonnormal and/or categorical data in SEM: 1. Weighted least squares (WLS) estimation, 2. Satorra-Bentler (S-B) scaled χ² and robust standard errors, and 3. Robust diagonally weighted least squares (DWLS) estimation. For each strategy, we present the following: (a) a description of the strategy, (b) a summary of research concerning the robustness of the χ²-statistic, other fit indices, parameter estimates, and standard errors, and (c) a description of implementation across three software programs.
Article
A survey was conducted among 346 children from the 7th and 8th grade of 7 elementary schools to examine possible positive and negative effects of playing videogames. Analyses revealed that playing videogames did not appear to take place at the expense of children's other leisure activities, social integration, and school performance. A gender difference arose: Boys spent more time playing videogames than did girls. There was no significant relationship between the amount of time children spent on videogames and aggressive behavior. A negative relationship between time spent playing videogames and prosocial behavior was found; however, this relationship did not appear in separate analyses for boys and girls. Furthermore, a positive relationship was found between time spent on videogames and a child's intelligence.
Article
Model Notation, Covariances, and Path Analysis. Causality and Causal Models. Structural Equation Models with Observed Variables. The Consequences of Measurement Error. Measurement Models: The Relation Between Latent and Observed Variables. Confirmatory Factor Analysis. The General Model, Part I: Latent Variable and Measurement Models Combined. The General Model, Part II: Extensions. Appendices. Distribution Theory. References. Index.
Article
A method for automated parameter estimation and testing of fit of nonstandard models for mean vectors and covariance matrices is described. Nonlinear equality and inequality constraints on the parameters of the model are allowed for. All the user will need to provide are subroutines to evaluate the mean vector and covariance matrix according to the model and, if required, the constraint functions. Subroutines for derivatives need not be provided. Some applications are described.
Article
In addition to the potential that computer-based testing (CBT) offers, empirical evidence has found that identical computerized and paper-and-pencil tests have not produced equivalent test-taker performance. Referred to as the "mode effect," previous literature has identified many factors that may be responsible for such differential performance. The aim of this review was to explore these factors, which typically fit into two categories, participant and technological issues, and highlight their potential impact on performance.
Article
Measurement invariance is usually tested using multigroup confirmatory factor analysis, which examines the change in a goodness-of-fit index (GFI) when cross-group constraints are imposed on a measurement model. Although many studies have examined the properties of GFIs as indicators of overall model fit for single-group data, none to date have examined how GFIs change when between-group constraints are added to a measurement model. The lack of a consensus about what constitutes significant GFI differences places limits on measurement invariance testing. We examine 20 GFIs based on the minimum fit function. A simulation of the two-group situation was used to examine changes in the GFIs (ΔGFIs) when invariance constraints were added. Based on the results, we recommend using ΔCFI (comparative fit index), ΔGamma hat, and ΔMc (McDonald's Noncentrality Index) to evaluate measurement invariance. These three ΔGFIs are independent of both model complexity and sample size, and are not correlated with the overall fit measures. We propose critical values of these ΔGFIs that indicate measurement invariance.
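For ΔCFI, the criterion commonly attributed to this line of work is that invariance is retained when CFI drops by no more than .01 after constraints are added; that cutoff is an assumption here, since the abstract does not spell out the proposed critical values. A sketch:

```python
def invariance_supported(cfi_constrained, cfi_unconstrained, cutoff=-0.01):
    """Return True when the CFI change from adding cross-group
    constraints does not fall below the cutoff (assumed -.01)."""
    delta_cfi = cfi_constrained - cfi_unconstrained
    return delta_cfi >= cutoff

invariance_supported(0.955, 0.960)  # small drop: invariance retained
invariance_supported(0.930, 0.960)  # large drop: invariance rejected
```

The same pattern applies to ΔGamma hat and ΔMc, each with its own critical value.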
Article
Among the outstanding contributions of the book are (1) the judgments of the relative excellence of assorted tests in some 70 fields of accomplishment, by Kelley, Franzen, Freeman, McCall, Otis, Trabue and Van Wagenen; (2) detailed and exact information on the statistical and other characteristics of the same tests, based on a questionnaire addressed to the text authors or (in the absence of reply) estimates by Kelley on the best data available; (3) a chapter of 47 pages condensing all the principal elementary statistical methods. In addition, there is constant emphasis upon the importance of the probable error, with some illustrative applications; for example, it is maintained that about 90% of the abilities measured by our best "intelligence" and "achievement" tests are (due chiefly to the size of the probable errors) the same ability. A chapter sets forth the analytical procedures which lead to this conclusion and to four others earlier enunciated. "Idiosyncrasy," or inequality among abilities, which the author regards as highly valuable, is considered in two chapters; the remainder of the volume is devoted to a historical sketch of the mental test movement and a statement of the purposes of tests, the latter being illustrated by appropriate chapters.
Article
This article examines the adequacy of the “rules of thumb” conventional cutoff criteria and several new alternatives for various fit indexes used to evaluate model fit in practice. Using a 2‐index presentation strategy, which includes using the maximum likelihood (ML)‐based standardized root mean squared residual (SRMR) and supplementing it with either Tucker‐Lewis Index (TLI), Bollen's (1989) Fit Index (BL89), Relative Noncentrality Index (RNI), Comparative Fit Index (CFI), Gamma Hat, McDonald's Centrality Index (Mc), or root mean squared error of approximation (RMSEA), various combinations of cutoff values from selected ranges of cutoff criteria for the ML‐based SRMR and a given supplemental fit index were used to calculate rejection rates for various types of true‐population and misspecified models; that is, models with misspecified factor covariance(s) and models with misspecified factor loading(s). The results suggest that, for the ML method, a cutoff value close to .95 for TLI, BL89, CFI, RNI, and Gamma Hat; a cutoff value close to .90 for Mc; a cutoff value close to .08 for SRMR; and a cutoff value close to .06 for RMSEA are needed before we can conclude that there is a relatively good fit between the hypothesized model and the observed data. Furthermore, the 2‐index presentation strategy is required to reject reasonable proportions of various types of true‐population and misspecified models. Finally, using the proposed cutoff criteria, the ML‐based TLI, Mc, and RMSEA tend to overreject true‐population models at small sample size and thus are less preferable when sample size is small.
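The 2-index presentation strategy with the cutoffs stated above can be expressed directly; the function name and index labels are illustrative:

```python
def two_index_fit(srmr, supplement, value):
    """Hu-Bentler style 2-index check: the ML-based SRMR (cutoff .08)
    combined with one supplemental index at the cutoffs suggested in
    the article."""
    cutoffs = {
        "TLI": (">=", 0.95), "BL89": (">=", 0.95), "RNI": (">=", 0.95),
        "CFI": (">=", 0.95), "GammaHat": (">=", 0.95),
        "Mc": (">=", 0.90), "RMSEA": ("<=", 0.06),
    }
    direction, cut = cutoffs[supplement]
    supplement_ok = value >= cut if direction == ">=" else value <= cut
    return srmr <= 0.08 and supplement_ok

two_index_fit(0.05, "CFI", 0.96)    # both criteria met
two_index_fit(0.05, "RMSEA", 0.08)  # RMSEA above its cutoff
```

As the abstract cautions, these values are conventions that trade off rejection rates for true and misspecified models, not hard thresholds.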
Chapter
Analysis of Ordinal Categorical Data (Alan Agresti). Statistical science now has its first coordinated manual of methods for analyzing ordered categorical data. This book discusses specialized models that, unlike standard methods underlying nominal categorical data, efficiently use the information on ordering. It begins with an introduction to basic descriptive and inferential methods for categorical data, and then gives thorough coverage of the most current developments, such as loglinear and logit models for ordinal data. Special emphasis is placed on interpretation and application of methods, with an integrated comparison of the available strategies for analyzing ordinal data. This is a case study work with illuminating examples taken from across the wide spectrum of ordinal categorical applications. 1984 (0 471-89055-3) 287 pp.
Regression Diagnostics: Identifying Influential Data and Sources of Collinearity (David A. Belsley, Edwin Kuh, and Roy E. Welsch). This book provides the practicing statistician and econometrician with new tools for assessing the quality and reliability of regression estimates. Diagnostic techniques are developed that aid in the systematic location of data points that are either unusual or inordinately influential; measure the presence and intensity of collinear relations among the regression data and help to identify the variables involved in each; and pinpoint the estimated coefficients that are potentially most adversely affected. The primary emphasis of these contributions is on diagnostics, but suggestions for remedial action are given and illustrated. 1980 (0 471-05856-4) 292 pp.
Applied Regression Analysis, Second Edition (Norman Draper and Harry Smith). Featuring a significant expansion of material reflecting recent advances, here is a complete and up-to-date introduction to the fundamentals of regression analysis, focusing on understanding the latest concepts and applications of these methods. The authors thoroughly explore the fitting and checking of both linear and nonlinear regression models, using small or large data sets and pocket or high-speed computing equipment. Features added to this Second Edition include the practical implications of linear regression; the Durbin-Watson test for serial correlation; families of transformations; inverse, ridge, latent root, and robust regression; and nonlinear growth models. Includes many new exercises and worked examples.
Article
After reviewing past approaches towards measuring computer experience, the development and pilot test (N=279) of the Computer Understanding and Experience (CUE) Scale is described. Results suggest that the CUE Scale provides an internally consistent, self-report measure which may be subdivided into two related subscales. Support for the construct validity of the CUE Scale is also provided.
Article
The objective of this article is to present a review and discussion of scales and questionnaires developed to assess attitudes towards computers. Each review includes descriptions of the scale, scale development procedures, and reliability and validity testing. Also, general problems associated with the reliability and validity testing of the scales are presented along with an assessment of future directions of computer attitude research.
Article
The objective of this article is to present a review and discussion of scales and questionnaires developed to assess computer anxiety. Included are descriptions of the scales, scale development procedures, and reliability and validity testing. Research questions generated and examined with the scales are also included. Finally, problems with reliability and validity testing are presented along with an assessment of future directions of computer anxiety research.
Article
We describe mathematical and statistical models for factor invariance. We demonstrate that factor invariance is a condition of measurement invariance. In any study of change (as over age) measurement invariance is necessary for valid inference and interpretation. Two important forms of factorial invariance are distinguished: "configural" and "metric". Tests for factorial invariance and the range of tests from strong to weak are illustrated with multiple group factor and structural equation modeling analyses (with programs such as LISREL, COSAN, and RAM). The tests are for models of the organization and age changes of intellectual abilities. The models are derived from current theory of fluid (Gf) and crystallized (Gc) abilities. The models are made manifest with measurements of the WAIS-R in the standardization sample. Although this is a methodological paper, the key issues and major principles and conclusions are presented in basic English, devoid of technical details and obscure notation. Conceptual principles of multivariate methods of data analysis are presented in terms of substantive issues of importance for the science of the psychology of aging.
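The configural/metric distinction can be stated compactly in standard multiple-group factor notation (the symbols are generic, not taken from the article):

```latex
% Factor model for observed variables x in group g:
x^{(g)} = \tau^{(g)} + \Lambda^{(g)} \xi^{(g)} + \delta^{(g)}

% Configural invariance: the same pattern of fixed and free entries in
% \Lambda^{(g)} holds in every group, but the free values may differ.

% Metric (weak) invariance adds the equality constraint on loadings:
\Lambda^{(1)} = \Lambda^{(2)} = \dots = \Lambda^{(G)}

% Stronger forms additionally equate the intercepts \tau^{(g)}, which is
% what makes comparisons of latent means across groups interpretable.
```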
Wilhelm, O. (2009). Issues in computerized ability measurement: Getting out of the jingle and jangle jungle. In F. Scheuermann & J. Björnsson (Eds.), The transition to computer-based assessment (pp. 145–150). JRC Scientific and Technical Reports. <http://crell.jrc.it/RP/reporttransition.pdf> Retrieved 01.03.10.
Wilhelm, O., Schroeders, U., & Schipolowski, S. (2009). BEFKI. Berliner Test zur Erfassung fluider und kristalliner Intelligenz (Berlin test of fluid and crystallized intelligence). Unpublished manuscript.