## No full-text available

To read the full text of this research, you can request a copy directly from the authors.

Fluency tasks are among the most common item formats for the assessment of certain cognitive abilities, such as verbal fluency or divergent thinking. A typical approach to the psychometric modeling of such tasks (e.g., Intelligence, 2016, 57, 25) is the Rasch Poisson Counts Model (RPCM; Probabilistic models for some intelligence and attainment tests. Copenhagen: Danish Institute for Educational Research, 1960), in which, similarly to the assumption of (essential) τ-equivalence in Classical Test Theory, tasks have equal discriminations—meaning that, beyond varying in difficulty, they do not vary in how strongly they are related to the latent variable. In this research, we question this assumption in the case of divergent thinking tasks and propose instead a more flexible 2-Parameter Poisson Counts Model (2PPCM), which allows tasks to be characterized by both difficulty and discrimination. We further propose a Bifactor 2PPCM (B2PPCM) to account for local dependencies (i.e., specific/nuisance factors) emerging from tasks sharing similarities (e.g., similar prompts and domains). We reanalyze a divergent thinking dataset (Psychology of Aesthetics, Creativity, and the Arts, 2008, 2, 68) and find that the B2PPCM significantly outperforms the 2PPCM, with both outperforming the RPCM. Further extensions and applications of these models are discussed.
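The contrast between the RPCM and the 2PPCM can be made concrete with a small sketch (my own illustration, not the authors' code), assuming the usual log link: the expected count of person i on task j is exp(b_j + a_j * theta_i), and the RPCM constrains every discrimination a_j to 1.

```python
import math

def expected_count(theta, easiness, discrimination=1.0):
    """Expected count under a log-link Poisson IRT model:
    lambda = exp(easiness + discrimination * theta).
    The RPCM fixes discrimination at 1; the 2PPCM frees it per task."""
    return math.exp(easiness + discrimination * theta)

# Two tasks with equal easiness but different discriminations (2PPCM case):
# the more discriminating task separates low and high ability more strongly.
low, high = -1.0, 1.0
ratio_weak = expected_count(high, 0.0, 0.5) / expected_count(low, 0.0, 0.5)
ratio_strong = expected_count(high, 0.0, 2.0) / expected_count(low, 0.0, 2.0)
print(ratio_weak, ratio_strong)  # the strongly discriminating task yields a larger ratio
```

Under the RPCM constraint (discrimination = 1 everywhere), this ratio is the same for every task, which is exactly the assumption the abstract questions.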


... As the name suggests, it is a one-parameter IRT model for count data. Several different types of psychometric tests generate count data; in the context of IRT models, historically the most prominent are reading tests (Rasch, 1960; Verhelst & Kamphuis, 2009), in which reading errors are counted, but other examples include, but are not limited to, processing speed tasks (Baghaei, Ravand, & Nadri, 2019; Doebler & Holling, 2016), language tests in the form of C-tests (Forthmann, Grotjahn, Doebler, & Baghaei, 2020), intelligence tests (Ogasawara, 1996), verbal fluency tasks generally, and, relatedly, fluency measurement in divergent thinking tasks (Forthmann, Holling, Çelik, Storme, & Lubart, 2017; Forthmann, Çelik, Holling, Storme, & Lubart, 2018; Myszkowski & Storme, 2021). Additional examples are discussed in Baghaei and Doebler (2019) and Forthmann, Gühne, and Doebler (2020). ...

... A variety of different estimation methods and estimation-related extensions have been developed for the RPCM (e.g., Jansen, 1995, 1997; Jansen & van Duijn, 1992; Ogasawara, 1996; Verhelst & Kamphuis, 2009). For all of them, though, the RPCM assumes a test's items to be equally discriminant of the underlying latent ability, and as in the binary case, this assumption may well be violated by data from tests that have not been explicitly constructed to satisfy it (Myszkowski & Storme, 2021). Further, it might be of interest to examine and compare the importance of the items (Myszkowski & Storme, 2021). ...

... For all of them, though, the RPCM assumes a test's items to be equally discriminant of the underlying latent ability, and as in the binary case, this assumption may well be violated by data from tests that have not been explicitly constructed to satisfy it (Myszkowski & Storme, 2021). Further, it might be of interest to examine and compare the importance of the items (Myszkowski & Storme, 2021). For any test, it is at least desirable to be able to test that assumption. ...

Several psychometric tests generate count data, e.g., the number of ideas in divergent thinking tasks. The most prominent count data IRT model, the Rasch Poisson Counts Model (RPCM), assumes constant discriminations across items as well as the equidispersion assumption of the Poisson distribution (i.e., E(X) = Var(X)), considerably limiting modeling flexibility. Violations of these assumptions are associated with impaired ability, reliability, and standard error estimates. Models have been proposed to relax one or the other assumption. The Two-Parameter Poisson Counts Model (2PPCM) allows varying discriminations but retains the equidispersion assumption. The Conway-Maxwell-Poisson Counts Model (CMPCM) allows for modeling equi- but also over- and underdispersion (more or less variance than implied by the mean under the Poisson distribution) but assumes constant discriminations. The present work introduces the Two-Parameter Conway-Maxwell-Poisson (2PCMP) model, which generalizes the RPCM, the 2PPCM, and the CMPCM (all contained as special cases) to allow for varying discriminations and dispersions within one model. A marginal maximum likelihood method based on a fixed quadrature Expectation-Maximization (EM) algorithm is derived. Standard errors as well as two methods for latent ability estimation are provided. An implementation of the 2PCMP model in R and C++ is provided. Two simulation studies examine the model's statistical properties and compare the 2PCMP model to established methods. Data from divergent thinking tasks are re-analyzed with the 2PCMP model to illustrate the model's flexibility and ability to test assumptions of special cases.

... , M } for a test with M items) depending on latent abilities θ ∈ R^N and item parameters ζ_j. While recent advances have increased the applicability of count data IRT with several generalizations of established models (e.g., Beisemann, 2021; Forthmann, Gühne, & Doebler, 2020; Myszkowski & Storme, 2021), such work is limited to plain count IRT models and does not extend to explanatory IRT models. However, explanatory count IRT models, albeit having received comparatively little attention so far, play a crucial role in the investigation of sources for differences in item properties and latent abilities. ...

... Some subsequent work extended the RPCM while retaining the equidispersion assumption (Jansen, 1994; Jansen & van Duijn, 1992; Verhelst & Kamphuis, 2009), while others generalized the RPCM to allow for overdispersed conditional responses (i.e., the conditional variance exceeds the conditional mean; e.g., Hung, 2012; Mutz & Daniel, 2018). Other authors studied two-dimensional or multidimensional latent variables (Wedel, Böckenholt, & Kamakura, 2003; Forthmann, Çelik, Holling, Storme, & Lubart, 2018; Myszkowski & Storme, 2021) or replaced log-linearity with a sigmoid link function (Doebler, Doebler, & Holling, 2014). But for a long time, underdispersed conditional responses (i.e., the conditional variance is smaller than the conditional mean) could not be accounted for with count IRT models, despite empirical evidence, especially from real test data with highly structured test materials (Doebler & Holling, 2016; Forthmann, Gühne, & Doebler, 2020; Forthmann & Doebler, 2021). ...

In psychology and education, tests (e.g., reading tests) and self-reports (e.g., clinical questionnaires) generate counts, but corresponding Item Response Theory (IRT) methods are underdeveloped compared to those for binary data. Recent advances include the Two-Parameter Conway-Maxwell-Poisson model (2PCMPM), generalizing Rasch's Poisson Counts Model, with item-specific difficulty, discrimination, and dispersion parameters. Explaining differences in model parameters informs item construction and selection, but this has received little attention. We derive the item information in the 2PCMPM and introduce two 2PCMPM-based explanatory count IRT models: the Distributional Regression Test Model for item covariates, and the Count Latent Regression Model for person covariates. Estimation methods are provided and satisfactory statistical properties are observed in simulations. Two examples illustrate how the models help in understanding tests and the underlying constructs.

... An essential element of creative thinking is the capacity for idea generation, that is, coming up with a variety of original ideas in answer to a challenge. Previous studies have shown that techniques such as brainstorming, mind mapping, and SCAMPER can be used to encourage divergent thinking and generate novel solutions (Fauziah et al., 2020; Myszkowski & Storme, 2021). Creating solutions is the third stage of the CPS model. ...

In the 21st century, innovation and creativity are becoming increasingly important for success in both academic and professional settings. To promote innovation and creativity, it is essential to prioritize the development of problem-solving skills among learners. This study aims to analyze problem-solving skills in promoting innovation and creativity and provides a key theoretical model for developing these skills. The method used in this research is a Systematic Literature Review. Researchers collected journal articles from Google Scholar, ResearchGate, SINTA, Scopus, and Web of Science. The results show that problem-solving skills enable learners to analyze complex problems, develop creative solutions, and implement those solutions effectively. One key theoretical model that supports problem-solving skills is the Creative Problem Solving (CPS) model. The CPS model consists of six stages: understanding the problem, generating ideas, developing solutions, planning for action, taking action, and evaluating results. Prioritizing problem-solving skills among learners has been linked to academic achievement, success in the workforce, and higher levels of innovation and creativity.

... ηp² = .00. Since fluency measures tend to follow Poisson distributions (Myszkowski & Storme, 2021), we confirmed these results using a generalized linear model (a model for count data; see OSF files). ...

Emergency situations are generally described as combining both threat and time pressure. Creative solutions to deal with such situations are important. The present studies (total N = 1190) investigated how people are able to produce creative solutions in an emergency. Our first study was correlational, and assessed individual creativity and reactions to emergency situations using self-report questionnaires. It was complemented by three experimental studies. In those, critical features of emergency situations were manipulated (i.e., time pressure and/or threat level) to examine their putative impact on individual performance on creative tasks (Alternate Uses Task and Real Life Problem). Three dependent variables systematically qualified individuals’ creative performance: fluency (i.e., the number of ideas proposed), originality (i.e., the average rarity of the ideas proposed), and originality adjusted for fluency (i.e., the rarity of the most original idea proposed). Taken together, the results observed tend to indicate that increasing emergency (i.e., increasing time constraint or threat importance) produced an average reduction in the originality of the ideas proposed. These results complement previously obtained results about the effect of stressful situations on creativity through the distinction made in this paper between two key components of emergency situations, namely time pressure and threat level.

Several psychometric tests and self-reports generate count data (e.g., divergent thinking tasks). The most prominent count data item response theory model, the Rasch Poisson Counts Model (RPCM), is limited in applicability by two restrictive assumptions: equal item discriminations and equidispersion (conditional mean equal to conditional variance). Violations of these assumptions lead to impaired reliability and standard error estimates. Previous work generalized the RPCM but maintained some limitations. The two-parameter Poisson counts model allows for varying discriminations but retains the equidispersion assumption. The Conway-Maxwell-Poisson Counts Model allows for modelling over- and underdispersion (conditional mean less than and greater than conditional variance, respectively) but still assumes constant discriminations. The present work introduces the Two-Parameter Conway-Maxwell-Poisson (2PCMP) model which generalizes these three models to allow for varying discriminations and dispersions within one model, helping to better accommodate data from count data tests and self-reports. A marginal maximum likelihood method based on the EM algorithm is derived. An implementation of the 2PCMP model in R and C++ is provided. Two simulation studies examine the model's statistical properties and compare the 2PCMP model to established models. Data from divergent thinking tasks are reanalysed with the 2PCMP model to illustrate the model's flexibility and ability to test assumptions of special cases.
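The dispersion mechanism at the heart of the CMPCM/2PCMP family can be illustrated with the Conway-Maxwell-Poisson probability mass function, P(X = x) ∝ λ^x / (x!)^ν. The following sketch (an illustration built on these standard definitions, not code from the paper) computes a truncated pmf and shows how ν steers the variance relative to the mean.

```python
import math

def cmp_pmf(lam, nu, max_x=60):
    """Truncated Conway-Maxwell-Poisson probabilities: P(X=x) ∝ lam**x / (x!)**nu.
    nu = 1 recovers the Poisson; nu > 1 underdisperses, nu < 1 overdisperses."""
    weights = [lam**x / math.factorial(x)**nu for x in range(max_x + 1)]
    z = sum(weights)  # normalizing constant (truncated at max_x)
    return [w / z for w in weights]

def mean_var(pmf):
    m = sum(x * p for x, p in enumerate(pmf))
    v = sum((x - m) ** 2 * p for x, p in enumerate(pmf))
    return m, v

for nu in (0.5, 1.0, 2.0):
    m, v = mean_var(cmp_pmf(3.0, nu))
    print(f"nu={nu}: mean={m:.3f}, var={v:.3f}")
```

At ν = 1 the mean and variance coincide (equidispersion, the RPCM assumption); ν = 2 yields variance below the mean, ν = 0.5 variance above it, which is exactly the extra flexibility the abstract describes.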

Raven’s Standard Progressive Matrices (SPM) test and related matrix-based tests are widely applied measures of cognitive ability. Using Bayesian Item Response Theory (IRT) models, I reanalyzed data from an SPM short form proposed by Myszkowski and Storme (2018) and, at the same time, illustrate the application of these models. Results indicate that a three-parameter logistic (3PL) model is sufficient to describe participants’ dichotomous responses (correct vs. incorrect), while persons’ ability parameters are quite robust across IRT models of varying complexity. These conclusions are in line with the original results of Myszkowski and Storme (2018). Using Bayesian as opposed to frequentist IRT models offered advantages in the estimation of more complex (i.e., 3–4PL) IRT models and provided more sensible and robust uncertainty estimates.

Assessing job applicants' general mental ability online poses psychometric challenges due to the necessity of having brief but accurate tests. Recent research (Myszkowski & Storme, 2018) suggests that recovering distractor information through Nested Logit Models (NLM; Suh & Bolt, 2010) increases the reliability of ability estimates in reasoning matrix-type tests. In the present research, we extended this result to a different context (online intelligence testing for recruitment) and in a larger sample (N = 2949 job applicants). We found that the NLMs outperformed the Nominal Response Model (Bock, 1970) and provided significant reliability gains compared with their binary logistic counterparts. In line with previous research, the gain in reliability was especially obtained at low ability levels. Implications and practical recommendations are discussed.

Despite six decades of creative cognition research, measures of creative ideation have heavily relied on divergent thinking tasks, which still suffer from conceptual, design, and psychometric shortcomings. These shortcomings have greatly impeded the accurate study of creative ideation, its dynamics, development, and integration as part of a comprehensive psychological assessment. After a brief overview of the historical and current anchoring of creative ideation measurement, overlooked challenges in its most common operationalization (i.e., the divergent thinking tasks framework) are discussed. They include (1) the reliance on a single stimulus as a starting point of the creative ideation process (stimulus-dependency), (2) the analysis of response quality based on a varying number of observations across test-takers (fluency-dependency), and (3) the production of “static” cumulative performance indicators. Inspired by an emerging line of work from the field of cognitive neuroscience of creativity, this paper introduces a new assessment framework referred to as “Multi-Trial Creative Ideation” (MTCI). This framework shifts the current measurement paradigm by (1) offering a variety of stimuli presented in a well-defined set of ideation “trials,” (2) reinterpreting the concept of ideational fluency using a time-analysis of idea generation, and (3) capturing individual dynamics in the ideation process (e.g., modeling the effort-time required to reach a response of maximal uncommonness) while controlling for stimulus-specific sources of variation. Advantages of the MTCI framework over the classic divergent thinking paradigm are discussed in light of current directions in the field of creativity research.

Recent studies have highlighted both similarities and differences between the cognitive processing that underpins memory retrieval and that which underpins creative thinking. To date, studies have focused more heavily on the Alternative Uses task, but fewer studies have investigated the processing underpinning other idea generation tasks. This study examines both Alternative Uses and Consequences idea generation with methods pulled from cognitive psychology and a novel method for evaluating the creativity of such responses. Participants were recruited from Amazon Mechanical Turk using a custom interface allowing for requisite experimental control. Results showed that both Alternative Uses and Consequences generation are well approximated by an exponential cumulative response time model, consistent with studies of memory retrieval. Participants were also slower to generate their first consequence compared with first responses to Alternative Uses, but inter-response time was negatively related to pairwise similarity on both tasks. Finally, the serial order effect was exhibited for both tasks, with Consequences earning more creative evaluations than Uses. The results have implications for burgeoning neuroscience research on creative thinking, and suggestions are made for future areas of inquiry. In addition, the experimental apparatus described provides an equitable way for researchers to obtain good quality cognitive data for divergent thinking tasks.

The Rasch Poisson Counts Model is the oldest Rasch model, developed by the Danish mathematician Georg Rasch in 1952. Nevertheless, the model has had limited applications in psychoeducational assessment. With the rise of neurocognitive and psychomotor testing, there is more room for new applications of the model where other item response theory models cannot be applied. In this paper, we give a general introduction to the Rasch Poisson Counts Model and then, using data from an attention test, walk the reader through how to use the "lme4" package in R to estimate the model and interpret the outputs.
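The paper itself estimates the model with lme4 in R. As a language-neutral toy illustration (my own sketch, not the authors' approach), the joint maximum-likelihood solution of the multiplicative Poisson model λ_ij = θ_i * ε_j underlying the RPCM has a closed form in which the fitted cell means reproduce the observed row (person) and column (item) margins:

```python
# Toy joint-ML sketch for a multiplicative Poisson model (lambda_ij = theta_i * eps_j),
# the parameterization underlying the RPCM. This is NOT the lme4/glmer estimation the
# paper describes (which treats persons as random effects); it is the closed-form joint
# solution, in which fitted cell means reproduce the row and column sums.
counts = [
    [4, 7, 2],   # person 1: counts on three items (made-up data)
    [8, 12, 5],  # person 2
    [2, 3, 1],   # person 3
]
row_sums = [sum(row) for row in counts]
col_sums = [sum(col) for col in zip(*counts)]
total = sum(row_sums)

# Fitted mean for cell (i, j): r_i * c_j / total
fitted = [[r * c / total for c in col_sums] for r in row_sums]

# Parameters are identified only up to a scale constant; here the item
# parameters eps are fixed to sum to 1, so theta are person rate parameters.
eps = [c / total for c in col_sums]
theta = row_sums[:]
```

The margin-matching property is the Poisson analogue of the sufficiency of raw scores in the binary Rasch model, which is why row and column sums alone carry the parameter information here.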

The brms package implements Bayesian multilevel models in R using the probabilistic programming language Stan. A wide range of distributions and link functions are supported, allowing users to fit – among others – linear, robust linear, binomial, Poisson, survival, ordinal, zero-inflated, hurdle, and even non-linear models all in a multilevel context. Further modeling options include autocorrelation of the response variable, user defined covariance structures, censored data, as well as meta-analytic standard errors. Prior specifications are flexible and explicitly encourage users to apply prior distributions that actually reflect their beliefs. In addition, model fit can easily be assessed and compared with the Watanabe-Akaike information criterion and leave-one-out cross-validation.

Cronbach's alpha is the most widely used method for estimating internal consistency reliability. This procedure has proved very resistant to the passage of time, even though its limitations are well documented and better options exist, such as the omega coefficient or the different versions of the glb (greatest lower bound), with obvious advantages especially for applied research in which the items differ in quality or have skewed distributions. In this paper, using Monte Carlo simulation, the performance of these reliability coefficients under a one-dimensional model is evaluated in terms of skewness and non-tau-equivalence. The results show that the omega coefficient is always a better choice than alpha, and that in the presence of skewed items it is preferable to use the omega or glb coefficients, even in small samples.
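To make the coefficient under discussion concrete, here is a minimal computation of Cronbach's alpha from raw item scores (illustrative data and helper names are my own): α = k/(k−1) · (1 − Σ item variances / variance of total scores).

```python
# Minimal Cronbach's alpha from raw item scores. Illustrative data only;
# variances are population (ddof=0) variances.
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: list of item-score lists, one inner list per item,
    all of equal length (one score per respondent)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var_sum = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var_sum / variance(totals))

items = [
    [3, 4, 2, 5, 4],  # item 1 scores for five respondents
    [2, 4, 3, 5, 3],  # item 2
    [3, 5, 2, 4, 4],  # item 3
]
print(round(cronbach_alpha(items), 3))  # -> 0.871
```

Alpha equals the reliability only under (essential) tau-equivalence; when items differ in quality or are skewed, it is a biased estimate, which is the abstract's motivation for omega and the glb.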

A new family of item response theory models for count data, based on item characteristic curves (ICCs) of binary models, is presented. These models assume a Poisson distribution for the observed scores where the mean is given by the product of a speed parameter and an ICC, for example, the curve of the one- or two-parameter logistic model. Joint and marginal maximum likelihood parameter estimations are discussed and the proposed procedures are evaluated by computer simulation. As an application, item level data from a test measuring processing speed are analyzed and item fit and test information are explored.
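The model family described above sets the Poisson mean to the product of a speed parameter and a binary-model ICC. A brief sketch (hypothetical parameter values, not the authors' implementation), using the two-parameter logistic curve as the ICC:

```python
import math

def icc_2pl(theta, a, b):
    """Two-parameter logistic item characteristic curve."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def poisson_mean(speed, theta, a, b):
    """Poisson mean = speed parameter times the ICC, as in the model family above."""
    return speed * icc_2pl(theta, a, b)

# The expected count increases with ability but is bounded above by the speed
# parameter, since the ICC lies between 0 and 1.
print(poisson_mean(10.0, -2.0, 1.5, 0.0), poisson_mean(10.0, 2.0, 1.5, 0.0))
```

The bounded mean is the key difference from the log-link RPCM, whose expected count grows without limit in ability.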

Divergent thinking (DT) tests are among the most popular techniques for measuring creativity. However, the validity evidence for DT tests, as applied in educational settings, is inconsistent partly due to different scoring methods. This study explored the reliability and validity issues of various techniques for administering and scoring two DT tests. Results show distinct differences among several methods for scoring these DT tests and suggest that the percentage scoring method (i.e., dividing originality scores by fluency scores) may be the most appropriate scoring strategy. The potential impact on educational research and practice is discussed in detail.

This study examined the contributions of verbal ability and executive control to verbal fluency performance in older adults (n = 82). Verbal fluency was assessed in letter and category fluency tasks, and performance on these tasks was related to indicators of vocabulary size, lexical access speed, updating, and inhibition ability. In regression analyses the number of words produced in both fluency tasks was predicted by updating ability, and the speed of the first response was predicted by vocabulary size and, for category fluency only, lexical access speed. These results highlight the hybrid character of both fluency tasks, which may limit their usefulness for research and clinical purposes.

Maximum likelihood or restricted maximum likelihood (REML) estimates of the parameters in linear mixed-effects models can be determined using the lmer function in the lme4 package for R. As for most model-fitting functions in R, the model is described in an lmer call by a formula, in this case including both fixed- and random-effects terms. The formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be evaluated as a function of some of the model parameters. The appropriate criterion is optimized, using one of the constrained optimization functions in R, to provide the parameter estimates. We describe the structure of the model, the steps in evaluating the profiled deviance or REML criterion, and the structure of classes or types that represent such a model. Sufficient detail is included to allow specialization of these structures by users who wish to write functions to fit specialized linear mixed models, such as models incorporating pedigrees or smoothing splines, that are not easily expressible in the formula language used by lmer.

Structural equation modeling (SEM) is a vast field and widely used by many applied researchers in the social and behavioral sciences. Over the years, many software packages for structural equation modeling have been developed, both free and commercial. However, perhaps the best state-of-the-art software packages in this field are still closed-source and/or commercial. The R package lavaan has been developed to provide applied researchers, teachers, and statisticians a free, fully open-source, but commercial-quality package for latent variable modeling. This paper explains the aims behind the development of the package, gives an overview of its most important features, and provides some examples to illustrate how lavaan works in practice.

Comments on an article by Baer (see record 2011-26328-002). Baer challenged the validity of the Torrance Tests of Creative Thinking (TTCT), but the evidence that Baer cited is largely irrelevant, having—at best—a tangential connection to the TTCT. The TTCT should not be criticized with evidence derived from other divergent thinking tests. The author gives multiple examples proving the validity of the TTCT. (PsycINFO Database Record (c) 2012 APA, all rights reserved)

Creativity assessment commonly uses open-ended divergent thinking tasks. The typical methods for scoring these tasks (uniqueness scoring and subjective ratings) are time-intensive, however, so it is impractical for researchers to include divergent thinking as an ancillary construct. The present research evaluated snapshot scoring of divergent thinking tasks, in which the set of responses receives a single holistic rating. We compared snapshot scoring to top-two scoring, a time-intensive, detailed scoring method. A sample of college students (n=226) completed divergent thinking tasks and measures of personality and art expertise. Top-two scoring had larger effect sizes, but snapshot scoring performed well overall. Snapshot scoring thus appears promising as a quick and simple approach to assessing creativity.

Generalized linear item response theory is discussed, which is based on the following assumptions: (1) a distribution of the response occurs according to the given item format; (2) the item responses are explained by one continuous or nominal latent variable and p latent as well as observed variables that are continuous or nominal; (3) the responses to the different items of a test are independently distributed given the values of the explanatory variables; and (4) a monotone differentiable function g of the expected item response τ is needed such that a linear combination of the explanatory variables is a predictor of g(τ). It is shown that most of the well-known psychometric models are special cases of the generalized theory and that concepts such as differential item functioning, specific objectivity, reliability, and information can be subsumed under the generalized theory. (PsycINFO Database Record (c) 2012 APA, all rights reserved)

Divergent thinking is central to the study of individual differences in creativity, but the traditional scoring systems (assigning points for infrequent responses and summing the points) face well-known problems. After critically reviewing past scoring methods, this article describes a new approach to assessing divergent thinking and appraises its reliability and validity. In our new Top 2 scoring method, participants complete a divergent thinking task and then circle the 2 responses that they think are their most creative responses. Raters then evaluate the responses on a 5-point scale. Regarding reliability, a generalizability analysis showed that subjective ratings of unusual-uses tasks and instances tasks yield dependable scores with only 2 or 3 raters. Regarding validity, a latent-variable study (n=226) predicted divergent thinking from the Big Five factors and their higher-order traits (Plasticity and Stability). Over half of the variance in divergent thinking could be explained by dimensions of personality. The article presents instructions for measuring divergent thinking with the new method. (PsycINFO Database Record (c) 2012 APA, all rights reserved)

How well can people judge the creativity of their ideas? The distinction between generating ideas and evaluating ideas appears in many theories of creativity, but the massive literature on generation has overshadowed the question of evaluation. After critically reviewing the notion of accuracy in creativity judgments, this article explores whether (1) people in general are discerning and (2) whether some people are more discerning than others. University students (n = 226) completed four divergent thinking tasks and then decided which responses were their most creative. Judges then rated the creativity of all of the responses. Multilevel latent-variable models found that people's choices strongly agreed with judges' ratings of the responses; overall, people were discerning in their decisions. But some people were more discerning than others: people high in openness to experience, in particular, had stronger agreement between their decisions and the judges' ratings. Creative people are thus doubly skilled: they are better at generating good ideas and at picking their best ideas.

Organizations often look to their information systems (IS) professionals to work with system stakeholders to generate new ideas to solve complex problems and to provide information technology (IT) artifacts to support ideation processes. Much research therefore seeks to increase the number of ideas people generate based on Alex F. Osborn's conjecture that more ideas give rise to more good ideas. Recent research, however, calls the quantity—quality conjecture into question. This paper advances bounded ideation theory (BIT), an explanation for the ideation function—the relationship between the number of good ideas and the number of ideas contributed. BIT posits that boundaries of understanding, attention resources, goal congruence, mental and physical stamina, and the solution space moderate a primary relationship between individual ability and idea quality, yielding an ideation function with an inflected curve. We discuss six strategies for improving ideation and call into question the value of the quantity focus of ideation research in the IS/IT literature, arguing that a quality focus would be more useful.

This paper provides a survey on studies that analyze the macroeconomic effects of intellectual property rights (IPR). The first part of this paper introduces different patent policy instruments and reviews their effects on R&D and economic growth. This part also discusses the distortionary effects and distributional consequences of IPR protection as well as empirical evidence on the effects of patent rights. Then, the second part considers the international aspects of IPR protection. In summary, this paper draws the following conclusions from the literature. Firstly, different patent policy instruments have different effects on R&D and growth. Secondly, there is empirical evidence supporting a positive relationship between IPR protection and innovation, but the evidence is stronger for developed countries than for developing countries. Thirdly, the optimal level of IPR protection should tradeoff the social benefits of enhanced innovation against the social costs of multiple distortions and income inequality. Finally, in an open economy, achieving the globally optimal level of protection requires an international coordination (rather than the harmonization) of IPR protection.

In this paper we discuss a general model framework within which manifest variables with different distributions in the exponential family can be analyzed with a latent trait model. A unified maximum likelihood method for estimating the parameters of the generalized latent trait model will be presented. We discuss in addition the scoring of individuals on the latent dimensions. The general framework presented allows, not only the analysis of manifest variables all of one type but also the simultaneous analysis of a collection of variables with different distributions. The approach used analyzes the data as they are by making assumptions about the distribution of the manifest variables directly.

The Akaike information criterion (AIC; Akaike, 1973) is a popular method for comparing the adequacy of multiple, possibly nonnested models. Current practice in cognitive psychology is to accept a single model on the basis of only the "raw" AIC values, making it difficult to unambiguously interpret the observed AIC differences in terms of a continuous measure such as probability. Here we demonstrate that AIC values can be easily transformed to so-called Akaike weights (e.g., Akaike, 1978, 1979; Bozdogan, 1987; Burnham & Anderson, 2002), which can be directly interpreted as conditional probabilities for each model. We show by example how these Akaike weights can greatly facilitate the interpretation of the results of AIC model comparison procedures.
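The AIC-to-weights transformation described above can be stated in a few lines (a generic implementation of the published formula, with made-up AIC values): w_i = exp(-Δ_i/2) / Σ_j exp(-Δ_j/2), where Δ_i = AIC_i − min AIC.

```python
import math

def akaike_weights(aics):
    """Transform raw AIC values into Akaike weights, interpretable as
    conditional probabilities of each model given the candidate set."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]  # relative likelihoods
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AICs for three competing models; lower AIC -> larger weight.
weights = akaike_weights([202.4, 204.4, 210.1])
print([round(w, 3) for w in weights])
```

Because the weights are normalized relative likelihoods, they sum to one, and differences in raw AIC translate directly into probability statements, which is the interpretive gain the abstract emphasizes.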

Count data naturally arise in several areas of cognitive ability testing, e.g., processing speed, memory, verbal fluency, and divergent thinking. Contemporary count data item response theory models, however, are not flexible enough, especially to account for over- and underdispersion at the same time. For example, the Rasch Poisson counts model assumes equidispersion (conditional mean and variance coincide), which is often violated in empirical data. This work introduces the Conway-Maxwell-Poisson counts model that can handle underdispersion (variance lower than the mean), equidispersion, and overdispersion (variance larger than the mean) in general and specifically at the item level. A simulation study revealed satisfactory parameter recovery at moderate sample sizes and mostly unbiased standard errors for the proposed estimation approach. In addition, plausible empirical reliability estimates resulted, while those based on the Rasch Poisson counts model were biased downwards (underdispersion) and biased upwards (overdispersion) when the simulation model deviated from equidispersion. Finally, verbal fluency data were analyzed and the Conway-Maxwell-Poisson counts model with item-specific dispersion parameters fit the data best. Dispersion parameter estimates indicated underdispersion for three out of four items. Overall, these findings indicate the feasibility and importance of the suggested flexible count data modeling approach.
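
The dispersion mechanism can be made concrete with the Conway-Maxwell-Poisson pmf, P(X = x) ∝ λ^x / (x!)^ν, where ν = 1 recovers the Poisson distribution, ν > 1 yields underdispersion, and 0 < ν < 1 overdispersion. A minimal sketch, assuming the normalizing constant is approximated by truncating its series (this is not the authors' estimation code):

```python
import math

def cmp_pmf(x, lam, nu, terms=200):
    """Conway-Maxwell-Poisson pmf: P(X = x) = lam**x / ((x!)**nu * Z).

    nu = 1 recovers the Poisson distribution (equidispersion);
    nu > 1 yields underdispersion, 0 < nu < 1 overdispersion.
    The normalizing constant Z is approximated by truncating its series,
    evaluated in log space for numerical stability.
    """
    log_terms = [j * math.log(lam) - nu * math.lgamma(j + 1) for j in range(terms)]
    m = max(log_terms)
    log_z = m + math.log(sum(math.exp(t - m) for t in log_terms))
    return math.exp(x * math.log(lam) - nu * math.lgamma(x + 1) - log_z)
```

Setting nu = 1 should reproduce the ordinary Poisson pmf, which makes a convenient sanity check.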

Item-response theory (IRT) models are test-theoretical models with many practical implications for educational measurement. For example, test-linking procedures and large-scale educational studies often build on IRT frameworks. However, IRT models have rarely been applied to divergent thinking, which is one of the most important indicators of creative potential. This is most likely due to the fact that the best-known models, such as the one-parameter logistic Rasch model, can only be used for binary data. But its lesser-known, and often overlooked, predecessor, the Rasch Poisson count model (RPCM), is well suited to model many important divergent-thinking outcomes such as fluency. In the current study we assessed RPCM fit to four different divergent thinking tasks. We further assessed the fit of the data to a two-dimensional variant of the RPCM to take into account construct differences due to verbal and figural task modality. We also compared estimated measurement precision based on the two-dimensional model, two separately estimated modality-specific unidimensional models, and a classic approach. The results indicated that the two-dimensional approach was advantageous, especially when correlations of latent variables are of interest. The RPCM and its more flexible multidimensional variants are discussed as a psychometric tool which possibly directs future research towards a better understanding of all the available divergent-thinking tasks.
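
The RPCM's core assumption can be illustrated by simulation: a person's expected count on an item is exp(theta_p + b_i), with person ability theta_p and item easiness b_i entering a log link, and every item discriminating equally. A small stdlib-only sketch (parameter values are hypothetical):

```python
import math
import random

def rpois(lam, rng):
    """Draw from a Poisson distribution via Knuth's multiplication method."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_rpcm(thetas, easiness, rng):
    """Simulate an RPCM data matrix: counts[p][i] ~ Poisson(exp(theta_p + b_i))."""
    return [[rpois(math.exp(t + b), rng) for b in easiness] for t in thetas]

rng = random.Random(42)
easiness = [0.5, -0.5]  # two hypothetical items differing only in easiness
high = simulate_rpcm([1.0] * 2000, easiness, rng)   # more able persons
low = simulate_rpcm([0.0] * 2000, easiness, rng)    # less able persons
mean_high = sum(sum(row) for row in high) / len(high)
mean_low = sum(sum(row) for row in low) / len(low)
```

As expected under the model, the more able group produces higher total counts on average (about exp(1.5) + exp(0.5) versus exp(0.5) + exp(-0.5) per person).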

Divergent thinking, as a method of examining creative cognition, has not been adequately analyzed in the context of modern cognitive theories. This article casts divergent thinking responding in the context of theories of memory search. First, it was argued that divergent thinking tasks are similar to semantic fluency tasks, but are more constrained, and less well structured. Next, response time distributions from 54 participants were analyzed for temporal and semantic clustering. Participants responded to two prompts from the alternative uses test: uses for a brick and uses for a bottle, for two minutes each. Participants’ cumulative response curves were negatively accelerating, in line with theories of search of associative memory. However, results of analyses of semantic and temporal clustering suggested that clustering is less evident in alternative uses responding compared to semantic fluency tasks. This suggests that divergent thinking responding does not involve an exhaustive search through a clustered memory trace, but rather that the process is more exploratory, yielding fewer overall responses that tend to drift away from close associates of the divergent thinking prompt.

Semantic distance is a promising automated measure of creativity. However, it is not yet known whether semantic distance can assess creative products that are both novel and appropriate. To isolate novelty and appropriateness, participants were asked to generate a verb in response to a given noun in 3 different ways: (a) generate appropriate but not novel responses (common cue), (b) generate novel but not appropriate responses (random cue), and (c) generate responses that are both novel and appropriate (creative cue). Automated semantic distance scores and subjective ratings of creativity, novelty, and appropriateness were assessed. When participants were explicitly cued to be creative, the increased semantic distance of their responses represented increases in novelty that was constrained by an appropriateness criterion (Experiments 1 and 2). Participants cued to generate random responses had the highest semantic distance scores, but without applying the appropriateness criterion, their creativity scores suffered (Experiments 1 and 2). Additionally, participants appeared to implicitly apply the appropriateness criterion when generating creative responses (Experiment 2). In conclusion, automated measures of semantic distance can assess novel and appropriate creative responses while avoiding the pitfalls inherent to subjective ratings of creativity.

Divergent thinking has often been used as a proxy measure of creative thinking, but this practice lacks a foundation in modern cognitive psychological theory. This article addresses several issues with the classic divergent-thinking methodology and presents a new theoretical and methodological framework for cognitive divergent-thinking studies. A secondary analysis of a large dataset of divergent-thinking responses is presented. Latent semantic analysis was used to examine the potential changes in semantic distance between responses and the concept represented by the divergent-thinking prompt across successive response iterations. The results of linear growth modeling showed that although there is some linear increase in semantic distance across response iterations, participants high in fluid intelligence tended to give more distant initial responses than those with lower fluid intelligence. Additional analyses showed that the semantic distance of responses significantly predicted the average creativity rating given to the response, with significant variation in average levels of creativity across participants. Finally, semantic distance does not seem to be related to participants’ choices of their own most creative responses. Implications for cognitive theories of creativity are discussed, along with the limitations of the methodology and directions for future research.

Purpose – The purpose of this paper is to provide new elements to understand, measure and predict managerial creativity. More specifically, based on new approaches to creative potential (Lubart et al., 2011), this study proposes to distinguish two aspects of managerial creative problem solving: divergent-exploratory thinking, in which managers try to generate several new solutions to a problem; and convergent-integrative thinking, in which managers select and elaborate one creative solution. Design/methodology/approach – In this study, personality is examined as a predictor of managerial creative problem solving: On one hand, based on previous research on general divergent thinking (e.g. Ma, 2009), it is hypothesized that managerial divergent thinking is predicted by high openness to experience and low agreeableness. On the other hand, because efficient people management involves generating satisfying and trustful social interactions, it is hypothesized that convergent-integrative thinking ability is predicted by high agreeableness. In all, 137 adult participants completed two divergent-exploratory thinking managerial tasks, two convergent-integrative thinking managerial tasks, and the Big Five Inventory (John and Srivastava, 1999).
Findings – As expected, divergent-exploratory thinking was predicted by openness to experience (r = 0.21; p < 0.05) and agreeableness (r = −0.22; p < 0.05), and the convergent-integrative thinking part of managerial creative problem solving was predicted by agreeableness (r = 0.28; p < 0.001). Originality/value – Contrary to most research on managerial creativity (e.g. Scratchley and Hakstian, 2001), the study focuses (and provides measurement guidelines) on both divergent and convergent thinking dimensions of creative potential. This study replicates and extends previous results regarding the link between personality (especially agreeableness) and managerial creativity.

Bifactor latent structures were introduced over 70 years ago, but only recently has bifactor modeling been rediscovered as an effective approach to modeling construct-relevant multidimensionality in a set of ordered categorical item responses. I begin by describing the Schmid-Leiman bifactor procedure (Schmid & Leiman, 1957), and highlight its relations with correlated-factors and second-order exploratory factor models. After describing limitations of the Schmid-Leiman procedure, two newer methods of exploratory bifactor modeling are considered, namely, analytic bifactor (Jennrich & Bentler, 2011) and target bifactor rotations (Reise, Moore, & Maydeu-Olivares, 2011). In section two, I discuss limited and full-information estimation approaches to confirmatory bifactor models that have emerged from the item response theory and factor analysis traditions, respectively. Comparison of the confirmatory bifactor model to alternative nested confirmatory models and establishing parameter invariance for the general factor also are discussed. In the final section, important applications of bifactor models are reviewed. These applications demonstrate that bifactor modeling potentially provides a solid foundation for conceptualizing psychological constructs, constructing measures, and evaluating a measure's psychometric properties. However, some applications of the bifactor model may be limited due to its restrictive assumptions.

The concept of information functions developed for dichotomous item response models is adapted for the partial credit model. The information function is explained in terms of the model parameters and scoring functions. The relationship between the item information function and the expected score function is also discussed. The information function is then used to investigate the effect of collapsing and recoding categories of polytomously‐scored items of the National Assessment of Educational Progress (NAEP). Finally, the NAEP writing items are calibrated and the item and test information is used to discuss desirable properties of polytomous items.
Keywords: item response model; polytomous item response model; partial credit model; information function; NAEP
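
For the partial credit model, the item information described above takes a convenient form: it equals the conditional variance of the item score given the latent trait. A minimal sketch (the step-difficulty values used in the check below are hypothetical):

```python
import math

def pcm_probs(theta, deltas):
    """Category probabilities for the partial credit model.

    deltas holds the step difficulties delta_1..delta_m for an item
    scored 0..m; probabilities follow the divide-by-total form.
    """
    cum = [0.0]
    for d in deltas:
        cum.append(cum[-1] + (theta - d))   # cumulative sum of (theta - delta_j)
    denom = sum(math.exp(c) for c in cum)
    return [math.exp(c) / denom for c in cum]

def pcm_item_information(theta, deltas):
    """Item information = conditional variance of the item score at theta."""
    probs = pcm_probs(theta, deltas)
    mean = sum(k * p for k, p in enumerate(probs))
    return sum((k - mean) ** 2 * p for k, p in enumerate(probs))
```

With a single step difficulty the model reduces to the dichotomous Rasch model, so the information at theta = delta should be 0.25, i.e., p(1 − p) at p = 0.5.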

This study introduces an item response theory—zero-inflated Poisson (IRT—ZIP) model to investigate psychometric properties of multiple items and predict individuals’ latent trait scores for multivariate zero-inflated count data. In the model, two link functions are used to capture two processes of the zero-inflated count data. Item parameters are included to investigate item performance from both propensity and level perspectives. The application of the model was illustrated by analyzing the substance use data from the National Longitudinal Study of Youth. A simulation study based on the empirical data analysis scenario showed that the item parameters can be recovered accurately and precisely with adequate sample sizes. Limitations and future directions are discussed.
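
The two-process structure can be sketched as a mixture pmf: a logit link governs whether a person engages in the behavior at all (the propensity process), and a log link governs the Poisson intensity given engagement (the level process). The parameterization below is illustrative, not necessarily the paper's exact one:

```python
import math

def zip_irt_pmf(y, theta_prop, theta_level, a1, d1, a2, d2):
    """Zero-inflated Poisson pmf driven by two IRT-style link functions.

    Illustrative parameterization (all symbols are assumptions):
      propensity process: logit(p_use) = a1 * theta_prop + d1
      level process:      log(lam)     = a2 * theta_level + d2
    With probability (1 - p_use) the response is a structural zero;
    otherwise the count follows Poisson(lam).
    """
    p_use = 1.0 / (1.0 + math.exp(-(a1 * theta_prop + d1)))
    lam = math.exp(a2 * theta_level + d2)
    pois = math.exp(-lam) * lam ** y / math.factorial(y)
    return (1.0 - p_use) + p_use * pois if y == 0 else p_use * pois
```

A defining feature of zero inflation is that the probability of a zero exceeds the plain Poisson zero probability at the same intensity.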

This research monograph on the antecedents and correlates of creativity in school-aged children discusses implications of measures of intelligence versus measures of creativity and attempts an interpretation of the psychological requirements for creative products in children. Harvard Book List (edited) 1971 #624 (PsycINFO Database Record (c) 2012 APA, all rights reserved)

Reports on the evolution of the study of creativity. Beginning with Galton's studies on the impact of heredity upon genius, Guilford points out that relatively few psychologists have turned their attention to this problem. Only those who have a particular interest in the measurement of intellectual capacity have been unable to avoid contact with the creative aspect of man, but the history of the intelligence test movement shows that in its early development it has been singularly devoid of contact with measures of ingenuity, innovative capacity, or inventiveness. Some nonpsychological attempts at attacking the problem of creativity are discussed. Since 1950, efforts to establish the nature of creativity have been somewhat more fruitful, and the promise of more effective basic research on creative thinking is discussed. (32 ref.)

Creativity can be broadly defined as a combination of interacting individual and environmental resources leading to the production of valuable solutions. This paper concentrates on the type of creativity that can be expressed in solving social problems. After reviewing the potentially relevant psychological and contextual variables intervening in social creativity, leading to individual differences in this capacity, we present results of a study testing the nomological validity of social creativity in a group of 70 pre-adolescents. The findings indicate that social creativity performance is linked with socially relevant variables such as social competencies, popularity, and parenting style. Finally, we discuss the relevance of a creativity approach in social domains such as violence prevention programs and education.

How strongly is creativity related to intelligence? Although a large body of work has found a small relationship between them, there are reasons to suspect that their relationship has been underestimated. Most studies have assessed creativity and intelligence with observed scores, not as latent variables, and few studies have examined higher-order latent intelligence factors. A sample of university students (n = 226) completed divergent thinking tasks and measures of fluid reasoning, verbal fluency, and strategy generation. Creativity was modestly related to the three lower-order cognitive factors, but it was substantially related (β = .43) to a higher-order intelligence factor composed of the lower-order factors. This effect declined (β = .26) when openness to experience, a likely confounding variable, was considered.

The model selection literature has been generally poor at reflecting the deep foundations of the Akaike information criterion (AIC) and at making appropriate comparisons to the Bayesian information criterion (BIC). There is a clear philosophy, a sound criterion based in information theory, and a rigorous statistical foundation for AIC. AIC can be justified as Bayesian using a “savvy” prior on models that is a function of sample size and the number of model parameters. Furthermore, BIC can be derived as a non-Bayesian result. Therefore, arguments about using AIC versus BIC for model selection cannot be from a Bayes versus frequentist perspective. The philosophical context of what is assumed about reality, approximating models, and the intent of model-based inference should determine whether AIC or BIC is used. Various facets of such multimodel inference are presented here, particularly methods of model averaging.

Parasite communities are arranged into hierarchical levels of organization, covering various spatial and temporal scales. These range from all parasites within an individual host to all parasites exploiting a host species across its geographic range. This arrangement provides an opportunity for the study of patterns and structuring processes operating at different scales. Across the parasite faunas of various host species, several species-area relationships have been published, emphasizing the key role of factors such as host size or host geographical range in determining parasite species richness. When corrections are made for unequal sampling effort or phylogenetic influences, however, the strength of these relationships is greatly reduced, casting a doubt over their validity. Component parasite communities, or the parasites found in a host population, are subsets of the parasite fauna of the host species. They often form saturated communities, such that their richness is not always a reflection of t

We empirically test existing theories on the provision of public goods, in particular air quality, using data on sulfur dioxide (SO2) concentrations from the Global Environment Monitoring Projects for 107 cities in 42 countries from 1971 to 1996. The results are as follows: First, we provide additional support for the claim that the degree of democracy has an independent positive effect on air quality. Second, we find that among democracies, presidential systems are more conducive to air quality than parliamentary ones. Third, in testing competing claims about the effect of interest groups on public goods provision in democracies we establish that labor union strength contributes to lower environmental quality, whereas the strength of green parties has the opposite effect.

We develop a general class of factor-analytic models for the analysis of multivariate (truncated) count data. Dependencies in multivariate counts are of interest in many applications, but few approaches have been proposed for their analysis. Our model class allows for a variety of distributions of the factors in the exponential family. The proposed framework includes a large number of previously proposed factor and random effect models as special cases and leads to many new models that have not been considered so far. Whereas previously these models were proposed separately as different cases, our framework unifies these models and enables one to study them simultaneously. We estimate the Poisson factor models with the method of simulated maximum likelihood. A Monte-Carlo study investigates the performance of this approach in terms of estimation bias and precision. We illustrate the approach in an analysis of TV channels data.
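
The kind of dependence such a Poisson factor model induces is easy to demonstrate by simulation: counts that load on a common factor are marginally correlated even though they are conditionally independent given the factor. A stdlib-only sketch (intercepts and loadings are hypothetical):

```python
import math
import random

def rpois(lam, rng):
    """Draw from a Poisson distribution via Knuth's multiplication method."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_poisson_factor(n, intercepts, loadings, rng):
    """Counts y_ij ~ Poisson(exp(b_j + a_j * f_i)) with factor f_i ~ N(0, 1)."""
    data = []
    for _ in range(n):
        f = rng.gauss(0.0, 1.0)
        data.append([rpois(math.exp(b + a * f), rng)
                     for b, a in zip(intercepts, loadings)])
    return data

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / math.sqrt(vx * vy)

rng = random.Random(1)
data = simulate_poisson_factor(4000, [0.5, 0.2], [0.8, 0.8], rng)
r = corr([row[0] for row in data], [row[1] for row in data])
```

With both loadings set to 0.8 the two items share substantial variance through the factor, so the marginal correlation of the counts is clearly positive.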