Autumn 2019 – Vol. 11 No. 3
ISSN 2040-4069
In this edition, we feature articles on:
• Leadership
• Diversity
• Organisational wisdom
• Good practice
ADM
Assessment & Development Matters
Approachable Psychometrics
We offer HR professionals the potential
to make employee selection and
development as efficient, objective, and
reliable as possible.
By using our growing selection of
occupational assessments and verified
training courses, businesses can identify
new potential and improve existing
resources.
Our tests include:
• NEO Personality Inventory
• Leadership Judgement Indicator
• The Dark Triad of Personality at Work
• Creative Response Evaluation - Work
• The Positivity Test
Explore our full range of psychometric
tests at www.hogrefe.co.uk or contact us
to discuss your assessment needs.
Hogrefe Ltd
Tel. +44 (0)1865 797920
customersupport@hogrefe.co.uk
@hogrefeltd
/company/hogrefe-ltd
/hogrefeltd
Editorial
WELCOME to a rich and diverse Autumn 2019 edition of Assessment and Development
Matters. We begin our musings with an article by Bywater and Lewis on the
challenges presented to leaders by an uncertain and volatile world. That’s followed
by Shalfrooshan discussing how a new psychometric instrument can help an ethnically
diverse organisation to develop intercultural skills among its workforce. Educational
assessments are a tricky process, hedged around by legislation and registration, and
Cochrane’s helpful article explores how to maintain good practice in this area.
We follow this with an exploration of organisational wisdom – a new concept to some,
maybe? Cynicism aside, Hudson’s article gives us much to ponder, before we move on to
a helpful article by Aspin, describing her experiences going through the grandparenting
process for registration in forensic testing. Although this specific route to the RQTU is
now closed (would-be registrants now go through the standard qualification procedure)
we hope that a new area for the RQTU, offering registration for those working in the
health and social care arena, will be coming soon. Julie’s article shows how a task which
seems initially daunting becomes much more accessible as you work through it.
Our next article is more or less a reprint of a statement available to all of you on
the Psychological Testing Centre website – but it’s so important that we felt it should be
reproduced here too. Essentially, it’s about when or under what circumstances you can
(or more importantly can’t) disclose the results of tests, given ethical practice, copyright
regulations, and GDPR. Following that, Hugh McCredie has produced a further exploration
of psychological pioneers, as he discusses the work of those attempting to identify higher-
order factors of intelligence. And we end by introducing a new series in which experienced
psychologists are invited to muse on the changes they have seen during the period they have
been practising, beginning with Thomson on educational testing.
Add to all that, of course, our usual sprinkling of other items, including Jo Horne’s
regular update on the psychological tests which have successfully passed through the Test
Review process. I hope you enjoy it all.
Nicky Hayes
Senior Editor, Assessment & Development Matters
Copyright for published material rests with the British Psychological Society unless specifically stated otherwise.
As the Society is a party to the Copyright Licensing Agency (CLA) agreement, articles published in Assessment &
Development Matters may be copied by libraries and other organisations under the terms of their own CLA licences
(www.cla.co.uk). Permission must be obtained from the British Psychological Society for any other use beyond fair
dealing authorised by copyright legislation. For further information about copyright and obtaining permissions,
go to www.bps.org.uk/permissions or e-mail permissions@bps.org.uk.
Assessment & Development Matters features a wide range of articles on
educational, forensic and occupational testing and brings practitioners the
latest news and perspectives on assessment and development. If you would
like to submit articles, the submission guidelines can be found here:
https://ptc.bps.org.uk/information-and-resources/assessment-and-
development-matters-adm/submit-article
The views expressed in the following articles are those
of the individual contributors and do not represent the
views of the British Psychological Society or the editors.
Leadership: What competencies does it
take to remain engaged as a leader in a
VUCA world?
James Bywater & James Lewis
Key digested message
What does it take to cope in a VUCA world? This research extends the work published
in 2018 to examine which behavioural competencies enhance the engagement
levels of leaders in periods of high change. It uses substantial samples of real leaders
occupying senior jobs in large organisations and measures personal engagement
levels as the dependent variable. It seeks to answer the question ‘Who thrives as a
leader under varying degrees of change and uncertainty?’ This is important because
if a leader becomes disengaged there is little prospect of them being able to lead
others through organisational change.
Introduction
LEADERSHIP is 'the art of mobilising others to want to struggle for shared
aspirations' (Kouzes & Posner, 1995). A key task within this is to lead an organisation
through change. Lewin famously described this process as ‘unfreeze, move, refreeze’.
However, some more modern writers have identified that a rapidly changing context
rarely gives rise to the opportunity to refreeze. In a VUCA world (Volatile, Uncertain,
Chaotic, Ambiguous), it is argued, change becomes constant. Although not everyone likes
the term (Briner, 2015), there has been some research into the traits and competencies
needed to deal with this.
Learning Agility is one example of an important skill in this context. It is defined as
'the ability and willingness to learn from experience, and subsequently apply that learning
to perform successfully under new or first-time conditions' (Lombardo & Eichinger, 2000;
Korn Ferry, 2017b). Similarly, the HERO model (Luthans et al., 2007) of 'Psychological
Capital' postulates some of the key attributes needed to cope with change, and there has
been some research into the psychological Traits and Drivers that underpin these
(Bywater & Lewis, 2017).
Leaders need to cope with this VUCA environment at a personal level, but they also
need to guide and navigate a team or an organisation through this transition. Engagement
is defined as the level of commitment that employees have towards the organisation, as
well as their willingness to contribute discretionary effort and go the extra mile (Royal &
Agnew, 2011). The engagement level of leaders is important because they play a critical
‘sense-making’ role in times of change, helping employees understand new developments
in the organisation and the implications for their teams and job responsibilities.
The competency movement dates back at least as far as McClelland (1973). It focuses
upon the skills and behaviours associated with success (Lombardo & Eichinger, 2009)
and has been an important currency in HR for describing superior performance.
The Aberdeen Group (2007) shows that competency models continue to be important for
aligning with business objectives, increasing workforce nimbleness, and identifying and
retaining top talent, among other uses.
Numerous generic leadership competency models exist; however, there has been little
empirical work on the behavioural competencies needed to thrive and cope in a VUCA
world. The models that have been put forward for dealing with change include the
LIVED Model (Hughes, 2015), Contextual Intelligence (Kutz & Bamford-Wade, 2013),
Vertical Leadership Competencies (Petrie, 2015) and Cognitive Readiness (Bawany, 2016).
One of the more thoroughly explained examples comes from Joiner & Josephs (2006),
who identify four competencies, or 'Leadership Agilities', that they see as critical for
coping in a VUCA world:
Context-setting agility improves your ability to scan your environment, frame the
initiatives you need to take, and clarify the outcomes you need to achieve.
Stakeholder agility increases your ability to engage with key stakeholders in ways that
build support for your initiative.
Creative agility enables you to transform the problems you encounter into the results
you need.
Self-leadership agility is the ability to use your initiatives as opportunities to develop
into the kind of leader you want to be.
However, one limitation of all of these competency models is that they appear to be
rationally derived, with little empirical backing for how they have been chosen for their
relevance in a VUCA world.
Objective
This study evaluates the Leadership Agility model of leadership competencies across a
variety of change environments to examine which behavioural competencies are critical for
predicting the personal engagement level of leaders undergoing different amounts of change.
Design
This was a field-based study of senior managers employed in private sector roles. All
participants completed an item response theory (IRT) based competency measurement
questionnaire as part of high stakes executive assessments across a wide sample of
organisations.
Method
A convenience sample of 27,699 respondents was assessed using an FC-IRT scored
personality inventory (Korn Ferry, 2017a), delivered online in a supervised environment,
usually as part of an executive assessment. These were extracted from the databases of
organisations using the questionnaire for a variety of assignments, estimated to be around
80 per cent for recruitment and 20 per cent for development purposes. The specific
details were:
• 64 per cent male, 30 per cent female, 6 per cent not stated.
• 52 per cent mid/upper-level leader and 19 per cent senior executive.
• Age: median 43; range 20–74 years.
• 62 per cent large organisations (>5000 employees) and 19 per cent medium-sized organisations (100–4999).
• 70 per cent publicly traded; 19 per cent privately held.
Measures
1. The competency inventory measures 30 competencies aimed at capturing the
behaviours associated with managerial success (Korn Ferry, 2017a). It is FC-IRT scored
using structured competencies and benchmarked against a large international leadership
comparison group. It was then mapped onto the Joiner & Josephs (2006) Leadership
Agilities model for reporting purposes.
2. The respondents were also asked to rate a number of questions to identify the level of
change in their job and environment. These questions were scored and classified into
'Stable', 'Evolution' and 'Revolution' environments.
3. The respondents also rated themselves on their current personal engagement levels
using a 10-statement inventory (rtt = .82). These were classified into Average and High
Engagement.
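As a rough illustration of how measures of this kind can be scored, here is a minimal Python sketch. The change-score cut-offs and data shapes are hypothetical assumptions, not details published in the article, and the reliability function shows one common coefficient (Cronbach's alpha) of the sort a figure like rtt = .82 could be based on.

```python
import numpy as np

def classify_change(change_ratings):
    """Classify summed change ratings into an environment.

    The cut-offs of 15 and 30 are purely illustrative; the article
    does not publish the actual scoring rules.
    """
    total = sum(change_ratings)
    if total < 15:
        return 'Stable'
    if total < 30:
        return 'Evolution'
    return 'Revolution'

def cronbach_alpha(items):
    """Internal consistency of an inventory.

    `items` is an (n_respondents, n_items) array, e.g. responses to
    the 10-statement engagement inventory described above.
    """
    items = np.asarray(items, dtype=float)
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    k = items.shape[1]
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
```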
Results
Finding 1: Meaningful differences between different change environments
Eleven of the 30 competencies clearly differentiate between those who 'cope' and those
who 'thrive' at the different levels of change in their business contexts.
Persuading is one example of this. Mapping onto the 'Stakeholder Agility' domain,
it is defined as 'Using compelling arguments to gain the support and commitment of
others.' The leaders who thrive in all business environments have significantly higher
levels of persuasion skills. It looks like this is a core skill needed to navigate and thrive in
most leadership roles.
Figure 1: The relationship between Persuading Competency and Engagement in different change contexts
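To make the comparison behind a figure like this concrete, the following sketch computes mean competency scores by change context and engagement group. The records and score values are invented for illustration, not taken from the study's data.

```python
import pandas as pd

# Hypothetical records: one row per leader, with their change context,
# engagement classification and a standardised Persuading score.
df = pd.DataFrame({
    'context':    ['Stable', 'Stable', 'Evolution', 'Evolution',
                   'Revolution', 'Revolution'],
    'engagement': ['Average', 'High'] * 3,
    'persuading': [0.1, 0.6, 0.0, 0.5, -0.1, 0.4],
})

# Mean Persuading score by context and engagement level; the gap between
# the High and Average columns within each context is what separates
# leaders who 'thrive' from those who merely 'cope'.
print(df.groupby(['context', 'engagement'])['persuading'].mean().unstack())
```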
Finding 2: Change as a continuum
Unlike the data on personality traits and drivers (Bywater & Lewis, 2017), this study does
not suggest that a new competency set is needed in the most extreme ('revolution')
change environments. Instead, change looks like a continuum along which the same set of
competencies carries the leader through.
This was a common finding. The list of competencies which are required across all
change levels, together with the Leadership Agility that they map onto, is in Table 1.
Table 1: The key competencies possessed by the most engaged leaders

Competency needed to be highly engaged in a high-change environment | Leadership Agility from Joiner & Josephs (2006)
Global Perspective       | Context Setting
Manages Conflict         | Stakeholder Agility
Instils Trust            | Stakeholder Agility
Persuades                | Stakeholder Agility
Builds Networks          | Stakeholder Agility
Being Resilient          | Self-Leadership Agility
Action Oriented          | Self-Leadership Agility
Manages Ambiguity        | Creative Agility
Drives Results           | No clear map
Ensures Accountability   | No clear map
Optimises Work Processes | No clear map
These 11 competencies map reasonably well onto the four domains of Leadership Agilities
(Joiner & Josephs, 2006), with at least one competency represented for each Agility.
Stakeholder Agility comes up on four occasions, suggesting that productive
relationships with others have a positive effect upon the engagement levels of the leader
in a wide variety of ways.
Finding 3: Transactional Competencies can be motivational
There are a number of competencies in Table 1 that are important for engagement but
are missing ('no clear map' in Table 1) from this transformational Leadership Agility
model (Joiner & Josephs, 2006). These competencies include Driving Results, Ensuring
Accountability and Optimising Processes. The behaviours described by these are quite
operational and come from the Results Domain (Korn Ferry, 2018) – energising,
focussing and driving efficiencies.
The competency 'Drives Results' is one example of this (see Figure 2). It does not
map onto the Leadership Agility model; instead it is defined as 'Consistently achieving
results, even under tough circumstances.'
Figure 2: The relationship between Drives Results Competency and Engagement in different change contexts
The importance of this competency across all change levels suggests that one way to stay
engaged as a leader, even in really high-change environments, is to focus on results and
get 'stuff' done. Many sports have a similar mantra – in tough situations, get some easy
'points' on the board.
Finding 4: Strong convergence with results from other studies using 360 data
The list of competencies in Table 1 looks very similar to externally published data using
360-degree ratings, showing the most important competencies for success (Korn Ferry, 2018).
This has been reproduced in Table 2, which shows the key competencies linked with
overall performance across all levels in organisations.
Table 2: The key competencies correlated with overall performance across all levels in organisations

Competency             | Competency 'Factor'
Action oriented        | Results
Drives results         | Results
Ensures accountability | Results
Source: Korn Ferry (2018)

This convergence between these alternative assessment methods reiterates the significance
of these competencies in organisations. In essence, making things happen and driving
for results gets rewarded (Bywater, 2019).
Finding 5: Forgotten Gems
It is noticeable that a few competencies seem to lose their traction in really high-change
environments. These include:
• Nimble Learning
• Self-Development
• Develops Talent
• Directs Work
'Develops Talent' is one example of this (see Figure 3). It maps onto Stakeholder Agility.
It is defined as 'Developing people to meet both their career goals and the organisation's
goals.' At low levels of change it is quite a powerful differentiator between the most
engaged leaders and the rest. At high levels of change ('revolution'), however, this drops
to zero.
Figure 3: The relationship between Develops Talent Competency and Engagement in different change contexts
This suggests that these competencies lose their ability to motivate and reinforce
leaders in very high-change environments. This may be because leaders feel that they are
too busy, or too unclear about the future, to use these behaviours. This is a shame because
much of the research on employees in organisations shows that they desire coaching and
development as part of their psychological contract, and a loss of focus on this can reduce
engagement levels in the teams around these individuals (Korn Ferry Hay Group, 2015).
Discussion
We would suggest that the results show:
1. The amount of change that is experienced by a leader affects their engagement levels.
This three-level classification of Stable, Evolution and Revolution continues to show
useful and measurable trends in the competencies needed to cope with this.
2. At high levels of change, the desire to achieve ‘something’ can be a goal in itself and
can be self-motivational. This was not evident in some of the theoretical models of
coping in a VUCA environment.
3. There was less clear differentiation in the skills needed in the highest-change
environments than was seen when using Traits and Drivers (Bywater & Lewis, 2017).
This suggests that the things that keep people really engaged in a truly VUCA
environment run deeper than behavioural competencies; these are the Traits and
Drivers.
4. Finally, these data look at what it takes for leaders to cope and thrive in high-change
environments. However, Bywater & Lewis (2017) found generally elevated levels
of engagement at these senior levels, especially in high-change environments. The
challenge for leaders is thus to infect their teams with these levels of engagement
– taking the time to develop and engage others around them. Some ‘nice to do’
competencies such as Developing Others look like forgotten gems in very high change
environments. These should be easy coaching conversations.
The authors
James Bywater and James Lewis are consultant business psychologists at Korn Ferry.
References
Aberdeen Group (2007). Competency management: The link between talent management and
optimum business results. http://www.assess.co.nz/pages/AberdeenStudy.pdf
Bawany, S. (2016). Leading in a VUCA world. https://www.executivedevelopment.com/
leading-vuca-world
Briner, R. (2015). What's the evidence for... change management? HR Magazine, July 27.
http://www.hrmagazine.co.uk/article-details/whats-the-evidence-for-change-
management
Bywater, J. & Lewis, J. (2017). Leadership: What does it take to remain engaged as a
leader in a VUCA world? Assessment & Development Matters, 9(4).
Bywater, J. (2019). Leadership ready reckoner: What does it take to cope, survive and
thrive as a leader in a VUCA world? Assessment & Development Matters, 11(1), 20–25.
Hughes, D. (2015). Leadership assessment for a VUCA world. International Congress
on Assessment Center Methods, 3rd November. https://www.assessmentcenters.
org/Assessmentcenters/media/2015-SanDiego/Leadership-Assessment-for-a-VUCA-
World.pdf
Joiner, B. & Josephs, S. (2006). Leadership agility. San Francisco: Jossey-Bass.
Korn Ferry Hay Group (2015). Organizational climate global climate norms update –
supporting collateral for Styles and Climate 2.0. November 20. Korn Ferry Hay Group.
Korn Ferry (2016). Engaging through change. Korn Ferry. http://engage.kornferry.
com/engaging-through-change-report-blog
Korn Ferry (2017a). Korn Ferry's Four Dimensional Executive Assessment – Research guide and
technical manual. Korn Ferry. https://dsqapj1lakrkc.cloudfront.net/media/sidebar_
downloads/KF4D_Executive_Manual_FINAL.pdf
Korn Ferry (2017b). The ViaEdge technical manual. Korn Ferry. https://dsqapj1lakrkc.
cloudfront.net/media/sidebar_downloads/82205-viaEdge-Technical-Manual.pdf
Korn Ferry (2018). Korn Ferry Leadership Architect global competency framework – Research
guide and technical manual. https://dsqapj1lakrkc.cloudfront.net/
media/sidebar_downloads/82277-KFLA-TM-NAV_reposted_032018.pdf
Kouzes, J.M. & Posner, B.Z. (1995). The leadership challenge. San Francisco: Jossey-Bass.
Kutz, M.R. & Bamford-Wade, A. (2013). Understanding contextual intelligence: A critical
competency for today's leaders. Emergence: Complexity and Organization, 15, 55–80.
Lombardo, M.M. & Eichinger, R.W. (2000). High potentials as high learners. Human
Resource Management, 39(4), 321–330.
Luthans, F., Youssef-Morgan, C.M. & Avolio, B.J. (2007). Psychological capital: Developing the
human competitive edge. New York: Oxford University Press.
McClelland, D.C. (1973). Testing for competence rather than for 'intelligence'. American
Psychologist, 28, 1–14.
Petrie, N. (2015). Vertical leadership development. Center for Creative Leadership. http://
www.ccl.org/wp-content/uploads/2015/04/VerticalLeadersPart1.pdf
Royal, M. & Agnew, T. (2011). The enemy of engagement: Put an end to workplace frustration
and get the most from your employees. New York: AMACOM.
ACRONYMS & ABBREVIATIONS
Each issue we’ll list three common acronyms in use in psychometric circles. Feel free to send in
suggestions!
VUCA Volatile, Uncertain, Chaotic and Ambiguous.
ATU Assistant Test User (a grade of membership of RQTU).
HEXACO Honesty/humility, Emotionality, Extraversion, Agreeableness,
Conscientiousness and Openness (proposed as an alternative to the
Big Five).
Working with diversity: Defining and
assessing intercultural competence
Ali Shalfrooshan, Philippa Riley & Mary Mescal
Key digested message
In this article, the authors share the development of an online tool for the Metropolitan
Police Service to assess 'the confidence, empathy and capability to work, engage and
deal with a wide range of different cultures'.
Developing intercultural skills
WITH THE process of globalisation rapidly accelerating in the 21st century,
employees in many organisations are experiencing more frequent intercultural
encounters (Thomas & Inkson, 2004). As part of a strategy to adapt and
survive in this new environment, organisations are becoming increasingly aware of the
importance of developing intercultural skills (Rockstuhl et al., 2011). These skills are
relevant to expatriate work, managerial transfers and temporary work assignments, and
to those who are working in environments that are culturally diverse.
The Metropolitan Police Service (MPS) arguably serves one of the most diverse
communities in the world. The MPS recognised that, to work effectively with diverse
communities, recruitment decisions needed to assess the behavioural preferences of
candidates. They specifically wanted to develop an online assessment to assess the
'confidence, empathy and capability to work, engage and deal with a wide range of
different cultures'. To deliver this objective, a behavioural framework for Intercultural
Competence was developed and, on this basis, a reliable and valid test was designed to be
used in the sifting stage of Police Constable assessment. To date, over 40,000 applicants
have completed the assessment.
The challenge
London covers an area of 620 square miles and has a population of 7.2 million. This area is
currently home to over 42 communities, where at least 300 languages are spoken and over
14 faiths practised. The MPS has a requirement to work with communities across London
to achieve its commitment to policing by consent, through enhanced engagement
and participation and improved community confidence. A key element of this is for
recruitment decisions to identify new recruits who are able to work effectively with diverse
communities and deliver a service that meets their needs.
The MPS recognised that an assessment could be incorporated into their existing
recruitment process to help them identify individuals who had ‘Intercultural Competence’,
i.e. those who have the potential to work effectively with diverse people and communities.
To achieve this, the organisation engaged a consultancy to define the core psychological
constructs underpinning Intercultural Competence and design a suitable psychometric
tool to measure the preferences and attitudes that underpin this construct.
Development of the behavioural framework
The first step was to undertake a literature review to determine whether there was
a credible scientific basis for creating a tool of this type. Related research, tools and
concepts were identified and reviewed, including Cultural Intelligence (Ang et al., 2011),
The Multicultural Personality (Van Der Zee & Van Oudenhoven, 2013) and Intercultural
Competence (Gertsen, 1990). The literature review identified constructs which had clear
parallels with Intercultural Competence as defined by the MPS and demonstrated that
valid assessment of such constructs was possible.
Four useful models were identified from this review and are outlined below:
Cultural intelligence: Refers to 'an individual's capability to function effectively in
situations characterised by cultural diversity' (Ang et al., 2011, p.582). The Earley and
Ang conceptualisation of the construct incorporates four components: a
'metacognitive' component, a cognitive component, a motivational component and a
behavioural component.
Intercultural competence: Is simply defined as 'the ability to function effectively in
another culture' (Gertsen, 1990, p.341). Intercultural competence is theoretically
composed of three components: an affective dimension, which focuses on psychological
factors; a cognitive dimension, which focuses on how people categorise information
from their external environment; and a behavioural dimension, which refers specifically
to communicative behaviour.
The multicultural personality: Work on 'The Multicultural Personality' has attempted
to define the personality characteristics involved in working successfully in cultures
with different norms and rules (Van Der Zee & Van Oudenhoven, 2013). This model
has drawn on the 'Big Five' factors of personality, focusing on those facets and traits
that are relevant to operating in this environment. The authors identified four such
traits: openness, emotional stability, social initiative and flexibility.
Universal-diverse orientation: Is defined as 'an attitude toward all other persons which
is inclusive yet differentiating in that similarities and differences are both recognised
and accepted; the shared experience of being human results in a sense of connection
with people and is associated with a plurality or diversity of interactions with others'
(Miville et al., 1999, p.292).
To establish the behaviours associated with intercultural competence, 20 interviews were
undertaken with Subject Matter Experts within the organisation. Two separate methods
were used: the Critical Incident Technique, in which interviewees are asked to provide
specific examples of effective and ineffective behaviour in relation to cultural competence;
and the Repertory Grid Technique, in which interviewees are asked to compare and contrast
effective and ineffective individuals in the role in relation to cultural competence.
A content analysis was carried out on the interview outputs. In total, 245 separate
behaviours were identified. Using the literature research and job analysis, these behaviours
were then classified into six constructs by three occupational psychologists. The constructs
identified were:
1. Empathy.
2. Relationship-building.
3. Open-mindedness.
4. Resilience.
5. Flexibility.
6. Orientation towards learning.
Development of test content
Items for the initial trial version of the questionnaire were written drawing upon the
literature underpinning the construct, related scales, and the behavioural
framework. A paired-statement approach was taken to minimise the impact of socially
desirable responding: both statements in a pair are positively worded, but each pair
features a construct-relevant item and a distractor item. Participants are required to
select which of the statements is most like them.
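As an illustration of this paired-statement format, here is a minimal sketch; the item wording and construct label are hypothetical examples invented for illustration, not actual ICCA content.

```python
from dataclasses import dataclass

@dataclass
class PairedItem:
    construct: str   # which of the six constructs the keyed statement taps
    keyed: str       # the construct-relevant statement
    distractor: str  # positively worded, but construct-irrelevant

# A hypothetical item, invented for illustration only.
item = PairedItem(
    construct='Open-mindedness',
    keyed='I enjoy hearing views that differ from my own',
    distractor='I like to keep my workspace tidy',
)

def score(item: PairedItem, chose_keyed: bool) -> int:
    """Credit the item only when the construct-relevant statement is chosen."""
    return 1 if chose_keyed else 0
```

Because both statements are positively worded, neither option is obviously the 'socially desirable' one, which is the point of the design.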
The initial item set was trialled with 296 Police Constables. Job performance data
were also gathered from the trial group so that the validity of the items could feed into
item selection. Items were selected using item facility (i.e. the mean rating
of a given item), the distribution of responses across the rating scale for that item, item
discrimination, and the correlation between the item and job performance ratings given
by managers. On this basis, a final set of 60 items was chosen and norm groups created.
These items, alongside a participant feedback report, were then implemented on an
online platform.
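The item statistics named above can be computed along these lines; this is a minimal sketch over hypothetical arrays, not the consultancy's actual analysis code.

```python
import numpy as np

def item_statistics(responses, criterion):
    """Per-item selection statistics of the kind described above.

    responses: (n_respondents, n_items) array of item scores.
    criterion: (n_respondents,) array of job performance ratings.
    """
    responses = np.asarray(responses, dtype=float)
    criterion = np.asarray(criterion, dtype=float)
    total = responses.sum(axis=1)
    stats = []
    for j in range(responses.shape[1]):
        item = responses[:, j]
        facility = item.mean()                          # item facility
        rest = total - item                             # corrected item-total
        discrimination = np.corrcoef(item, rest)[0, 1]  # item discrimination
        validity = np.corrcoef(item, criterion)[0, 1]   # item-criterion r
        stats.append((j, facility, discrimination, validity))
    return stats
```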
Outcomes
The ICCA has been shown to be significantly predictive of managers' ratings of Police
Constables' behaviour, with a correlation of r = 0.20 (significant at the p < .001 level).
Although modest, this correlation is uncorrected for range restriction. The result indicates
that higher scores on the ICCA were associated with higher job performance ratings.
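For readers unfamiliar with range restriction: because only selected candidates have performance data, the observed validity understates the applicant-pool validity. A standard adjustment is the Thorndike Case II correction for direct range restriction, sketched below; the SD ratio in the example is invented, so the corrected figure is purely illustrative.

```python
def correct_range_restriction(r, sd_restricted, sd_unrestricted):
    """Thorndike Case II correction for direct range restriction.

    r: observed validity in the selected (restricted) sample.
    sd_restricted / sd_unrestricted: predictor standard deviations in
    the selected sample and the full applicant pool respectively.
    """
    u = sd_unrestricted / sd_restricted
    return (r * u) / (1 + r**2 * (u**2 - 1)) ** 0.5

# Illustrative only: if the applicant pool's SD were 1.25x that of the
# selected group, the observed r = 0.20 would correct to roughly 0.25.
print(round(correct_range_restriction(0.20, 1.0, 1.25), 2))
```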
The internal consistency reliability of the ICCA is 0.81, exceeding EFPA guidelines.
Significant correlations have been found between the ICCA and other components
of the selection process. The correlation between the ICCA and a Behavioural and Values
questionnaire was 0.27, with the highest correlation (0.35) between the ICCA and
the value of Fairness and Respect, providing construct validity evidence for the
questionnaire.
A significant correlation of 0.36 has also been found between the ICCA overall score and
a Situational Judgement Test used at the sifting stage.
Summary
The assessment has now been completed by over 40,000 candidates and was used as part
of the sifting process to put ‘cultural competence’ at the heart of the MPS’s recruitment
strategy. Prior to this point, the MPS had undertaken significant activities to encourage
applicants from all communities within London to apply. However, the process had not
fully taken account of the need for Police Constables to interact effectively with people
from communities and backgrounds different to their own. This online tool
addressed this critical requirement. As the Deputy Mayor for Policing and Crime, Stephen
Greenhalgh stated following its launch: ‘this new policy is about competence rather than
colour’.
Additional research is needed to provide more evidence of the online assessment’s
impact on the MPS. Nevertheless, the project demonstrates the value of occupational
psychology and its ability to support the design of assessments that potentially have a
wider positive impact on communities.
The authors
Ali Shalfrooshan is Head of International Assessment R&D at PSI and a member of the
DOP Youth Employment working group.
Philippa Riley, C-Psychol, is Director of Product Development at Propel International.
Mary Mescal is a Managing R&D Consultant and works in PSI’s international R&D team.
References
Ang, S., Van Dyne, L. & Tan, M.L. (2011). Cultural intelligence. In R.J. Sternberg & S.B.
Kaufman (Eds.) The Cambridge handbook of intelligence (pp.582–602). Cambridge, UK:
Cambridge University Press.
Gertsen, M.C. (1990). Intercultural competence and expatriates. International Journal of
Human Resource Management, 1, 341–362. doi: 10.1080/09585199000000054
Miville, M.L., Gelso, C.J., Pannu, R. et al. (1999). Appreciating similarities and valuing
differences: The Miville-Guzman Universality-Diversity Scale. Journal of Counseling
Psychology, 46, 291–307.
Rockstuhl, T., Seiler, S., Ang, S., Van Dyne, L. & Annen, H. (2011). Beyond general
intelligence (IQ) and emotional intelligence (EQ): The role of cultural intelligence
(CQ) on cross-border leadership effectiveness in a globalized world. Journal of Social
Issues, 67, 825–840. doi:10.1111/j.1540-4560.2011.01730.x
Thomas, D.C. & Inkson, K. (2004). Cultural intelligence: People skills for global business.
London: McGraw-Hill.
Van Der Zee, K.I. & Van Oudenhoven, J.P. (2013). Culture shock or challenge? The role
of personality as a determinant of intercultural competence. Journal of Cross-Cultural
Psychology, 44, 928–940. doi:10.1177/0022022113493138
Good practice for the specialist assessor
(dyslexia)
Katrina Cochrane
I HAVE BEEN a specialist teacher and assessor for 20 years now, having trained with
the Dyslexia Institute in 1999, before it became Dyslexia Action. My tutor was the
inspirational Wendy Goldup. I was able to complete my Level 7 Postgraduate Diploma
and gain my Associate Member of the British Dyslexia Association (AMBDA) in one
year (which was an incredible amount of work!) – it now takes a minimum of two years.
When I started assessing, I used the WRAT-3 (Wide Range Achievement Test – Third
Edition), TOWRE (Test of Word Reading Efficiency), CTOPP (Comprehensive Test of
Phonological Processing), Hedderley Sentence Completion test, and Turner and Ridsdale
Digit Memory test. It is interesting that the WRIT (Wide Range Intelligence Test) is still
going strong, although it seems to be that fewer and fewer children know who ‘Coltrane’
is (even the actor) now or who have even seen a tuba. I have assessed in Youth Offending
Teams (YOTs) and young offenders generally get full marks for the word ‘Testify’ though.
I was always drawn to the assessment rather than the teaching side and, although I was
side-tracked into management, I feel I was always an assessor at heart and have returned to
that field. As a reviewer of Assessment Practising Certificate (APC) renewals for the
British Dyslexia Association (BDA), I read many reports, and I hope you find the following
tips useful whether you are an APC holder or write reports with your Level 7 qualification.
1. Be active
Keep your knowledge and Continuing Professional Development (CPD) up-to-date
constantly. Things change so quickly, and I could conceivably be assessing today the
way I had been trained in 1999; without any CPD, I would be unaware of new tests, new
guidance and best practice. If you have AMBDA, then make sure you are ‘Active AMBDA’,
showing your CPD every three years to the BDA; or if you were given an APC but have let
it lapse, consider renewing it. It is no small undertaking to renew your APC (a colleague of
mine recently referred to it as dancing naked on the table!) but the confidence one gets
post-renewal, when you know you are doing it right in the eyes of your peers, is enormous.
2. Keep up-to-date with new guidelines and report formats
SASC (SpLD Assessment Standards Committee) have recently introduced two new report
formats for consultation. Over 300 people submitted their views and the revised formats
were produced in June 2019. The pre- and post-16 report formats can be used in
conjunction with the old formats until June 2020, when the new formats become
mandatory. SASC have also recently introduced an 'amnesty' so that anyone who
completed a course leading to an APC, at any time, can apply for an APC within a fixed
period. This only applies to new applications.
3. Keep up-to-date with new tests
There are several new tests, such as the WRAT-5 and WIAT-III (Wechsler Individual
Achievement Test – Third Edition), that have been approved by SASC. SASC is the
umbrella organisation for the three providers of an APC – BDA, Dyslexia Action and
Patoss (Professional Association of Teachers of Students with SpLD). SASC also have a
subcommittee, STEC (SpLD Test Evaluation Committee), who look at and review all new
tests. The old versions of tests are still usable, e.g. the WRAT-4 and WIAT-II, until the old
test forms are no longer for sale, but the differences should be noted.
4. Use SASC
Even if you are not an APC holder and assess with your Level 7 qualication, the SASC
website (www.sasc.org.uk) is where you will find up-to-date guidance for an assessor.
There is information about trainees writing reports, increased CPD for initial APCs and
copies of all test reviews carried out by STEC. You can also join and attend the AGM and
conferences. New guidance from DfE is that a report carried out at any age can be used
for applications for Disabled Students Allowance (DSA), if the assessor was a current APC
holder at the time of writing and the report was carried out to SASC standards. If you do
not have an APC and assess children, you might want to bear this in mind and inform
parents about this. However, with the ‘amnesty’ you may be able to gain an APC if you
did not have one previously.
5. Do your detective work – even before the assessment itself
The observations you record are so important – the qualitative being, perhaps, more
important than the quantitative information. The background history needs to be very
detailed so that you can make the observations and perhaps include additional tests or
questionnaires. It is important to look at the strategies the student is using for the subtests
(e.g. sub-vocalisation on the TOMAL (Test of Memory and Learning)), look at their pencil
grip when writing, note whether they can concentrate fully, and make notes on these
observations as you go along. If the student is dyspraxic, they may have difficulty following
instructions; look at the pattern of the test results – do they get the first five wrong and
then get the rest correct once they start understanding what they have to do? If the
student has ADHD, do they only start to pay attention when the test gets hard enough?
Equally, with adults, do they
have the ability to use metacognitive visual strategies to do the TOMAL-2 test fairly well?
All of this is vital information.
6. Don’t work in isolation
I still run test results past former colleagues to be sure of my diagnosis. Join forums, go
to conferences and chat to your fellow delegates so that you can start those networks if
you don’t have them.
7. Signpost and refer
If you suspect visual difficulties such as Irlen Syndrome, or conditions such as Attention
Deficit Hyperactivity Disorder (ADHD), Autism Spectrum Disorder (ASD) or dyspraxia/
Developmental Coordination Disorder (DCD), do not attempt to diagnose any condition
that you are not qualified to diagnose. Also refer to the GP if you see any signs of stress
or anxiety.
8. Empower the reader
The reader of the report could be a parent, a student at university, an adult wanting to
understand their difculties after many years of struggling, or a classroom teacher. Make
the reader feel empowered to do more, find out more, and help themselves or the child
they teach. Make your recommendations helpful and detailed so that they can join a Local
Association, buy appropriate resources or apply for DSA. If they need to apply for DSA
and are eligible, explain how they go through that process. Make recommendations that
are pertinent, individual and will really support the person being assessed and the people
around them.
9. Look at strengths as well as weaknesses
Point out what the dyslexic learner is good at, as well as relative weaknesses. Many dyslexic
people feel they are stupid and pointing out the areas that they do well is so important.
Kate Saunders, the previous CEO of the BDA, always used to say ‘Let the child or adult
walk out of the assessment six inches taller than when they arrived’. A recent report I
read concerned an adult who had left school without any qualifications, had taught him/
herself to read and had achieved a Master's qualification. The report made no mention of
how well they had done and gave no indication that they had achieved a huge amount in
difficult circumstances. You could be the only person who really understands the struggle
and can make that person feel proud of what they have achieved.
10. Explain the impact
What is the impact of the identified difficulties in the classroom, in work, or in the
student’s studies? If the person has poor working memory, what does that mean on a
day-to-day basis? Have you actually addressed this in your report? If not, go back and put
it in.
11. Be accurate and adhere to confidentiality at all times
Check and check again, and don't make the mistakes that I sometimes see. The reports
you write may be in existence for decades and be taken from primary to secondary school,
to university and beyond. They may also be used in tribunals. Errors that I see include
calculating an overall general ability score when the discrepancy between subtest scores is
statistically significant. Others are simple addition or cut-and-paste errors.
12. Finally – a note about GDPR
File your test papers and reports accordingly and ensure you have full indemnity
insurance if you work privately.
Twenty years on, I am still as passionate about assessing, and as excited about all the
changes happening, and I hope to be doing this for the next twenty years!
The author
Katrina Cochrane is a Specialist Assessor with 20 years’ experience, and an APC reviewer
for the British Dyslexia Association. She is a Board member of SASC and a Member of
the BDA Accreditation Board. Katrina was the previous Head of Education and Policy at
the BDA before setting up her own training and assessment company, Positive Dyslexia,
in 2016.
Note from the Committee for Test Standards
The BPS are actively working with SASC to ensure that their recommendations are
appropriate for psychological practice. The BPS are also working on a CPD programme
for psychologists on understanding assessment for neurodiversity in general, of which
dyslexia is one condition.
The SASC website provides good information about which current tests measure which
criteria and offers advice on good practice.
For the assessment of adults, the BPS have our own, additional guidelines which can
be found here:
www.bps.org.uk/sites/bps.org.uk/files/Policy/Policy%20-%20Files/DOP%20
Psychological%20Assessment%20of%20Adults%20with%20Specific%20Difficulties.pdf
Submission guidelines
Assessment & Development Matters (ADM) is published by the Psychological Testing Centre.
Its readership is members on the Register of Qualifications in Test Use, holding BPS
Educational, Forensic and Occupational testing qualifications.
The Editorial Team encourage submission of articles with a broad range of appeal,
aimed at test users working in educational, forensic and occupational settings.
For details of how to submit, please visit: https://ptc.bps.org.uk/information-and-
resources/assessment-and-development-matters-adm/submit-article
Submissions to: the Coordinating Editor as an email attachment, saved in MS Word
format: ayshea.king@bps.org.uk
In pursuit of organisational wisdom
Trevor E. Hudson
Key digested message
The concept of ‘wisdom’ is ancient but is something that, as studied by psychologists
as opposed to philosophers, has only recently received attention. If wisdom is
objectively measurable, it has far-reaching implications for how we understand
leadership, leadership development and potential. This study utilised an adapted
version of the Washington University Sentence Completion Test to investigate the
concept of ‘Organisational Wisdom’ (OW).
This study found evidence that the newly developed Organisational Wisdom
Sentence Completion Test had statistically significant correlations with leadership
interview scores and positive correlations with internal measures of potential and
performance. When combined with related research, the implications for identifying
and developing talent are far-reaching.
Background
PSYCHOLOGY, much like any science, has its fair share of myth-busting moments.
Equally, it has a role in adding much-needed clarity and scientific constructs to
'accepted wisdom'. Ironically, one such area is wisdom itself and, while very little
has been written on the subject compared to other areas of psychology, it is an area of
growing interest and research (Ardelt, 2005).
If we accept the expert 'Delphi-method' definition by Jeste et al. (2010), wisdom
should be something that organisations value. It is intelligence, for example, but with
the tempering of experience. Wise-reasoning leaders are able to use unbiased social
judgement, be aware of varied contexts, and appreciate and seek reconciliation of
differing viewpoints (Grossmann et al., 2016). It is insight, but not in a narrow field
(Baltes & Staudinger, 2000; Smith & Baltes, 1990).
The Washington University Sentence Completion Test (WUSCT) measures a feature
of cognitive development, ego development (Loevinger & Wessler, 1970), which increases
with age and has ‘substantial conceptual overlaps’ with wisdom (Richardson & Pasupathi,
2005). Those who score higher on the WUSCT show increased capacity to ‘subvert the
ego’ (managing self) in decision-making and take multiple perspectives when considering
issues (Loevinger & Wessler, 1970).
A version of the WUSCT, adapted for ease of use within organisations, should produce
a measure of Organisational Wisdom (OW), an evolving, complex intelligence that can
be applied to organisational challenges. If this shares the features of Loevinger’s measure
then it could be useful for the identification of leadership talent, without suffering from
some of the contextual issues and biases of many existing measures (Downs & Swailes,
2013; Minbaeva & Collings, 2013; Thunnissen et al., 2013).
Research objectives
This study sought to create and test a beta psychometric measure of OW by adapting the
WUSCT to create an Organisational Wisdom Sentence Completion Test (OWSCT).
The hypothesis was that people who scored more highly on the OWSCT would also
be rated more highly on an organisation's performance and potential measures, and
that OWSCT scores would correlate with perceived leadership potential as measured by a
360-degree feedback tool and leadership interview questions.
The OWSCT should also correlate with age, since wisdom generally increases with age
(Heckhausen et al., 1989; Jeste et al., 2010) and should be gender neutral, since wisdom
is not gendered (Jeste et al., 2010).
Approach
The OWSCT was created by removing many of the sentence stems from the WUSCT that
would appear to have no bearing in an organisational setting (e.g. 'Women are lucky
because…' (Loevinger & Wessler, 1970)) and replacing them with items that similarly
measured cognitive development but had higher face validity for organisations (e.g.
'Emotions…' or 'A good manager…').
Participants are instructed to complete the sentence stems in any way to give a
‘complete’ answer that is ‘true for them’. Manual scoring can take up to 20 minutes per
stem, using a complex scoring guide drawn from the WUSCT guidance.
The OWSCT used 18 sentence stems compared with the WUSCT's 36 – shorter
versions have been tested with success (18 items: Loevinger, 1985; 12 items: Hy et al.,
2013) – and used the detailed construction and scoring of the original WUSCT (Loevinger
& Wessler, 1970) to build a psychometric with an organisational lens.
This study administered the OWSCT to 29 randomly chosen financial services
professionals across a variety of grades. All participants were asked five leadership
decision-making questions, which were then scored on a ten-point scale by a leadership
expert without any knowledge of their OWSCT scores. The OWSCT results were also
compared, where available, with 360-degree feedback data (N=7), performance data
(N=23) and potential ratings (N=16) as part of the annual talent review.
Analysis
It was important to ensure the OWSCT shared key properties of the original WUSCT. The
test should, for example, be gender neutral. The results of a t-test were consistent with
this, showing no significant gender difference (p > 0.05). The test should also correlate
with age, but this was not found in the study (r = 0.009). Participants were aged from
28 to 59.
The main component of the hypothesis was that the OWSCT would have a moderate
correlation with both the leadership interview questions and the 360-degree feedback.
There was no correlation with the 360-degree feedback results (N=7, r=0.015), but the
OWSCT did correlate significantly with the leadership interview scores (N=29, p<0.05,
r=0.432).
When analysing performance data (N=23) there was not a statistically significant
relationship (p>0.1), although the correlation was in the predicted direction (r=0.14).
Potential ratings (N=16) had a stronger positive correlation (r=0.371), although this too
was not significant.
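For concreteness, the reported checks (a t-test for gender neutrality and Pearson correlations with age and interview scores) can be run as sketched below. The score arrays are randomly generated stand-ins, since the study's raw data are not published, and the 18/11 gender split is an assumption for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical OWSCT scores; the split of 18 male / 11 female
# participants is an assumption, not a figure from the study.
owsct_m = rng.normal(5.0, 1.0, 18)
owsct_f = rng.normal(5.0, 1.0, 11)
owsct_all = np.concatenate([owsct_m, owsct_f])
age = rng.integers(28, 60, 29)         # ages 28-59, as in the study
interview = rng.normal(6.0, 1.5, 29)   # scored leadership interviews

# Gender neutrality: a non-significant result (p > .05) is the desired
# outcome here, since the claim is 'no difference between groups'.
t, p = stats.ttest_ind(owsct_m, owsct_f)

# Correlations with age and with the leadership interview scores.
r_age, _ = stats.pearsonr(owsct_all, age)
r_int, _ = stats.pearsonr(owsct_all, interview)
print(f'gender t-test p={p:.3f}; r(age)={r_age:.3f}; r(interview)={r_int:.3f}')
```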
Discussion
As with the WUSCT, the OWSCT reported as gender neutral, but it unfortunately did
not show the expected correlation with age. The OWSCT score should increase with age,
since this is true of the WUSCT and of wisdom in general. The small range of ages in this
study (28 to 59) may serve to explain this; the full range of cognitive levels does not
typically emerge until after middle age (Kegan, 1995, Chapter 5) and lower levels of
cognition are mainly seen in adolescence (Loevinger & Wessler, 1970). This would need to be
monitored in future studies to ensure that the OWSCT has not moved too far away from
the underlying theories upon which it is built.
The study did not find that the OWSCT correlated with the 360-degree feedback tool,
although very few participants in the sample had completed one. Far more promising,
although not of the magnitude predicted, was the correlation with the leadership
questions. This supports the idea that the level of thinking measured by the OWSCT is
analogous with the quality of thought needed to respond to leadership problems and
adds some credence to the idea that those with higher OW will be better problem solvers
in the real world.
Since the OWSCT should be measuring respondents' capacity for dealing with
complexities and ambiguities, it is perhaps unsurprising that there was a positive
correlation with participants' perceived potential. Although the correlation was not large
or significant in this sample (which was quite small due to inconsistently applied talent
processes), it was larger than both the 360-degree feedback and the performance
correlations, which might suggest that when people are assessing potential they are able
(similarly to the OWSCT) to remove context from their assessment.
Implications of organisational wisdom
When looked at in conjunction with existing work using the WUSCT and variant
measures, the results are promising. More specifically, OW, if replicated in further
research, could work as a partial proxy for the key leadership skills of perspective-taking,
non-ego decision-making, working with complexity and uncertainty, and ultimately 'good
judgement'. This would give us the opportunity to recruit and develop not only smart
leaders but also wise ones.
Combining the findings from this study with other research, our 'organisational sages',
or 'post-conventional' leaders (those at the higher orders of development), are more
likely to:
• Make decisions that incorporate a number of perspectives (e.g. generational, gender,
economic, social) (Kegan, 1995, Chapter 6; McAuliffe, 2006; Solomon, Marshall &
Gardner, 2005).
• Manage complexity more effectively (Beck & Cowan, 2014, Chapter 5).
• Show up with greater self-awareness (Beck & Cowan, 2014, Chapter 6; Grant, 2007).
• Generate more innovation (De Meyer, 2007).
• Be transformational (Rooke & Torbert, 2008).
• Demonstrate higher empathy and a sense of universal care (Beck & Cowan, 2014,
Chapter 5; Carlozzi, Gaa & Liberman, 1983).
Having leaders with these capabilities would be a boon to any organisation, as long as
they can be identified and developed. Unfortunately, such individuals are hard to find
(fewer than 10 per cent of the population), developmental journeys for changing whole
cognitive levels take several years (Kegan, 1995, Chapter 8), and transformational
development journeys may not be possible within one organisation (e.g. they may require
sabbaticals).
To have the required validity for talent identification, the OWSCT will need refinement
using more varied industries and larger samples. Should growing evidence continue to
support the idea of 'Organisational Wisdom', it will have implications for leadership
development: consider the ineffectiveness of an executive coach, no matter how many
coaching qualifications they hold, who has less wisdom than the person they are coaching
(Grant, 2007), versus the support of a coach with more.
The early evidence for OW as a legitimate and wide-ranging construct is promising, if
a little piecemeal at this stage. The next stage of research will rely on longitudinal tracking
of the wisest leaders and their efficacy over time.
The author
Trevor Hudson is a learning and leadership expert and the Senior Learning Business
Partner at King, the games and entertainment specialists.
References
Ardelt, M. (2005). Foreword. In R.J. Sternberg & J. Jordan (Eds.) A handbook of wisdom:
Psychological perspectives (pp.xi–xviii). New York: Cambridge University Press.
Baltes, P.B. & Staudinger, U.M. (2000). Wisdom: A metaheuristic (pragmatic) to
orchestrate mind and virtue toward excellence. The American Psychologist, 55(1),
122–136.
Beck, D.E. & Cowan, C.C. (2014). Spiral dynamics: Mastering values, leadership and change.
Hoboken, NJ: John Wiley & Sons.
Carlozzi, A.F., Gaa, J.P. & Liberman, D.B. (1983). Empathy and ego development. Journal
of Counseling Psychology. https://doi.org/10.1037/0022-0167.30.1.113
De Meyer, A. (2007). Strategic epistemology – innovation and organizational wisdom.
In E.H. Kessler & J.R. Bailey (Eds.) Handbook of organizational wisdom (pp.357–375).
London: Sage.
Downs, Y. & Swailes, S. (2013). A capability approach to organizational talent
management. Human Resource Development International, 16(3), 267–281.
Grant, A.M. (2007). Reections on coaching psychology. In J. O’Connor & A. Lages
(Eds.) How coaching works (pp.209–242). A & C Black.
Grossmann, I., Sahdra, B.K. & Ciarrochi, J. (2016). A heart and a mind: Self-distancing
facilitates the association between heart rate variability, and wise reasoning. Frontiers
in Behavioral Neuroscience, 10, 68.
Heckhausen, J., Dixon, R.A. & Baltes, P.B. (1989). Gains and losses in development
throughout adulthood as perceived by different adult age groups. Developmental
Psychology, 25(1), 109–121.
Hy, L.X., McLean, D.C., Brown, T., Wert, L. & Bell, L.G. (2013). Reliabilities of rating
rules for the WUSCT 12-item form: New ogive rule. PsycEXTRA Dataset.
https://doi.org/10.1037/e589352013-001
Jeste, D.V., Ardelt, M., Blazer, D., Kraemer, H.C., Vaillant, G. & Meeks, T.W. (2010).
Expert consensus on characteristics of wisdom: A Delphi method study. The
Gerontologist, 50(5), 668–680.
Kegan, R. (1995). In over our heads: The mental demands of modern life. Cambridge, MA:
Harvard University Press.
Loevinger, J. (1985). Revision of the sentence completion test for ego development.
Journal of Personality and Social Psychology, 48(2), 420–427.
Loevinger, J. & Wessler, R. (1970). Measuring ego development. San Francisco: Jossey-Bass
Inc Pub.
McAuliffe, G. (2006). The evolution of professional competence. In C. Hoare (Ed.)
Handbook of adult development and learning (pp.476–496). New York: Oxford University
Press.
Minbaeva, D. & Collings, D.G. (2013). Seven myths of global talent management. The
International Journal of Human Resource Management, 24(9), 1762–1776.
Richardson, M.J. & Pasupathi, M. (2005). Young and growing wiser: Wisdom during
adolescence and young adulthood. In R.J. Sternberg & J. Jordan (Eds.) A handbook of
wisdom: Psychological perspectives, pp.139–159. New York: Cambridge University Press.
Rooke, D. & Torbert, W.R. (2008). Organizational transformation as a function of CEO’s
developmental stage. Organization Development Journal, 16(1), 11–28.
Smith, J. & Baltes, P.B. (1990). Wisdom-related knowledge: Age/cohort differences
in response to life-planning problems. Developmental Psychology. https://doi.
org/10.1037//0012-1649.26.3.494
Solomon, J.L., Marshall, P. & Gardner, H. (2005). Crossing boundaries to generative
wisdom: An analysis of professional work. In R.J. Sternberg & J. Jordan (Eds.) A handbook
of wisdom: Psychological perspectives. https://doi.org/10.1017/cbo9780511610486.012
Thunnissen, M., Boselie, P. & Fruytier, B. (2013). Talent management and the relevance
of context: Towards a pluralistic approach. Human Resource Management Review, 23(4),
326–336.
Assessment & Development Matters Vol. 11 No. 3 Autumn 2019 23
TEST REVIEW UPDATE
Since the previous edition of ADM, the BPS Test Review team have published their
reviews of OPTO (published by Master International A/S, 2017) and the EQ 360 2.0
(Emotional Quotient 360; published by Multi-Health Systems Inc, 2011).
OPTO is a comprehensive online personality measure, suitable for use with adults in
occupational contexts (particularly for specialists, managers and executives) for selection,
recruitment, development and personal assessment. The measure is based on the Five
Factor Model and provides information on 20 subscales within eight dimensions: Agility,
Compliance, Cooperation, Delivery, Efficiency, Influence, Innovation and Resilience.
The test contains 155 items (14 to 24 items per dimension), rated on a 7-point Likert
scale, and takes approximately 20 to 30 minutes to complete. RQTU members can sign
into the PTC website and read the full review at https://ptc.bps.org.uk/test-review/opto.
The EQ 360 2.0 is a multi-rater version of the EQ-i 2.0, which measures emotional
intelligence, and is primarily administered online. It is suitable for use with adults
and is intended for use in occupational and educational contexts. The EQ 360 2.0
comprises 15 subscales organised into 5 composites: Self-perception (Self-regard;
Self-actualisation; Emotional self-awareness), Self-expression (Emotional expression;
Assertiveness; Independence), Interpersonal (Interpersonal relationships; Empathy;
Social responsibility), Decision making (Problem solving; Reality testing; Impulse
control) and Stress management (Flexibility; Stress tolerance; Optimism). There is also
a Wellbeing indicator (Happiness) and four validity scales (omission rate, inconsistency
index, positive impression management and negative impression management). The
test contains 133 items, rated on a 5-point Likert scale, and takes approximately 20 to 40
minutes to complete. RQTU members can sign into the PTC website and read the full
review at https://ptc.bps.org.uk/test-review/emotional-quotient-360-0.
Jo Horne
Journey to the forensic register &
beyond… reflections and insights
Julie-Anne Aspin
IN JANUARY 2015, the British Psychological Society (BPS) launched its Qualifications in
Testing in a Forensic Context. In response, the National Offender Management Service
(NOMS), now HMPPS (Her Majesty's Prison and Probation Service), established a
national steering group in forensic testing, the aim being to consider a strategic response
to the BPS Qualifications. Looking back, I seem to remember feeling pretty mystified
at the time, unsure what all of this really meant for me and my colleagues as forensic
practitioners. After attending a couple of early stakeholder engagement events, it slowly
dawned on me that the issue wasn't going to go away, particularly given that the
qualifications were deemed to be 'best practice' guidelines.
NOMS therefore adopted those competence standards, with the intention of establishing
its own internal register to meet and evidence adherence to them. I remember the
discomfort and growing anxiety I felt at the time with the realisation that not only
had I not completed any Level A or Level B occupational testing (unlike some of my
colleagues), but I was also not on the Register of Qualified Test Users (RQTU). I felt
a sense of dissonance: I really wanted to be part of the steering group taking this
initiative forward, but was aware that I did not hold the qualifications myself. This
realisation quickly served to move me from 'contemplation' to 'preparation', to use the
Prochaska & DiClemente (1983) stages-of-change terminology.
I decided to look into the qualifications more closely, and also made contact with key
colleagues who were leading, or linked to, the steering committee. I discovered that,
following the PTC announcement that the grandparenting window for the BPS Forensic
Context test user qualifications was due to close on 31 December 2016, I was deemed a
'Route 2' applicant, which meant I needed to submit a portfolio of evidence showing how
I met the standards in forensic testing contexts. Gulp!
I was required to complete the portfolio submission used by applicants applying for the
Level 1 Assistant Test User and Level 2 Test User qualifications in forensic testing
contexts. I have to say, there was a feeling of sheer horror at the thought of compiling
my exemplar portfolio in order to meet the criteria to be awarded Level 1 and Level 2,
enabling registration with the RQTU, particularly in the timescale I had available, which
amounted to a few months.
Despite my anxiety, I took time to reflect on the fact that I was part of a national
steering committee reviewing test use within our forensic setting, and that I needed to
keep focused and mindful not only of what I wanted to achieve personally and
professionally, but also of what we wanted to achieve as a committee in order to support
colleagues seeking Level 1 and Level 2 within our organisation. In reality, what better
way to advise and support others than taking the journey myself?
As my work on evidencing the portfolio progressed, so too did my own sense of
self-efficacy. I began to believe that I could complete this portfolio: I just needed to
remain motivated and driven, and to maintain the momentum in compiling the supporting
evidence. My inner drive, coupled with the encouragement and support of committee
members, enabled me to press on with what was still a very daunting task. I must admit,
though, that the information provided by the PTC about what to submit, with examples of
suitable evidence, was very helpful and assisted me in selecting appropriate examples –
many of which I had forgotten about, or had filed away in a dust-covered folder from my
university days!
I was required to review each of the modules within both the Level 1 Assistant Test
User (ATU) and Level 2 Test User (TU) templates, and to provide evidence of my
competence in these areas and of how I felt I had met and demonstrated it. Admittedly,
the whole process felt rather overwhelming at the outset, due to the sheer breadth of
work and experience I needed to review and comment on. Christmas was also not far away,
so it was important for me to block out time and space to complete this work.
I made a plan and tackled one module at a time, starting with the ATU template. I took
time to reflect on what each area of competence was, and on what I could say or show
to demonstrate that I met it. As I worked through the template systematically, it
became evident that many elements were, in reality, easily evidenced through examples of
current work and practice, or by retrieving examples of older work. When I say 'older',
I truly mean postgrad!
I think what initially struck me was the combination of realising what I needed to
demonstrate competence in, and knowing what I could use to evidence it. I continued in
this vein, working through the competencies systematically; the process kept me focused
and enabled me to draw supporting evidence from my day-to-day practice.
Before long I had completed the portfolio submission template for the Level 1
ATU qualification. This really boosted my confidence and served as a positive platform
for moving on to the Level 2 Test User modules. I felt these modules were more
straightforward to evidence: either through direct training I had attended, through my
clinical reports, or through my work in discrete secondments – for example, working
within the Probation Service to advise staff on how best to engage and assess cases
falling 'in scope' of the Offender Personality Disorder (OPD) pathway.
For some areas of the template I had a number of pieces of evidence to choose from,
reflecting my broader knowledge and experience within these elements. Reflecting on
the process as a whole, I could also see where I had less supporting evidence and more
'gaps', which was helpful in focusing some of my future CPD needs.
I am pleased to say that my completed portfolio was sent on to the PTC verifiers on
29 December 2016. I am even more pleased to say that on 16 May 2017 I was thrilled
to receive the news that my application for Test User: Forensic Context Route 2 had
successfully passed. All of my hard work and commitment had paid off!
Through the process of submitting my portfolio I was able to truly unpick and
understand why competence as an Assistant Test User and Test User is so crucial to
everything we do as practitioners using psychological tests. Within the realms of testing,
it was evident just how quickly we can lose much of this understanding over time, through
potential 'slippage' in our practice, or by doing things in a certain way because that's
'just what you do', rather than truly reflecting on why and how we need to use tests and
to ensure confidence in those we are testing, as well as in our other stakeholders. As time
passed I was extremely fortunate to have the opportunity to meet my own professional
development aim of becoming a Verified Assessor (VA). Having a cohort of VAs was a
crucial first step for the steering committee, to ensure we had the necessary infrastructure
to successfully roll out the in-house materials and workshops we have developed.
I think my ultimate success, and my journey of evolving as a Verified Assessor, was the
final 'acid test': not only was I completely on board with the BPS testing standards, but I
also wanted to be an advocate for others looking to demonstrate and develop competence
in this area. I can honestly say my journey had almost come full circle! These nationally
recognised qualifications are essential for anyone using psychological tests who wishes to
be on the Register of Qualified Test Users (RQTU), and many employers and clients refer
to the RQTU to check the competence of test users.
As an ambassador for the Assistant Test User and Test User Qualifications I look
back and wonder how and why it could have taken me, and our profession, so long to get
to this point! I am truly excited about the months ahead and the committee's continued
roll-out of the Qualifications, to ensure that any professional using tests within a
forensic context is appropriately qualified and adheres to these standards of competence.
Author
Julie-Anne Aspin works within HMPPS London Psychology Services and is an HCPC
Registered Forensic Psychologist and Verifier for the British Psychological Society's
Psychological Testing Centre.
Good practice
This article can also be found on the PTC website.
Constraints on disclosure and sharing of raw data from psychometric tests and
structured professional judgements (SPJ)
The last 12 to 18 months have seen an increase in formal requests for disclosure of 'raw'
psychometric testing data and information, such as score sheets, rating grids and other materials
used by test users in their assessment and reporting activities. This has been particularly notable in
the forensic domain, but also in others. Because of this, CTS has provided a position statement
based on existing standards, EFPA guidelines, and ITC published standards and policy.
This short guideline contains the position formally agreed by CTS in respect of this
area. RQTU Registrants and Psychologist Test Users are therefore encouraged to note
the agreed approach to this, at times challenging, aspect of our practice and roles, and
to share this position statement wherever relevant.
1. Issue
The use of psychometric testing and structured professional judgements (which the BPS
Psychological Testing Centre treats as psychometric tests in, for example, the Forensic
Context Testing Standards) is common, and is often directed by the instructing authority
in the process of preparing testing outcome reports for formal administrative and other bodies.
This process by necessity involves the collation of information and data from the test
taker, which informs the expert opinion regarding the outcome of the assessment. The
outcome of testing forms an integral part of the overall report provided and the
recommendations made.
If access to the data gathered during testing is requested by the authority, certain
constraints apply which mean that this data can only be shared on request in very limited
circumstances.
2. Position
2.1. The International Test Commission (ITC) Test Users Guide and the ITC Handbook
provide the basis for responses to requests for such data following testing. Both ITC
publications draw together international standards for test use and guidelines that
reflect international best practice. They incorporate the BPS Testing Standards,
EFPA Testing Standards, Canadian Psychological Testing Standards, Australian Testing
Standards and APA Testing Standards, amongst others. As all the tests we commonly
use are of UK, wider European, American, Australian or Canadian origin, we can take
the ITC as the relevant reference point.
2.2. The ITC Handbook states at Chapter 29 (9.04):
'(a) The term test data refers to raw and scaled scores, client/patient responses to test
questions or stimuli, and psychologists' notes and recordings concerning client/patient
statements and behaviour during an examination. Those portions of test materials that
include client/patient responses are included in the definition of test data. Pursuant to a
client/patient release, psychologists provide test data to the client/patient or other persons
identified in the release. Psychologists may refrain from releasing test data in order to protect
a client/patient or others from substantial harm or misuse or misrepresentation of the data
or the test, recognising that laws may regulate the release of confidential information under
these circumstances. (b) In the absence of a client/patient release, psychologists provide data
only as required by law or court order.'
2.3 The ITC Test User Guidelines similarly, but in greater detail, describe testing as:
'Any attempt to provide a precise definition of a "test" or of "testing" as a process, is likely to fail
as it will tend to exclude some procedures that should be included and include others that should
be excluded. For the purpose of these Guidelines, the terms "test" and "testing" should be interpreted
broadly. Whether an assessment procedure is labelled a "test" or not is immaterial. These
Guidelines will be relevant for many assessment procedures that are not called "tests" or that seek
to avoid the designation "test". Rather than provide a single definition, the following statements
attempt to map out the domain covered by the Guidelines.
Testing includes a wide range of procedures for use in psychological, occupational and
educational assessment. Testing may include procedures for the measurement of both
normal and abnormal or dysfunctional behaviours.
Testing procedures are normally designed to be administered under carefully controlled or
standardised conditions that embody systematic scoring protocols. These procedures provide
measures of performance and involve the drawing of inferences from samples of behaviour.
They also include procedures that may result in the qualitative classification or ordering of
people (e.g. in terms of type).
Any procedure used for ‘testing’, in the above sense, should be regarded as a ‘test’,
regardless of its mode of administration; regardless of whether it was developed by a
professional test developer; and regardless of whether it involves sets of questions, or requires
the performance of tasks or operations (e.g. work samples, psycho-motor tracking tests).
Tests should be supported by evidence of reliability and validity for their intended purpose.
Evidence should be provided to support the inferences that may be drawn from the scores
on the test. This evidence should be accessible to the test user and available for independent
scrutiny and evaluation. Where important evidence is contained in technical reports that
are difficult to access, fully referenced synopses should be provided by the test distributor.
The test use Guidelines presented here should be considered as applying to all such
procedures, whether or not they are labelled as “psychological tests” or “educational tests”
and whether or not they are adequately supported by accessible technical evidence.
Many of these Guidelines will apply also to other assessment procedures that lie outside
the domain of “tests”. They may be relevant for any assessment procedure that is used in
situations where the assessment of people has a serious and meaningful intent and which,
if misused, may result in personal loss or psychological distress (for example, job selection
interviews, job performance appraisals, diagnostic assessment of learning support needs).
The Guidelines do not apply to the use of materials that may have a superficial resemblance
to tests, but which all participants recognise are intended to be used only for purposes of
amusement or entertainment (e.g. life-style inventories in magazines or newspapers).' 1
2.4 The ITC Test User Guidelines further describe the requirements for communication
and management of test results:2
'2.8. Communicate the results clearly and accurately to relevant others
Competent test users will:
2.8.1. Identify appropriate parties who may legitimately receive test results.
1 https://www.intestcom.org/files/guideline_test_use.pdf p.13
2 https://www.intestcom.org/files/guideline_test_use.pdf p.22
2.8.2. With the informed consent of the test takers, or their legal representatives, produce written or
oral reports for relevant interested parties.
2.8.3. Ensure that the technical and linguistic levels of any reports are appropriate for the level of
understanding of the recipients.
2.8.4. Make clear that the test data represent just one source of information and should always be
considered in conjunction with other information.
2.8.5. Explain how the importance of the test results should be weighted in relation to other
information about the people being assessed.
2.8.6. Use a form and structure for a report that is appropriate to the context of the assessment.
2.8.7. When appropriate, provide decision-makers with information on how results may be used to
inform their decisions.
2.8.8. Explain and support the use of test results used to classify people into categories (e.g. for
diagnostic purposes or for job selection).
2.8.9. Include within written reports a clear summary and, when relevant, specific
recommendations.
2.8.10. Present oral feedback to test takers in a constructive and supportive manner.’
2.5 Therefore psychologists, those in training and/or under supervision, and test
users should not release raw test data (as defined above) other than directly to
another appropriately competent professional/test user, and with the consent of
the test taker.
2.6 Test User professional reports are provided to inform and summarise the
outcome(s) of assessments and to offer recommendations based on expert opinion
arising from the use of tests. The information gathered during testing and
assessment will include collateral information and reasoned descriptors for ratings
(often in a grid format), which may be recorded to inform the assessment outcome.
As described at 2.2–2.4 above, this constitutes raw data and should not be disclosed
other than as set out at 2.5 above, or under a relevant court order.
2.7 In the case of other frequently used tests, such as the WAIS-IV, ADOS, PAI, MMPI,
MCMI and BDI, which are all 'pencil and paper' tests, we should treat the raw and
scaled scores similarly in terms of disclosure and reporting, as per 2.2 above.
DEFINITIONS AND DISCUSSIONS
Machine learning
The AI field is continually developing, and machine learning is one of its cornerstones.
Essentially, it is all about constructing algorithms which enable a device to revise its
predictions as a result of data input – to 'learn' from its experiences. To do this it
adopts several strategies, such as reducing the diversity of input data by clustering
together items which have elements in common; adopting higher-order classifications
based on perceived similarities of response or characteristics; or detecting anomalies
in large data inputs. The process may be researcher-driven (known as supervised
machine learning) or data-driven (unsupervised learning), where the algorithms
concerned become operative whenever relevant patterns or anomalies in the input are
detected, without the need for human input.
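As a minimal illustration of the distinction (our sketch, not drawn from any particular psychometric system; the item scores and pass/fail labels below are invented), the same matrix of responses can be clustered without labels, or used to train a model against labelled outcomes:

```python
# A hedged sketch using scikit-learn; the data are hypothetical item scores.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
scores = rng.normal(size=(200, 10))              # 200 respondents, 10 items
passed = (scores.mean(axis=1) > 0).astype(int)   # invented outcome labels

# Unsupervised (data-driven): cluster respondents with no labels supplied.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)

# Supervised (researcher-driven): learn from labelled outcomes.
model = LogisticRegression().fit(scores, passed)
print(clusters[:10], model.predict(scores[:5]))
```

Either way, the algorithm revises its internal parameters as data arrive; the difference lies in whether a human supplies the target labels.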
Some researchers have argued that machine learning, in the psychometric field, is
nothing new: that statistical techniques such as cluster analysis and factor analysis are
doing effectively the same thing, and have been in use since psychometric testing first
began. Examples of this might be the use of machine learning focused on large-scale
educational assessment, such as the e-assessments in use in the US. It is a moot point
how much relevance these types of psychometrics have for the UK context, where
significant epistemological differences in educational content mean that the use of
automated essay-marking programmes, for example, is much less common.
Others argue that the large-scale opportunities in modern data analysis make
modern machine learning so very different as to count as a novel application. More
significantly for the UK context, it has been claimed that combining machine-learned
interpretations of Big Five behavioural traits with Big Data analysis, using
information generated by social media use, will effectively obviate psychometrics
altogether. Analysis of social media use, it is claimed, allows evaluation of collaborative
problem-solving, social regulation and so on. Whether this is realistic or not (how are the
Big Five behavioural traits being evaluated anyway? And what attempts have been made
to assess the validity of the claim?), it may be a perception that we, as an industry, need
to challenge directly.
Pioneers and landmarks in intelligence testing
Towards a consensus model of IQ:
John L. Horn and John B. Carroll
Hugh McCredie
Key digested message
This article tracks the emergence of a consensus three-level (or 'stratum') hierarchical
model of cognitive ability, consisting of narrow abilities (Level 1/Stratum I), broad
abilities (Level 2/Stratum II) and a single general ability (Level 3/Stratum III).
Introduction
THE SEVENTH article in this series (ADM 10:3) reviewed the immediate post-
WWII attempts by Vernon and Cattell to explore the existence of hierarchy in
cognitive abilities. Vernon postulated a single general factor, g, at the highest level,
subsuming two major factors, Verbal-educational (v:ed) and Practical (k:m), at the next
level, which in turn transcended the minor factors and then the specific factors. At the
apex of Cattell's flatter hierarchy were two facets of g: fluid (gf) and crystallized (gc)
general ability.
John Leonard Horn (1928–2006) completed his PhD thesis (unpublished, 1965)
under Cattell at the University of Illinois. Horn & Cattell (1966) reported:
'evidence for hypotheses stipulating that general visualization, fluency, and speediness
functions, as well as fluid and crystallized intelligence functions, are involved in the
performances commonly said to indicate intelligence.' (p.253)
Horn (1988) gave the full list of higher-order factors comprising what became known as
the Cattell-Horn model as follows:
Gc: Knowledge or Crystallized Intelligence.
Gf: Broad Reasoning or Fluid Intelligence.
Gv: Broad Visual Intelligence.
Ga: Broad Auditory Intelligence.
SAR: Short-Term Acquisition and Retrieval.
TSR: Long-Term Storage and Retrieval.
Gs: Speed in Dealing with Intellectual Problems and CDS [Correct Decision Speed].
Gq: Quantitative Thinking.
Horn made it clear that he and Cattell did not object to the Spearman g (general ability)
factor in principle, but that:
'the problem is that the substantive nature of the factor varies from one study, or one
application, to another… none of these factors represents the entire repertoire of human
abilities or the same general factor common to all mental tests.' (p.651)
John Carroll (1916–2003) was an American psychologist specialising in psychometrics,
latterly at the University of North Carolina at Chapel Hill. In his massive study Human
cognitive abilities: A survey of factor-analytic studies (Carroll, 1993) he said:
'This book is… an outcome of work I started in 1939, when… I became aware of L.L.
Thurstone's… "primary mental abilities"… I sensed the field's need for a thoroughgoing
survey and critique of the voluminous results in the factor-analytic literature on cognitive
abilities.' (p.vii)
Human cognitive abilities: A survey of factor-analytic studies (1993)
The starting point of Carroll's study was his compilation of 'a file… reporting correlational
or factor-analytic investigations of cognitive abilities' (p.78), from which he prioritised
450+ datasets of different cognitive test responses for detailed reanalysis of their factors.
As he explained, 'The majority of the datasets employed variables in only one or a few
cognitive domains' (p.121). Carroll used a form of analysis involving rotation of factors
to 'simple structure' that minimised the loading of variables onto a multiplicity of factors.
'When simple structure first-order factors were found to be significantly correlated… they
were subjected to higher-order factor analysis' (p.89).
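To make the higher-order step concrete, here is a minimal sketch in Python (our illustration, not Carroll's code; the correlation matrix among four first-order factors is invented). A principal-axis style extraction of the dominant eigenvector of that matrix approximates the loadings of the first-order factors on a single second-order factor:

```python
# Hedged sketch: factoring the correlations among correlated first-order
# factors to approximate one higher-order (second-order) factor.
import numpy as np

# Invented correlations among four hypothetical first-order ability factors
phi = np.array([[1.00, 0.55, 0.48, 0.40],
                [0.55, 1.00, 0.52, 0.38],
                [0.48, 0.52, 1.00, 0.35],
                [0.40, 0.38, 0.35, 1.00]])

eigvals, eigvecs = np.linalg.eigh(phi)   # eigh suits symmetric matrices
top = np.argmax(eigvals)                 # index of the dominant eigenvalue
# Approximate loading of each first-order factor on the second-order factor
loadings = np.abs(eigvecs[:, top]) * np.sqrt(eigvals[top])
print(np.round(loadings, 2))             # here, all in the .65-.85 range
```

Carroll's own reanalyses were of course far more elaborate, but the principle is the same: the correlations among oblique factors are themselves treated as data to be factored.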
The initial reanalysis of the datasets yielded 2272 first-order factors, 542 second-order
factors, and 36 third-order factors… a total of 2850 factors (p.135). From this point:
'the factors that appeared to be similar, from different datasets, were considered together
and in many cases reinterpreted in the light of detailed examination of the variables (or
factors) having high loadings on them. On this basis, factors were classified into broad
domains to be considered.' (p.90)
The initial broad domains which Carroll investigated were those suggested by Horn and
others: (1) Language, (2) Reasoning, (3) Memory and Learning, (4) Visual, (5) Auditory,
(6) Idea generation, (7) Speed, (8) Knowledge, (9) Psychomotor skill, and a miscellaneous
category. Table 1 summarises the first- and second-order factors in Carroll's model as they
emerged from his factor analyses of these domains (pp.599–615).
Table 1: Carroll's Level 1 and Level 2 factors

Level 2 factor | Level 1 factor | Data sets | Loading on L2
Fluid Intelligence (Gf) | Induction | 19 | .64
 | Visualisation | 10 | .62
 | Sequential reasoning | 7 | .55
 | Quantitative reasoning | 6 | .65
 | Ideational fluency | 3 | .60
 | Fluid intelligence | 2 | .54
 | Spatial relations | 2 | .46
Crystallized Intelligence (Gc) | Verbal ability | 23 | .71
 | Language development | 11 | .78
 | Reading comprehension | 7 | .75
 | Sequential reasoning | 7 | .69
 | General information | 5 | .73
 | Ideational fluency | 5 | .68
 | Spelling | 5 | .67
 | Numerical facility | 5 | .55
General memory (Gy, aka Gsm) | Associative memory | 13 | .66
 | Memory span | 8 | .36
 | Learning ability | 5 | .56
 | Miscellaneous | 5 | .52
 | Meaningful memory | 5 | .46
 | Free recall memory | 3 | .79
Broad visual perception (Gv) | Visualisation | 22 | .77
 | Spatial relations | 16 | .60
 | Mechanical knowledge | 4 | .70
 | Perceptual speed | 3 | .47
Broad auditory perception (Gu, aka Ga) | Resistance to audio stimulus distortion | 2 | .44
Broad retrieval ability (Gr, aka Glr) | Ideational fluency | 31 | .68
 | Originality/creativity | 7 | .58
 | Fluency of expression | 4 | .76
 | Figural fluency | 4 | .67
 | Sensitivity to problems | 4 | .55
 | Associational fluency | 4 | .52
 | Writing ability | 3 | .81
 | Figural flexibility | 3 | .63
 | Oral production | 3 | .53
Broad Speediness (Gs) | Perceptual speed | 27 | .65
 | Numerical facility | 8 | .66
 | Choice reaction time | 7 | .81
 | Speed of test performance | 6 | .70
 | Writing speed | 5 | .63
 | Speed of mental comparison | 4 | .76
 | Visualisation | 4 | .73
 | Movement time in RT paradigms | 4 | .53
 | Multi-limb coordination | 3 | .60
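As a reading aid (our note, not Carroll's): each loading can be read as the correlation between a Level 1 factor and the Level 2 factor above it, so its square gives the proportion of variance shared. For example, for Induction on Fluid Intelligence (Gf):

\[
\lambda^{2} = 0.64^{2} \approx 0.41,
\]

that is, on this reading Gf accounts for roughly 41 per cent of the variance in the Induction factor.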
Carroll contrasted his model with those of Vernon and Cattell, agreeing with Vernon
in recognising a very broad factor, g, at Level 3, which derived from 'the common factor
variance of the second-stratum factors of Cattell's model', as extended by Horn, above.
However, he rejected Vernon's v:ed and k:m factors as probably different mixtures of
broad factors at Level 2 (pp.638–639). He also commented that the 'intelligences' described
by Gardner show a fairly close correspondence with the broad domains of ability
represented by factors found at Level 2 with ‘linguistic intelligence’ corresponding best to
Gc, ‘musical intelligence’ to Gu/Ga, ‘logical-mathematical intelligence’ to Gf and ‘spatial
intelligence’ to Gv (p.641).
The higher-stratum structure of cognitive abilities: Current evidence supports g and about ten
broad factors (2003)
This paper, published in the year of Carroll's death, attempted to resolve the issue of
whether there was a third level of cognitive ability, similar to Spearman's g, above and
beyond Cattell's expanded Gf-Gc model (Horn, 1988), which informed McGrew et al.'s
(1991) 16-scale version of the Woodcock-Johnson Psycho-Educational Battery (WJ-R).
Carroll conducted both exploratory and confirmatory factor analyses on a large
dataset (N=2261) reported in the 1991 WJ-R technical manual. The results are
reported in Table 2.
Table 2: Amounts of variance explained by exploratory and confirmatory factor analyses of WJ-R data

Factor | g | Glr | Gsm | Gs | Ga | Gv | Gc | Gf | Gq
Level | 3 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2
Exploratory covariances % | 51.07 | 5.31 | 6.55 | 7.51 | 5.68 | 4.35 | 7.63 | 3.10 | 8.76
Confirmatory covariances % | 61.94 | 5.11 | 6.52 | 8.82 | 4.25 | 3.25 | 3.62 | 2.42 | 3.69

Key: g: General Intellectual Ability; Glr: Long-Term Retrieval; Gsm: Short-Term Memory; Gs:
Processing Speed; Ga: Auditory Processing; Gv: Visual-Spatial Thinking; Gc: Comprehension-
Knowledge; Gf: Fluid Reasoning; Gq: Mathematics.
Thus, the amount of variance explained by the Level 3 factor, g, lies between 51 and 62 per
cent. The amount of additional variance explained by each Level 2 factor indicates its
proximity to g: the lower the amount, the closer the relationship. On both indices Gf is the
closest to g, but it is still distinctive.
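A quick arithmetic check (ours) on Table 2 confirms that the nine entries in each row jointly account for essentially all of the explained common variance; for the exploratory row:

\[
51.07 + 5.31 + 6.55 + 7.51 + 5.68 + 4.35 + 7.63 + 3.10 + 8.76 \approx 99.96\%,
\]

and the confirmatory row sums similarly to about 99.6 per cent.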
Consensus at last?
McGrew (2005) reported that in 1999, the year after Cattell's death, Horn and Carroll
had met privately:
'to seek a common, more meaningful umbrella term that would recognise the strong
structural similarities of their respective theoretical models, yet also recognize their
differences. This sequence of conversations resulted in a verbal agreement that the phrase
"Cattell-Horn-Carroll [CHC] theory of cognitive abilities" made significant practical sense,
and appropriately recognized the historical order of scholarly contribution of the three
primary contributors.' (pp.148–149)
Table 2, above, gives empirical support to the CHC theory which, according to Kaufman
(2009, p.91), ‘has formed the foundation for most contemporary IQ tests’.
The author
Dr Hugh McCredie CPsychol, FBPsS, FCIPD is an independent researcher and writer. His
most recent publications are McCredie, H. (2018) Improving managerial talent: Practical
psychology for human resourcing and learning & development professionals and a chapter in
Cripps, B. (Ed.) (2017) Psychometric testing: Critical perspectives.
References
Carroll, J.B. (1993). Human cognitive abilities: A survey of factor-analytic studies. New York:
Cambridge University Press.
Carroll, J.B. (2003). The higher-stratum structure of cognitive abilities: Current evidence
supports g and about ten broad factors. In H. Nyborg (Ed.) The scientific study of
general intelligence: Tribute to Arthur R. Jensen (pp.1–20). Oxford: Elsevier Science/
Pergamon Press.
Horn, J.L. (1988). Thinking about human abilities. In J.R. Nesselroade & R.B. Cattell
(Eds.) Handbook of multivariate experimental psychology (2nd edn, pp.645–685). New
York: Plenum.
Horn, J.L. & Cattell, R.B. (1966). Refinement and test of the theory of fluid and
crystallized general intelligences. Journal of Educational Psychology, 57(5), 253–270.
Kaufman, A.S. (2009). IQ Testing 101. New York: Springer Publishing.
McGrew, K.S. (2005). The Cattell-Horn-Carroll theory of cognitive abilities. In D.P.
Flanagan & P.L. Harrison (Eds.) Contemporary intellectual assessment: Theories,
tests, and issues (pp.151–179). New York: Guilford Press.
McGrew, K.S., Werder, J.K. & Woodcock, R.W. (1991). WJ-R technical manual. Allen, TX:
DLM Teaching Resources.
Following our discussions of the Big Five personality factors, we continue with their counterparts:
the Dark Triad (narcissism, psychopathy and Machiavellianism).
The Dark Triad 1: Narcissism
Classically, narcissism has been seen as excessive self-admiration and vanity. The
term originates from an ancient Greek myth about a youth, Narcissus, who was so full of
admiration for his own beauty that he spent much of his time admiring his own
reflection in pools of water, and eventually fell in and drowned.
Those high in trait narcissism are often characterised by an exaggerated sense of
self-importance, a lack of empathy and a need for admiration from others. Politicians
and media personalities often score highly on trait narcissism, which tends to be
linked with a manipulative approach to achieving their positions. It is often described
as being associated with fragile egos, resulting in hostile and aggressive reactions to
criticism or perceived challenge. At its extreme, it may be classified as narcissistic
personality disorder, in which selfishness and a sense of entitlement dominate the
person's entire social activity, in that they are concerned only with power and success,
and not with more balanced forms of social interaction.
Narcissism isn’t the same as egocentricity or even a strong sense of self-condence, mainly
in the way that those high in trait narcissism see themselves as more important than others
and therefore as being entirely justied in ignoring or overriding other people’s concerns.
Researchers have identied a number of characteristics of those with high trait narcissism,
including: having inated views of their own abilities and characteristics; believing that they
are better than others; having an excessively high sense of agency; seeing themselves as
unique and special; being selsh; and being extremely oriented towards their own personal
success. As a trait, however, narcissism appears to be normally distributed, and a moderate
level of narcissism has been claimed to be characteristic of good managers and leaders. At
the extreme other end of the scale, those low in narcissism may come across as insecure
and lacking in self-esteem.
Reference
Paulhus, D.L. & Williams, K.M. (2002). The Dark Triad of personality: Narcissism,
Machiavellianism, and psychopathy. Journal of Research in Personality, 36(6),
556–563.
New series: Then and now
In which we invite our older readers to discuss how their experience of psychology and psychometrics
has changed over the years.
Then and now: Statutory assessment and
educational psychology
Douglas Thomson
‘The milkman’s horse had wandered in the fog.’
ANYONE recognising this line of text (it comes from an early version of the Neale
Analysis of Reading Ability) may remember a time when the statutory assessment
of children and young people with special educational needs in England was very
different from how it is now.
In the 1960s the assessment of pupils requiring special schooling must have been quick
and simple. The Summerfield Report (1968) calculated that an educational psychologist
could ‘examine’ and report on two pupils every working day, an output inconceivable 50
years on, but the arithmetic was clear:
Ten examinations per week would fully occupy one psychologist. This estimate includes not only the time
required for psychological assessments, which may have to be spread over more than one occasion, but
also the time necessarily involved in consultations with school staff, discussions with parents, travelling
and the preparation of reports. During a school year, 400 such examinations could be carried out…
This was not a recommendation; it was not expected nor, I hope, was it ever aspired to.
Statutory assessment rested with the school medical officers, and this was an attempt to
estimate how many educational psychologists would be required if we were to be involved
in every special school placement.
Things began to change in the 1970s as Educational Psychology Services expanded. The
service that I joined had only just started to grow, each of us was responsible for 70 to
80 schools, and the options for supporting pupils in school were very limited. We could offer
advice to their teachers and parents, we could ask the peripatetic remedial teacher to work
with them or, exceptionally, we could seek a special school placement.
In the mid-1970s, following the publication of Circular 2/75, the responsibility
for decision-making moved from the school doctor to the educational psychologist,
and a new Special Education assessment process was introduced. Forms SE1 to SE3 were
brief reports, sometimes handwritten. The SE1 was completed by the school, the SE2 by the
school medical officer and the SE3 by the educational psychologist. The SE4 was a summary
and action sheet, completed by an educational psychologist or adviser in special education,
requesting a special school placement. Although several agencies were involved, this was
not a multi-professional assessment, as we understand the term today. The assessment, in
my experience, was driven by the educational psychologist who decided who would be
formally assessed, when assessments would begin and how they would conclude.
The Warnock Report of 1978 introduced some fundamental and important changes in
thinking and practice. Assessment would no longer assign learners to one of eleven statutory
Assessment & Development Matters Vol. 11 No. 3 Autumn 2019 39
categories of handicap. The task instead was to identify learners’ special educational needs,
and what they required, through changes in curriculum and organisation, to make better
progress in school. Assessment became less a matter of ‘test and tell’ and more an on-going
process of ‘plan, do and review’. Aspirations and expectations rose, budgets increased, and
the opportunity to meet learners’ needs in mainstream settings encouraged teachers and
psychologists to be more creative in their interventions.
The tools themselves have changed over time. The 1980s saw increasing use of
classroom observation, criterion-referenced measures and skill-based checklists. Existing
tests have been updated: children today would have little knowledge of milkmen with
horses or, perhaps, of fog. Psychometric measures, which had fallen out of use, have made a
recovery. The WISC is now the WISC-V: two subscales have become five primary indices
and the test, previously supplied in a brown cardboard box, can now be administered and
scored using two laptops, and reports generated online.
Statutory assessments have become more open and transparent. Children are now
encouraged to contribute to their own assessments and parents have the right to do so.
Reports are now sent routinely to parents, and their rights of appeal are clear.
Assessments are genuinely multi-professional and a range of professionals, unimagined
in the 1970s, can be involved. Educational psychologists are no longer the only specialist
contributors from education. In the 1970s the only specialist teachers were teachers of the
deaf or visually impaired; but psychologists now work with teachers of learners with literacy
problems, language disorders, autism, behavioural difficulties, and more besides. Assessment
is now a multi-agency process where speech and language therapists, physiotherapists and
occupational therapists, nurses and social workers are all invited and encouraged to contribute.
With so many contributors to an assessment, someone has to pull the threads together
and this is precisely what we educational psychologists can do by virtue of our training
and experience. We can make sense of complex and perhaps competing information; we
can work out what is happening in a young person’s life, and we can identify what can be
changed to ensure better outcomes.
As the new era in assessments began, many Educational Psychology Services were
able to do just this. Rather than simply providing some unique but narrow information
from tests and observations, educational psychologists would receive copies of all the
advice provided by the other contributors and use this to create a rounded picture of the
child or young person in context. In some authorities, however, this was controversial
– the educational psychologist was just one contributor among many – and, in other
authorities, this option fell victim to strict statutory timescales.
Apart from the increased openness, which is very welcome, the biggest change has been
in the management of the assessment process. This too is very welcome. Where educational
psychologists once acted as judge, jury and executioner, the process and the decision-making
have moved, perhaps to panels, but ultimately to officers within the authority who hold the
responsibility for the decisions, for any appeals which might follow, and also the budgets to
put the recommendations into place. This leaves the educational psychologist free to act
independently in the interests of the child or young person, which is just as it should be.
The author
Douglas Thomson qualified as an educational psychologist in 1977. He was formerly
Principal Educational Psychologist with Cumbria County Council and Principal
Psychologist for the City of Edinburgh Council.
Contact: douglas.thomson@orange.fr
Contents
1 Editorial
Nicky Hayes
2 Leadership: What competencies does it take to remain engaged as a leader in
a VUCA world?
James Bywater & James Lewis
10 Working with diversity: Defining and assessing intercultural competence
Ali Shalfrooshan, Philippa Riley & Mary Mescal
14 Good practice for the specialist assessor (dyslexia)
Katrina Cochrane
18 In pursuit of organisational wisdom
Trevor E. Hudson
24 Journey to the forensic register & beyond… reflections and insights
Julie-Anne Aspin
27 Good practice
31 Towards a consensus model of IQ: John L. Horn and John B. Carroll
Hugh McCredie
38 Then and now: Statutory assessment and educational psychology
Douglas Thomson
St Andrews House, 48 Princess Road East, Leicester LE1 7DR, UK
t: 0116 254 9568 f: 0116 247 0787 e: info@bps.org.uk
w: www.bps.org.uk
© The British Psychological Society 2019
Incorporated by Royal Charter Registered Charity No 229642