Received: 18 November 2021 | Accepted: 4 April 2022
DOI: 10.1111/bjet.13222
Br J Educ Technol. 2022;53:1937–1955. wileyonlinelibrary.com/journal/bjet
ORIGINAL ARTICLE
For to all those who have, will more be given?
Evidence from the adoption of the SELFIE tool
for the digital capacity of schools in Spain
Jonatan Castaño Muñoz1 | Artur Pokropek2 |
Lilian Weikert García1
© 2022 The Authors. B ritish Journal of Educational Technology published by John Wiley & Sons Ltd on behalf of British
Educational Research Association.
1Human Capital and Employment Unit,
European Commission, Joint Research
Centre, Seville, Spain
2Educational Research Institute (IBE),
Warsaw, Poland
Correspondence
Jonatan Castaño Muñoz, Human
Capital and Employment Unit, European
Commission, Joint Research Centre,
Seville, Spain.
Email: jcastanno@us.es
Funding information
European Commission
[Corrections made on 8th June 2022, after first online publication: The third author's affiliation details have been corrected in this version.]
Abstract
This paper explores participation trends in interventions that promote self-evaluation exercises on the effective use of digital technologies in schools. We use a unique dataset consisting of 83,185 respondents from 924 Spanish schools that used SELFIE, a tool based on self-reflection questionnaires that capture different dimensions of a school's digital capacity. We benefit from a natural experiment situation caused by the parallel use of SELFIE by two groups of schools. The first group was externally selected as part of a representative sample of Spanish schools. Conversely, the second group voluntarily decided to use SELFIE as a diagnostic tool for a subsequent self-evaluation exercise. Moreover, a subset of schools were located in regions where authorities embedded SELFIE in broader digitalisation programmes. By comparing these groups, it is shown that schools that decide to participate in SELFIE voluntarily are those with a lower initial digitalisation level. It is also found that the promotion of the use of SELFIE as part of public interventions can increase participation but mainly attracts digitally advanced schools. In conclusion, policy interventions aiming to develop the digital capacity of schools need to plan how to reach those schools that need it more in order to be more equitable.
This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution
and reproduction in any medium, provided the original work is properly cited.
KEYWORDS
digital divide, digital education, Matthew effect, self-evaluation, self-reflection tools

Practitioner notes
What is already known about this topic
• Research has shown the existence of a Matthew effect in the usage of digital technologies in education.
• The promotion of schools' self-evaluation exercises on digital education is a common policy intervention that is growing in importance.
• There is a surprising lack of attention to the inequitable effects that programmes aiming to incorporate technologies in educational institutions may generate.
What this paper adds
• This paper investigates the self-selection trends and (in)equity effects of SELFIE, an EU programme designed to prompt schools' self-evaluations of digital capacity.
• When schools decide autonomously, schools with low digital capacity levels tend to participate in SELFIE more.
• Incorporation of SELFIE into broader public programmes enlarges participation in SELFIE.
• Incorporation of SELFIE into broader public programmes over-attracts digitally advanced schools.
Implications for practice and/or policy
• Public policies promoting self-evaluation exercises on school digital capacity might be a good way of upscaling these exercises.
• However, these policies should be carefully designed to reduce inequalities and to reach those schools that need digitalisation more.

INTRODUCTION
The presence of the Matthew effect, a cumulative advantage benefiting individuals and institutions with more resources, has been demonstrated in different areas (Perc, 2014; Rigney, 2010), including education (Kerckhoff & Glennie, 1999; Perc, 2014; Walberg & Tsai, 1983) and digital technology usage (Mingo & Bracciale, 2016).
Research also demonstrates the existence of a Matthew effect in the usage of digital technologies in education. Schools serving students from privileged backgrounds tend to promote more sophisticated and creative uses of technology (Hohlfeld et al., 2008; Reich, 2020). In addition, educational opportunities using technologies, even if these are open, tend to be used more and better by individuals with good self-regulation and digital skills (Castaño-Muñoz et al., 2017; Littlejohn et al., 2016; Yang et al., 2018).
When analysing the origins of the Matthew effect and its possible solutions, research has focussed on the role of public interventions. The empirical literature suggests that, in some settings, these interventions can increase the gap between low and highly resourced individuals or institutions if corrective measures are not taken. Examples cover a range of topics such as social policies (Pisoni, 2018), active labour market policies (Bonoli & Liechti, 2018) or educational interventions (Pavolini & van Lancker, 2018).
However, there is a surprising lack of attention to the inequitable effects that programmes aiming to incorporate technologies in educational institutions may generate. This is especially important in a post-COVID-19 pandemic context, where the use of technologies has become more important (Beardsley et al., 2021). This paper contributes to filling this gap by focussing on a specific type of intervention on this topic: the promotion of school self-evaluation exercises (Chapman & Sammons, 2013; Kampylis et al., 2016).
To shed light on this topic, we focus on the specific case of SELFIE, a European self-reflection tool based on validated questionnaires and designed to provide information to schools to facilitate a self-evaluation process of their effective use of digital technologies (see: https://education.ec.europa.eu/selfie). To analyse whether this tool is being used more by schools that already have a large digital capacity, we take advantage of a unique setting. During the same time period (April 2019–March 2020), two different groups of Spanish schools participated in SELFIE. The first group filled in the SELFIE questionnaires as part of a voluntary self-evaluation exercise decided by themselves. Conversely, schools from the second group were randomly selected to participate in a study aiming at measuring the average digital capacity of Spanish schools. By comparing the digital capacity of the two groups, we obtained valuable insights into the relationship between the existing digital capacity of schools and the decision to use self-evaluation tools on this topic.
LITERATURE REVIEW
School self- evaluation exercises and schools digitalisation
Demand for schools' accountability increased during the last century. Governments have
promoted external and standardised assessments of the schools (Grek et al., 2013) and
decentralised self- evaluation approaches performed by schools themselves (Croxford
et al., 2009; European Commission, 2020). Self-evaluations are based on an iterative cycle of diagnosis and informed planning of strategies and actions (Chapman & Sammons, 2013) that can lead to benefits for those schools that perform them compared to those that do not. SELFIE, the programme covered in this paper, is a tool that has been designed to promote the complete iterative cycle of schools' self-evaluations. Thus, participation (or non-participation) in SELFIE can generate inequalities.
In general, self- evaluation exercises can lead to sustainable school improvements (Hall
& Noyes, 2009). The literature signals that these approaches are effective for organisational
development (Fullan & Watson, 2000; Høyrup, 2004) and identification of specific school
improvement needs (Nevo, 2001). Moreover, empirical research shows a link between par-
ticipation in schools' self- evaluation and enhanced teaching and learning quality (Hofman
et al., 2009) through the development of improvement plans (Caputo & Rastelli, 2014) and
(internal) data driven decision- taking (Cosner, 2011; Marsh et al., 2010; Williamson, 2016).
In order for these benefits to become a reality, the literature identifies the need for certain
conditions (Vázquez & Gairín, 2014) in all the self- evaluation phases: conduct, results and
feedback (Vanhoof et al., 2014). Some elements are the use of adequate data collection instruments for diagnosis, the implementation of good communication and multistakeholder collaboration processes, the definition of shared objectives or the provision of formal support to transformative actions, including external support or expert advice (Antoniou et al., 2016; Devos & Verhoeven, 2003; Vanhoof & van Petegem, 2011).
Research has noted that not all schools participate to the same extent. Schools with an
appropriate school culture, greater needs for improvement in the topic covered, greater in-
novation capacity, and with an established professional learning community, participate in
self- evaluation exercises more (Schildkamp & Visscher, 2009; Vanhoof et al., 2009).
As schools are increasingly required to improve their use of digital technologies, public
authorities are promoting self- evaluation exercises in this direction. The usual approach is to
develop and/or promote tools that aim to support and facilitate school self- evaluation exer-
cises (Kampylis et al., 2015). In general, these tools take the form of self- reflection question-
naires covering different dimensions that facilitate the effective use of digital technologies
in educational settings and are meant to be answered by teachers and/or school leaders
(students are rarely included). Without pretensions to being exhaustive, some examples of
this type of tools are: Opeka and Ropeka in Finland, Digital Mirror in Estonia, eLEMER in
Hungary, Digital Schools of Distinction and eLearning Roadmap in Ireland, NAACE self-
review framework and 360° safe in the UK, the self-evaluation tool created in the e-school programme in Croatia or the European Commission's SELFIE tool [see Kampylis et al. (2016) for a description and comparison of some of these tools].
Benefits of digital technology in education
Interventions aiming at digitalising schools are carried out under the assumption that digital
technologies can help to improve school operations and students' learning outcomes. Research
is ambiguous in this respect, but recent studies tend to show that digital technology can have
positive effects on learning outcomes when it promotes good instructional design principles.
In this sense, the literature signals the potential of technology for personalised interventions
such as targeted behavioural interventions and differentiated computer- assisted learning
(Escueta et al., 2017; Surma & Kirschner, 2020). It also signals the positive role that en-
hanced interaction (Bernard et al., 2009) can play. Innovative types of assessment (Kapsalis
et al., 2019) and well- designed online feedback (Fyfe, 2016; Hattie & Timperley, 2007) can
contribute to the implementation of good practices such as assessment of previous knowl-
edge, connection of previous knowledge with new concepts, and the provision of effective
guidance (Fyfe, 2016). The widened access to complementary online resources that the use of technology entails can have a positive effect on learning too (Heppen et al., 2012), especially when the resources contain additional embedded guidance such as annotated examples (McLaren et al., 2016) and supportive videos (de Koning et al., 2018). Moreover, online resources can facilitate the use of bimodal content and knowledge representations (Moreno & Mayer, 2007) and the implementation of new effective pedagogical models, including the flipped classroom (Cheng et al., 2018; van Alten et al., 2019).
In addition to traditional learning outcomes, another important potential benefit of using
digital technologies in education is the development of the digital competence of students
(Carretero et al., 2017). Research has also shown the need for early and guided interven-
tions for effective development of students' digital competence (Fraillon et al., 2019; van Dijk
& van Deursen, 2014).
Conditions for an effective use of digital technologies in schools
The capacity to reap the benefits of incorporating technology in education depends on a se-
ries of prerequisites and characteristics in which schools differ. The literature has developed
several interrelated theoretical concepts that aim to identify and integrate them. Three of the
most prominent concepts are digital maturity (Balaban et al., 2018), e- capacity (Vanderlinde
& van Braak, 2010) and digital capacity (Costa et al., 2021). Building on previous work and
definitions, this paper uses the term digital capacity to refer to the extent to which culture,
policies, infrastructure, and digital competence of students and staff support the effective
integration of technology in teaching and learning practices.
Despite their differences, all these concepts share a holistic and multidimensional approach. Consequently, they can be broken down into different dimensions, derived from scientific research, that facilitate the effective incorporation of digital technologies in education by schools (Voogt et al., 2011) and that may vary across schools.
First, they cover the technical and infrastructural dimension, which is strongly related to technology usage in the school (Tondeur et al., 2012). Second, they cover the techno-pedagogical knowledge that teachers need in order to use technologies for preparing lessons (Meneses et al., 2012), to teach in effective ways and to innovate in feedback and assessment processes (Lachner et al., 2019). It has been found that pedagogical expertise is a prerequisite for judging the specific potential of digital technologies in different contexts and for different activities (Lachner et al., 2019). In this context, a third dimension usually covered by the theoretical constructs is access to professional development. This aspect plays an essential role in equipping teachers and institutions with the right skills to make the right use of digital technologies in their job (Fernández-Batanero et al., 2020). Moving to the student level, a fourth dimension identified by some of the theoretical concepts above points out the importance of the development of the digital competence of students (Waycott et al., 2010) for a better use of technologies in education and as a learning outcome. Research shows that some schools emphasise the acquisition of this competence more than others do (Fraillon et al., 2019). Fifth, all the concepts encompass organisational and leadership aspects, which are identified as another set of drivers for efficient uses of digital technologies in schools (Tondeur et al., 2012); in this respect, research signals the critical role of school culture and the characteristics of school leaders (Chang, 2012; Kozma, 2008; Ottestad, 2013; Tondeur et al., 2012). Finally, a last aspect signalled as key to getting the most out of digital technologies is their incorporation as tools for internal and external community and network building (Kong, 2019).
Schools self- evaluation exercises and the digital capacity divide
Not all schools are equal regarding the characteristics and preconditions that facilitate the
effective use of technology covered in the section above. Consequently, some schools need
more help than others as they start from a worse position. In this respect, it can be said that
there is a “digital capacity divide” between schools.
Students from different schools are exposed to different uses of technology in and out-
side the school (Attewell, 2001; Hohlfeld et al., 2008). Research shows that more effective
and creative uses of technology are associated with schools where students come from
privileged backgrounds (Reich, 2020). Moreover, students in these schools receive greater
mentorship and personalised guidance from teachers and parents (Reich, 2020).
The promotion of self-evaluation exercises and tools for a more effective use of technologies aims to reach all school types; however, it is unclear whether this is the case. On the one hand, schools with lower digital capacity may consider that the final transformations derived from self-evaluation exercises fit their needs for digital improvement better than others. If this were the case, the literature predicts that they would use these tools more (Schildkamp & Visscher, 2009), producing an equalising component and contributing to closing the existing "digital capacity divide" between schools. On the other hand, disadvantaged schools may participate in school self-evaluations of their digital capacity less frequently than advantaged ones. Some of the mechanisms identified by the literature as drivers of this phenomenon are: a lack of motivation and information (Walberg & Tsai, 1983), negative attitudes towards evaluations and self-evaluations (Vanhoof et al., 2009), the low perceived usefulness of digitalisation compared to other priorities (Schildkamp & Visscher, 2009) and the uneven distribution of school leaders across schools in terms of experience, education (Loeb et al., 2010) and interest in technology (Chang, 2012).
Moreover, schools with high digital capacity may be more innovative and open to change and, consequently, find self-evaluation exercises more adapted to their already digitalised practices and use them more (Schildkamp & Visscher, 2009; Vanhoof et al., 2009). If this
were the case, the use of self- reflection tools would be contributing to the Matthew effect
in educational systems (Kerckhoff & Glennie, 1999; Perc, 2014) by increasing the digital
advantage of already digitally advanced schools.
If the development of interventions promoting self-evaluation exercises on schools' digital capacity aims to guarantee that no school is left behind and to reach those with more room for improvement, it becomes essential to know more about the mechanisms that lead to schools' participation and about how these interventions are influencing the digital capacity divide among schools.
PURPOSES OF THE STUDY
Following the research line analysing the determinants of self-evaluation exercises and the role of public interventions in enhancing or diminishing inequality, this paper focuses on digital education and presents a case study from Spain on the promotion of SELFIE, a diagnostic tool that aims to inform and promote self-evaluation exercises concerning a more effective use of digital technology in education. The main aims of this paper are as follows:
• To determine whether the SELFIE tool attracts schools with low digital capacity and more
room for improvement.
• To determine whether the integration of the tool in broader school digitalisation pro-
grammes can affect the self- selection process in SELFIE use.
METHODS
Instrument
The study presented in this paper is based on data collected using SELFIE, an online tool
launched by the European Commission in October 2018 that aims to help schools diagnose,
reflect and take actions on their use of digital technologies in different areas. To do this, the
tool gathers the anonymous views of the whole school community— school leaders, teachers
and students— via validated questionnaires. Upon completing the SELFIE exercise, the tool
automatically generates an interactive online report only accessible by the school. This report
provides aggregated data with insights on the strengths and weaknesses of their use of digital
technologies for teaching and learning. The tool is based on the theoretical Digitally-Competent Educational Organisations framework, also known as DigCompOrg (Kampylis et al., 2015).
SELFIE is available for primary (ISCED 1), lower- secondary (ISCED 2), upper- secondary
general (ISCED 3), upper- secondary vocational (ISCED 3— VET), and post- secondary non-
tertiary education levels (ISCED 4— PSNTE).
SELFIE questionnaires comprise a set of core items, which are mandatory for all schools, and some predefined optional questions, which the schools can choose from. In addition,
schools can add up to ten specific questions to suit their own needs and context. Furthermore,
the questionnaires include some additional items about the use of digital technologies inside
and outside the school and a few demographic questions.
The questionnaires for school leaders and teachers are analogous. The core items are structured in eight areas that are a development of the DigCompOrg areas and are theoretically underpinned by the academic literature on the conditions for an effective use of digital technologies in schools covered in point 2.3 (see Endnote 1):
• A: Leadership
• B: Collaboration and networking
• C: Infrastructure and equipment
• D: Continuing professional development
• E: Pedagogy— support and resources
• F: Pedagogy— implementation in the classroom
• G: Assessment practices
• H: Student digital competence
Students over 9 years of age can participate with an easier and shorter version of the questionnaire. The full list of items used in SELFIE is presented in the online Appendix (part A).
Psychometric analyses have confirmed the robustness of the core items for the group of self-selected schools (see Costa et al., 2021, and more specific details in online Appendix parts B, C and D). These core items are intended to measure different dimensions of digital capacity and are the focus of this paper. The core items are statements with five answer options, from 1 (Strongly disagree: "In my experience, this is not true at all") to 5 (Strongly agree: "In my experience, this is very true"). For all items, respondents can also opt out by selecting the "Not applicable" or "Prefer not to say" answer options.
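As a concrete illustration of the response format just described, a minimal R sketch is given below. It is not part of the original study: the data frame 'raw' and its item column names are hypothetical, and only the 1–5 coding and the two opt-out options follow the questionnaire description above.

# Sketch: preparing SELFIE core-item responses for analysis.
# Opt-out answers are treated as missing rather than as an extra scale point.
recode_item <- function(x) {
  x[x %in% c("Not applicable", "Prefer not to say")] <- NA
  as.numeric(x)  # remaining answers are the 1 (Strongly disagree) to 5 (Strongly agree) codes
}
item_cols <- grep("^item_", names(raw), value = TRUE)  # hypothetical item columns
raw[item_cols] <- lapply(raw[item_cols], recode_item)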
Data
The analysis performed in this paper used a unique dataset from the application of SELFIE collected in Spain between April 2019 and March 2020. All schools participating in this study completed the SELFIE questionnaires simultaneously during this period. However, they did so in two different settings, which allowed the generation of two groups of data that could be compared with regard to our research questions. The groups are as follows:
• Self- selection group: schools decided to participate without intervention by researchers as
part of a voluntary self- evaluation exercise.
• Representative random sample: schools and respondents within the schools were randomly selected for participation by researchers, and participation was made mandatory by the Spanish Ministry of Education to guarantee the quality of the sample. The selection was part of a study aimed at measuring the average digital capacity of Spanish schools (Castaño Muñoz et al., 2021).
Moreover, to allow a finer analysis, to control for possible biases and to respond to the second purpose of the study, the regions of Spain were split into two groups according to the regional governmental level of support for SELFIE implementation.
• Regular group: comprises the Spanish regions (all except two) where the tool was available but regional governments employed no specific policy measures to encourage schools to participate.
• Prompting group: includes two Spanish regions where schools were actively encouraged
by regional government (but not forced) to participate in SELFIE. These regions followed
a parallel approach and the same methods to encourage participation. They integrated
the tool in broader voluntary school digitalisation programmes covering schools receiv-
ing public funds (public or charter) and without other selection procedures: all interested
schools that applied were accepted. These programmes were disseminated through the
regional education administration websites and their government- schools communication
channels. The programmes aimed to help schools to develop and improve an initial dig-
italisation action plan that had to be proposed by the schools. The use of SELFIE was
strongly recommended in the first steps of the process to diagnose the school's digital capacity and its weaknesses and strengths, but was not mandatory. Successful participation
in the programme was associated with various incentives for the school. These covered
the opportunity to participate in more advanced programmes in ICT and innovation, pref-
erence in the access to specific teaching and learning resources, preference for partic-
ipation in continuous professional development activities, institutional advice, facilitation
of the development and implementation of the digital action plan, and (soft) official school
recognition (an official banner to be shown on the school website and social media). In some of the prompting regions, participation in the programme could also involve individual incentives such as career enhancement credits for teachers and school leaders. These programmes have been shown to be effective for upscaling participation in SELFIE: from the launch of the tool to date, 58% of the schools participating in SELFIE come from prompting regions, while these regions account for only 28% of the schools in Spain.
In this paper, we restricted our analysis to data from primary (ISCED 1), lower-secondary (ISCED 2), and upper-secondary general (ISCED 3) schools, since these are the levels for which data from the representative sample are available. As we can confirm that the results are not driven by a specific ISCED level (see online Appendix F), despite the different uses of digital technologies in different school phases, we used the pooled data of these three ISCED levels as the main source for our analysis. For students, only ISCED 2 and 3 were analysed, because the student questionnaire for ISCED 1 is much shorter and has no questions in some of the areas, such as assessment practices. Moreover, we excluded some schools for which the information was not of sufficient quality for our purpose (see note 1 in Table 1).
The number of respondents and schools that participated in this exercise is presented in
Table 1.
TABLE 1  Group sizes with number of respondents and schools. Students only include lower and upper secondary (ISCED 2 and 3)

                  Sample                        Self-selected
                  Regular         Prompting     Regular          Prompting
School leader     1288 (362)      433 (127)     846 (267)        704 (233)
Teacher           5940 (364)      1990 (127)    4889 (333)       4060 (271)
Student           8457 (222)      2727 (79)     12,495 (138)     13,134 (119)

Cells show the number of respondents, with the number of schools in parentheses.
Note: To guarantee a minimum participation rate and comparability across groups within the school, only schools with at least ten respondents coming from two different respondent groups have been included. This is why the number of schools does not match exactly across school leaders and teachers. If we focus on students, the exclusion of primary (ISCED 1) also plays a role.

Table 2 presents the descriptive statistics of the schools used in the analysis. The regular numbers in the cells of the table present raw statistics, while the numbers in parentheses are the results after weighting. Sample weights were used for the randomly selected group to adjust the sample to the characteristics of the population of Spanish schools, teachers and students. Ad hoc weights were applied to the self-selected group that fix the proportion of ISCED schools in each group to be the same as in the population. As Table 2 shows, there is no balanced distribution across levels before weighting in the self-selection group. Therefore, this procedure was necessary to ensure that school differences were focussed on in terms of digital capacity, avoiding school level playing a confounding role.
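A minimal sketch of the ad hoc weighting just described is given below. It is not the authors' script: the data frame 'selfie_self', its 'isced' column and the population shares are hypothetical and illustrative only.

# Re-weight self-selected schools so that the ISCED-level composition
# matches the population of Spanish schools.
pop_share <- c(ISCED1 = 0.48, ISCED2 = 0.26, ISCED3 = 0.26)   # illustrative population shares
obs_share <- prop.table(table(selfie_self$isced))             # observed shares in the self-selected group
selfie_self$wt <- unname(pop_share[as.character(selfie_self$isced)] /
                         obs_share[as.character(selfie_self$isced)])
# Under-represented ISCED levels receive weights above 1, over-represented levels below 1.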
Descriptive statistics also show that our division into regular and prompting regions is valid. In the prompting regions, schools reported receiving information about the programme from the educational authorities more than three times as often as in regular regions, that is to say, in around half of the cases.
Empirical methodology
We used multiple group confirmatory factor analysis (MG-CFA) to estimate the levels of digital capacity reported by the different respondent groups in each of the eight SELFIE areas (factors), with unbiased standard errors (Brown, 2015). Previous psychometric analyses indicate that the digital capacities measured by SELFIE are highly reliable and comparable across the investigated groups. Detailed psychometric analyses can be found in the online Appendix, including validation of the theoretical dimensionality structure of SELFIE (part B), evaluation of the quality of the indicators used in the tool for both groups (part C) and assessment of measurement invariance (part D).
TABLE 2  Descriptive characteristics of respondents (in raw percentage, with weighted percentage in brackets)

                                     Sample                          Self-selected
                                     Regular        Prompting        Regular        Prompting
School leaders
  Primary (ISCED 1)                  34.8 (48.4)    36.0 (48.8)      48.1 (48.4)    47.2 (48.8)
  Lower-secondary (ISCED 2)          32.1 (25.9)    32.1 (25.1)      43.0 (25.9)    38.3 (25.1)
  Upper-secondary (ISCED 3)          33.2 (25.6)    31.9 (26.0)      8.9 (25.6)     14.5 (26.0)
  Public                             69.6 (62.6)    73.2 (70.2)      65.9 (66.1)    81.0 (80.3)
  SELFIE awareness from authorities  –              –                17.0 (19.0)    50.4 (50.6)
Teachers
  Primary (ISCED 1)                  34.1 (45.4)    32.3 (43.7)      52.9 (45.4)    47.3 (43.7)
  Lower-secondary (ISCED 2)          33.2 (27.3)    34.6 (27.5)      39.6 (27.2)    40.3 (27.5)
  Upper-secondary (ISCED 3)          32.7 (27.4)    33.1 (28.8)      7.5 (27.4)     12.5 (28.8)
  Public                             67.0 (70.8)    76.5 (77.9)      73.1 (72.7)    82.7 (81.7)
  SELFIE awareness from authorities  –              –                18.7 (19.9)    50.7 (53.3)
Students
  Lower-secondary (ISCED 2)          59.5 (59.1)    52.8 (58.0)      85.7 (59.1)    82.4 (58.0)
  Upper-secondary (ISCED 3)          40.4 (40.9)    47.2 (42.0)      14.3 (40.9)    17.5 (58.0)
  Public                             60.6 (65.4)    74.3 (73.1)      53.5 (49.0)    69.3 (78.0)
  SELFIE awareness from authorities  –              –                14.5 (16.8)    55.8 (56.4)
In order to assess the self-selection bias that may exist in the use of SELFIE and to explore whether it increases or decreases the digital capacity divide between schools, we compare the estimated levels in the SELFIE-area indicators (factors) of self-selected versus randomly selected schools. Moreover, to analyse the effect of public interventions on this difference, we separate this comparison between regular and prompting regions.
The values from the randomly selected group provide a good reference for what the use of SELFIE would have been in the absence of the self-selection bias related to digital capacity. Therefore, the statistical procedure implemented was simple: the means of the eight SELFIE areas for the self-selected groups were compared with the means of the randomly selected groups, establishing a 95% confidence interval (95CI) and using robust standard errors that account for the nested structure of the sample. When values from the self-selected group are significantly higher than values from the randomly selected one, it can be said that the use of SELFIE is increasing the digital capacity gap (causing a Matthew effect). On the contrary, there is an equalising effect when values of the self-selected group are lower than those of the reference group.
In each comparison, the values of each area (factor) for the reference group (the randomly selected group) were set to have a mean of zero and a standard deviation of one, while the mean and standard deviation for the comparison group (the self-selected group) were freely estimated using weights to guarantee the same proportion of schools in each ISCED level as in the sample data. The results are therefore presented on a standardised metric and can be interpreted as standardised effect sizes directly referring to Cohen's d values.
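Expressed as a formula, this is simply a restatement of the scaling just described (not an additional result): with the randomly selected group fixed at mean zero and standard deviation one, the reported difference for each SELFIE area reduces to the freely estimated latent mean of the self-selected group, already in reference-group standard deviation units:

\[ d_{\text{area}} = \frac{\mu_{\text{self-selected}} - \mu_{\text{random}}}{\sigma_{\text{random}}} = \mu_{\text{self-selected}}, \qquad \text{since } \mu_{\text{random}} = 0 \text{ and } \sigma_{\text{random}} = 1. \]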
In order to discard the possibility that our results are driven by the results in a specific
school level, we checked differences in the results by ISCED level (see online Appendix part
F). Since the results are almost the same, all analyses presented in the main body of the
paper were performed on pooled data from all school phases to increase the power of our
analysis. However, regular and prompting regions were analysed separately.
The R-based lavaan software (Rosseel, 2012) was used in all of the analyses. We used full information maximum likelihood estimation to account for missing data and robust standard errors to account for the nested structure of the data (respondents nested in schools). Responses to the SELFIE tool were treated as continuous indicators.
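For transparency, a minimal lavaan sketch consistent with the description above is given below. It is an illustration, not the authors' actual script: the data frame 'selfie', the item names 'c1' to 'c5' and the variables 'group_type', 'school_id' and 'wt' are hypothetical, and only a single SELFIE area is shown rather than all eight.

library(lavaan)

model <- '
  # One latent digital-capacity factor for a single SELFIE area
  areaC =~ NA*c1 + c2 + c3 + c4 + c5   # free the first loading (the latent variance is fixed instead)
  areaC ~~ c(1, NA)*areaC              # latent variance: 1 in the reference group, free in the self-selected group
  areaC ~  c(0, NA)*1                  # latent mean: 0 in the reference group, free in the self-selected group
'

fit <- cfa(model,
           data             = selfie,
           group            = "group_type",                 # randomly selected (reference) vs. self-selected
           group.equal      = c("loadings", "intercepts"),  # scalar invariance so latent means are comparable
           missing          = "fiml",                       # full information maximum likelihood
           estimator        = "MLR",                        # robust standard errors
           cluster          = "school_id",                  # respondents nested in schools
           sampling.weights = "wt")                         # weights fixing the ISCED proportions

parameterEstimates(fit)  # the free latent mean of the self-selected group is the standardised difference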
RESULTS
The latent means of all dimensions of digital capacity, as measured by SELFIE, were compared for school leaders, teachers and students. The bars in Figures 1–3 represent the difference in means between the self-selected group and the reference group (representative sample), with 95% confidence intervals (CI). The results for regular and prompting regions, for all school phases (ISCED levels) together, are shown separately. More extensive analysis (see online Appendix part F) showed that the patterns found in the analysis presented in this paper are virtually the same for every ISCED level. Therefore, it was decided to present results on an aggregate basis, which gives us much more power due to larger group sizes.
Focussing on school leaders (Figure 1), a clear selection pattern emerges. In regular regions, school leaders from self-selected schools report significantly lower values of digital capacity in 5 out of 8 dimensions, and lower (but not statistically significant at the 95% level) values in the other 3 (Leadership, Infrastructure, and Pedagogy: support and resources). On the other hand, the direction was exactly the opposite in prompting regions. Self-selected schools show significantly higher digital capacity in 7 out of 8 dimensions and are also higher in the Assessment practices dimension, although that difference is not statistically significant.
FIGURE 1  School leader views on differences in digital capacity between schools in the self-selected group and schools in the randomly selected group (reference). Values by factor and type of region. (i) Reported values are factor means with 95% CI. (ii) CIs higher than 0.4 were truncated.
Analogous comparisons were made for teachers and are presented in Figure 2. Although the effect sizes are smaller in some dimensions, the general trend in all of them is very similar to that detected for school leaders. Teachers perceive that, in regular regions, the digital capacity of schools in the self-selected group is significantly lower than the population average in three dimensions: Pedagogy (implementation in the classroom), Student digital competence, and Pedagogy (support and resources). It is reassuring to see that this negative difference was also perceived by school leaders in the first two dimensions, and it was significant at the 90% level in the third. On the other hand, as with school leaders, according to teachers the direction of the self-selection effect in prompting regions is the opposite of that in regular regions. In prompting regions, self-selected schools have, on average, higher digital capacity in all dimensions.
FIGURE 2  Teacher views on differences in digital capacity between schools in the self-selected group and schools in the randomly selected group (reference). Values by factor and type of region. (i) Reported values are factor means with 95% CI. (ii) CIs higher than 0.4 were truncated.
Finally, Figure 3 reports the differences in the digital capacity of schools as reported by students. Focussing on regular regions, a similar trend to the one detected when analysing school leaders and teachers is observed. Students in schools from the self-selected group give lower values than students in schools selected randomly. However, only a small significant effect is found in two of the four dimensions covered: Student digital competence and Infrastructure and equipment, the latter only at a 90% confidence level. On the other hand, there is no clear trend in prompting regions, and this is the only case where we could not find statistical differences between self-selected schools and schools from the representative sample.
FIGURE 3  Student views on differences in digital capacity between schools in the self-selected group and schools in the randomly selected group (reference). Values by factor and type of region. (i) Reported values are factor means with 95% CI. (ii) CIs higher than 0.4 were truncated.
In assessing the results from the three respondent groups together, it can be concluded that, in general, the schools in regular regions that decide to participate in SELFIE have significantly lower digital capacity values than the average Spanish school (evidence for an equalising effect). However, this selection trend changes in prompting regions, where the regional authorities have integrated SELFIE into broader digitalisation programmes (evidence for a Matthew effect). This overall pattern is robust and also holds when public and private schools are analysed separately, except for students in public schools in prompting regions (see part E of the online Appendix for details).
The results presented so far suggest that the involvement of educational authorities could attract schools with higher digital capacity. However, it cannot be ruled out that prompting regions are somehow special and that this effect is not driven by the fact that the authorities are involved. To check this assumption, we focus on prompting regions and compare the digital capacity of schools in these regions that declared that information about SELFIE was provided by educational authorities with schools in the same regions that obtained information about SELFIE from another source. The results are presented in Figure 4.
FIGURE 4  Views of school leaders, teachers and students on differences in digital capacity between schools informed about SELFIE by educational authorities and schools informed via other sources. Only prompting regions. (i) Reported values are factor means with 95% CI. (ii) CIs higher than 0.4 were truncated.
In prompting regions, the schools participating in SELFIE that were informed by educational authorities have higher digital capacity in all dimensions according to school leaders and teachers. Less clear results are found for student responses, where no significant differences are found. Focussing on the first two respondent groups, the strongest differences (significant at the 95% confidence level) are visible for Leadership and for Collaboration and networking. Moreover, school leaders report significant differences in Assessment practices, and teachers in Continuing professional development. These results confirm that the involvement of educational authorities incorporating SELFIE into broader programmes could have affected the self-selection process towards the more digitally capable schools.
CONCLUSIONS
Participation in self-evaluation exercises is considered effective for school development. Consequently, it may be beneficial for the development of digital capacity in schools. Under this assumption, public bodies are promoting the creation and use of tools that inform, support and prompt schools' self-evaluation exercises on this topic. However, little is known about the participation patterns in this type of intervention. If digitally advanced schools adopt these tools more, there is a risk of increasing the digital gap and leaving behind the schools that need it more (Matthew effect). Conversely, if schools that lag behind in digitalisation adopt them to a greater extent, these tools can have an equalising component.
This paper has presented evidence on how Spanish schools adopt one of these tools: SELFIE. The results support the idea that the use of SELFIE can have an equalising effect, since schools with lower levels of digital capacity tend to use it more in regions where SELFIE is not part of broader policy interventions. This result confirms the predominance of the need for digital improvement over other possible inhibiting factors associated with schools with low digital capacity in Spain.
However, the results also show that, under certain conditions, there is a risk that SELFIE more often reaches schools with high digital capacity. It has been shown that this happened when regional public authorities integrated the tool into broader digitalisation programmes (prompting regions) that have some features that may attract more digitally capable schools. First, SELFIE was integrated as a diagnostic tool into programmes that help in developing digitalisation plans and, consequently, schools that already had such plans in mind may have been more attracted. Second, SELFIE was incorporated into basic programmes in which schools are required to participate before moving on to advanced digitalisation programmes. This feature can attract schools that already have enough digital capacity to be interested in participating in the second step. Third, public authorities offered formal recognition for participation, which can attract schools that already have good digital capacity and want official recognition to show to the external world. Finally, in some cases, teachers and school leaders can obtain individual incentives such as career enhancement credits, and some schools with already advanced digital capacity may have seen these programmes as an easy way to achieve them.
The interpretation of the results presented is based on two main assumptions. Firstly, the self-selection effect of participation in SELFIE is measured assuming that it can lead to complete self-evaluation exercises. While the use of the tool does not automatically entail an improvement if no further actions are taken, it is assumed that it is a first step and a good indicator of interest in change and in the development of a further self-evaluation exercise. Secondly, as pointed out by previous studies, the results are interpreted assuming (as policy interventions do) that self-evaluation exercises are efficient in transforming the school and have a positive impact on its digital capacity. Both are very plausible assumptions. However, if this were not the case, the different patterns of participation in SELFIE would not have any impact on either self-evaluation participation or the generation of a Matthew effect. Future research using longitudinal data could confirm these two assumptions.
The results presented are robust across all educational levels and when separating public and private schools (in the latter case, the evidence suggesting an association between the public incorporation of SELFIE into broader programmes and the high level of digital capacity of schools using SELFIE is especially robust). The results also stand when the effect of public involvement is examined in depth: schools participating in SELFIE that were informed about the tool by regional governments in prompting regions (and therefore most likely to have obtained the information as part of the dissemination campaign of a wider public programme on digitalisation) have higher digital capacity than schools in the same regions that discovered SELFIE by other means.
Despite their robustness, the results also have some limitations. The first limitation is that the evidence comes from case study research in a single country, and the generalisability of the results to other countries and other policy interventions cannot be guaranteed. However, the results can orient future research and policies in other contexts. A second limitation is that measurement error cannot be disregarded completely. Even though we have demonstrated the robustness of the SELFIE tool in psychometric terms and have shown the appropriateness of the items for different types of schools (self-selected and randomly selected), problems such as different motivation or the social desirability of responses are still possible (Faddar et al., 2018). Therefore, it is reassuring that our results are highly consistent across the school leader and teacher questionnaire data. However, students' results are a little more ambiguous: although they mainly support the existence of an equalising component in regular regions, data from students do not show the generation of a Matthew effect in prompting regions where SELFIE was part of a broader digitalisation programme. In line with previous literature, one explanation for this weaker effect among students could be that students are less reflective or cognitively process the items differently from teachers and school leaders (Faddar et al., 2017). However, we cannot discard alternative explanations such as different levels of awareness about the purpose of digital technology use or different levels of interest in some of the areas. An in-depth analysis of the causes of the different values between students and other respondent types could be a useful research avenue for future studies on school self-reflection instruments, as it would inform the quality of their results.
Our main conclusion is that policy interventions might be a good way of upscaling and extending self-reflection exercises on schools' digital capacity. However, they might sometimes do so in ways that are not intended. For instance, they can create inequalities among schools and leave those schools with more room for improvement behind. Policy interventions should be very carefully designed if the aim is to reduce inequalities. They need to plan their strategy for dissemination, communication and participation incentives in an inclusive way, to reach the schools that need it more and to consider the risk of generating a Matthew effect.
ACKNOWLEDGEMENTS
This research has been funded by the European Commission. The authors thank the col-
leagues who provided feedback on early versions of the paper: Panagiotis Kampylis and
Patrícia Costa.
CONFLICT OF INTEREST
The authors declare that there is no conflict of interest and they do not have competing
interests.
ETHICS STATEMENT
Data collection was approved by the Data Protection Office of the Joint Research Centre. It
complies with all legal and ethical requirements. Participants in this research declared their informed consent. During the analysis, data were anonymised to guarantee privacy and anonymity.
DISCLAIMER
The views expressed in this article are purely those of the authors and should not be re-
garded as the official position of the European Commission.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are stored by the European Commission. The
authors were granted access to the data for this research. Restrictions apply to the availabil-
ity of these data. The datasets used follow the data policies and legislation of the European
Commission and SELFIE tool.
ENDNOTE
1 While the initial DigCompOrg framework areas were initially validated through literature review and expert judgement, the final reorganisation into eight areas in SELFIE is underpinned by subsequent psychometric analysis (see Costa et al., 2021). The relationships between the literature review in point 2.3, DigCompOrg and the SELFIE areas are as follows:
● Organisational and leadership aspects dimension in point 2.3 relates to "Leadership and governance practices" in DigCompOrg and to area A in SELFIE.
● Internal and external community and network building dimension in point 2.3 relates to "Collaboration and networking" in DigCompOrg and to area B in SELFIE.
● Technical and infrastructural theoretical dimension in point 2.3 relates to "Infrastructure" in DigCompOrg and to area C in SELFIE.
● Access to professional development theoretical dimension in point 2.3 relates to "Professional development" in DigCompOrg and to area D in SELFIE.
● Techno-pedagogical knowledge (preparing lessons) theoretical dimension in point 2.3 relates to "Teaching and learning practices" in DigCompOrg and to area E in SELFIE.
● Techno-pedagogical knowledge (teaching in effective ways) theoretical dimension in point 2.3 relates to "Teaching and learning practices" in DigCompOrg and to area F in SELFIE.
● Techno-pedagogical knowledge (innovating in feedback and assessment processes) theoretical dimension in point 2.3 relates to "Assessment practices" in DigCompOrg and to area G in SELFIE.
● Development of the digital competence of students theoretical dimension in point 2.3 relates to a part of "Teaching and learning practices" and to area H in SELFIE.
REFERENCES
Antoniou, P., Myburgh- Louw, J., & Gronn, P. (2016). School self- evaluation for school improvement: Examining
the measuring properties of the LEAD surveys. Australian Journal of Education, 60(3), 191– 210.
Attewell, P. (2001). Comment: The first and second digital divides. Sociology of Education, 74(3), 252– 259. https://
doi.org/10.2307/2673277
Balaban, I., Redjep, N. B., & Calopa, M. K. (2018). The analysis of digital maturity of schools in Croatia. International Journal of Emerging Technologies in Learning, 13(6), 4– 15. https://doi.org/10.3991/ijet.v13i06.7844
Beardsley, M., Albó, L., Aragón, P., & Hernández-Leo, D. (2021). Emergency education effects on teacher abilities and motivation to use digital technologies. British Journal of Educational Technology, 52, 1455– 1477. https://doi.org/10.1111/bjet.13101
Bernard, R. M., Abrami, P. C., Borokhovski, E., Wade, C. A., Tamim, R. M., Surkes, M. A., & Bethel, E. C. (2009).
A meta- analysis of three types of interaction treatments in distance education. Review of Educational
Research, 79(3), 1243– 1289. https://doi.org/10.3102/0034654309333844
Bonoli, G., & Liechti, F. (2018). Good intentions and Matthew effects: Access biases in participation in active
labour market policies. Journal of European Public Policy, 25(6), 894– 911. https://doi.org/10.1080/13501763.2017.1401105
Brown, T. A. (2015). Confirmatory factor analysis for applied research. Guilford Publications.
Caputo, A., & Rastelli, V. (2014). School improvement plans and student achievement: Preliminary evidence from
the Quality and Merit Project in Italy. Improving Schools, 17(1), 72– 98. https://doi.org/10.1177/1365480213515800
Carretero, S., Vuorikari, R., & Punie, Y. (2017). DigComp 2.1: The Digital Competence Framework for Citizens with
eight proficiency levels and examples of use. Publications Office of the European Union.
Castaño Muñoz, J., Weikert Garcia, L., & Herrero Ramila, C. (2021). Analysing the digital capacity of Spanish
schools using SELFIE. Publications Office of the European Union.
Castaño- Muñoz, J., Kreijns, K., Kalz, M., & Punie, Y. (2017). Does digital competence and occupational set-
ting influence MOOC participation? Evidence from a cross- course survey. Journal of Computing in Higher
Education, 29, 28– 46. https://doi.org/10.1007/s12528-016-9123-z
Chang, I.- H. (2012). The effect of principals' technological leadership on teachers' technological literacy and teach-
ing effectiveness in Taiwanese elementary schools. Education Technology and Society, 15(2), 328 – 340.
Chapman, C., & Sammons, P. (2013). School self- evaluation for school improvement: What works and why?. CfBT
Education Trust.
Cheng, L., Ritzhaupt, A. D., & Antonenko, P. (2018). Effects of the flipped classroom instructional strategy on stu-
dents' learning outcomes: A meta- analysis. Educational Technology Research & Development, 67, 793– 824.
https://doi.org/10.1007/s11423-018-9633-7
Cosner, S. (2011). Teacher learning, instructional considerations and principal communication: Lessons from
a longitudinal study of collaborative data use by teachers. Educational Management Administration &
Leadership, 39(5), 568– 589. https://doi.org/10.1177/1741143211408453
Costa, P., Castaño- Muñoz, J., & Kampylis, P. (2021). Capturing schools' digital capacity: Psychometric analy-
ses of the SELFIE self-reflection tool. Computers & Education, 162, 104080. https://doi.org/10.1016/j.compedu.2020.104080
Croxford, L., Grek, S., & Shaik, F. J. (2009). Quality assurance and evaluation (QAE) in Scotland: Promoting
self-evaluation within and beyond the country. Journal of Education Policy, 24(2), 179– 193. https://doi.org/10.1080/02680930902734095
de Koning, B. B., Hoogerheide, V., & Boucheix, J.- M. (2018). Developments and trends in learning with instruc-
tional video. Computers in Human Behavior, 89, 395– 398. https://doi.org/10.1016/j.chb.2018.08.055
Devos, G., & Verhoeven, J. E. (2003). School self- evaluation: Conditions and caveats: The case of second-
ary schools. Educational Management Administration & Leadership, 31(4), 403– 420. https://doi.org/10.1177/0263211X030314005
Escueta, M., Quan, V., Nickow, A. J., & Oreopoulos, P. (2017). Education technology: An evidence- based review.
Retrieved 12 September 2019, from http://www.nber.org/papers/w23744
European Commission. (2020). Supporting school self- evaluation and development through quality assurance
policies: Key considerations for policy makers. Report by ET2020 Working Group Schools. Retrieved
20 December 2020, from https://euagenda.eu/upload/publications/supporting-school-self-evaluation-and-development.pdf
Faddar, J., Vanhoof, J., & De Maeyer, S. (2017). Instruments for school self- evaluation: Lost in translation? A
study on respondents' cognitive processing. Educational Assessment, Evaluation and Accountability, 29,
397– 420. https://doi.org/10.1007/s11092-017-9270-4
Faddar, J., Vanhoof, J., & De Maeyer, S. (2018). School self- evaluation: Self- perception or self- deception? The
impact of motivation and socially desirable responding on self-evaluation results. School Effectiveness and School Improvement, 29(4), 660– 678. https://doi.org/10.1080/09243453.2018.1504802
Fernández- Batanero, J. M., Montenegro- Rueda, M., Fernández- Cerero, J., & García- Martínez, I. (2020).
Digital competences for teacher professional development. Systematic review. European Journal of
Teacher Education. Retrieved 20 December 2020, from https://doi.org/10.1080/02619768.2020.1827389
Fraillon, J., Ainley, J., Schulz, W., Friedman, T., & Duckworth, D. (2019). Preparing for life in a digital world— IEA
International Computer and Information Literacy Study 2018 International Report. Retrieved 09 November
2020, from https://bit.ly/2rWbGM9
Fullan, M., & Watson, N. (2000). School- based management: Reconceptualising to improve learning outcomes.
School Effectiveness and School Improvement, 11(4), 453– 473.
Fyfe, E. R. (2016). Providing feedback on computer- based algebra homework in middle- school classrooms.
Computers in Human Behavior, 63, 568– 574. https://doi.org/10.1016/j.chb.2016.05.082
Grek, S., Lawn, M., Ozga, J., & Segerholm, C. (2013). Governing by inspection? European inspectorates and
the creation of a European education policy space. Comparative Education, 49(4), 486– 502. https://doi.org/10.1080/03050068.2013.787697
Hall, C., & Noyes, A. (2009). New regimes of truth: The impact of performative school self- evaluation systems on
teachers' professional identities. Teaching and Teacher Education, 25(6), 850– 856. https://doi.org/10.1016/j.
tate.2009.01.008
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81– 112. https://doi.org/10.3102/003465430298487
Heppen, J. B., Walters, K., Clements, M., Faria, A . M., Tobey, C., Sorensen, N., & Culp, K. (2012). Access to Algebra
I: The effects of online mathematics for grade 8 students. NCEE 2012- 4021. U.S. Department of Education,
Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance.
Hofman, R., Dijkstra, N. J., & Hofman, V. H. A. (2009). School self-evaluation and student achievement. School Effectiveness and School Improvement, 20(1), 47– 68. https://doi.org/10.1080/09243450802664115
Hohlfeld, T. N., Ritzhaupt, A. D., Barron, A., & Kemker, K. (2008). Examining the digital divide in K-12 public schools: Four-year trends for supporting ICT literacy in Florida. Computers & Education, 51(4), 1648– 1663. https://doi.org/10.1016/j.compedu.2008.04.002
Høyrup, S. (2004). Reflection as a core process in organisational learning. Journal of Workplace Learning, 16(8), 442–454. https://doi.org/10.1108/13665620410566414
Kampylis, P., Devine, J., Punie, Y., & Newman, T. (2016). Supporting schools to go digital: From a conceptual model towards the design of a self-assessment tool for digital-age learning. In L. Gómez Chova, A. López Martínez, & I. Candel Torres (Eds.), Proceedings of the 9th International Conference of Education, Research and Innovation (pp. 816–825). IATED Academy.
Kampylis, P., Punie, Y., & Devine, J. (2015). Promoting effective digital-age learning—A European framework for digitally-competent educational organisations. Publications Office of the European Union.
Kapsalis, G., Ferrari, A., Punie, Y., Conrads, J., Collado, A., Hotulainen, R., Rama, I., Nyman, L., Oinas, S., & Ilsley, P. (2019). Evidence of innovative assessment: Literature review and case studies. Publications Office of the European Union. https://doi.org/10.2760/552774, JRC118113
Kerckhoff, A., & Glennie, E. (1999). The Matthew effect in American education. JAI Press.
Kong, S. C. (2019). Partnership among schools in e-learning implementation: Implications on elements for sustainable development. Educational Technology & Society, 22(1), 28–43.
Kozma, R. B. (2008). Technology, innovation and educational change. A global perspective. International Society
for Technology in Education.
Lachner, A., Backfisch, I., & Stürmer, K. (2019). A test-based approach of modeling and measuring technological pedagogical knowledge. Computers & Education, 142. https://www.sciencedirect.com/science/article/pii/S0360131519301988
Littlejohn, A., Hood, N., Milligan, C., & Mustain, P. (2016). Learning in MOOCs: Motivations and self-regulated learning in MOOCs. The Internet and Higher Education, 29, 40–48. https://doi.org/10.1016/j.iheduc.2015.12.003
Loeb, S., Kalogrides, D., & Horng, E. (2010). Principal preferences and the uneven distribution of principals across schools. Educational Evaluation and Policy Analysis, 32(2), 205–229. https://doi.org/10.3102/0162373710369833
Marsh, J., McCombs, J. S., & Martorell, F. (2010). How instructional coaches support data-driven decision making: Policy implementation and effects in Florida middle schools. Educational Policy, 24(6), 872–907. https://doi.org/10.1177/0895904809341467
McLaren, B. M., van Gog, T., Ganoe, C., Karabinos, M., & Yaron, D. (2016). The efficiency of worked examples compared to erroneous examples, tutored problem solving, and problem solving in computer-based learning environments. Computers in Human Behavior, 55, 87–99. https://doi.org/10.1016/j.chb.2015.08.038
Meneses, J., Fabregues, S., Rodríguez-Gomez, I., & Ion, G. (2012). Internet in teachers' professional practice outside the classroom: Examining supportive and management uses in primary and secondary schools. Computers & Education, 59(3), 915–924. https://doi.org/10.1016/j.compedu.2012.04.011
Mingo, I., & Bracciale, R. (2016). The Matthew effect in the Italian digital context: The progressive marginalisation of the “poor”. Social Indicators Research, 135, 629–659. https://doi.org/10.1007/s11205-016-1511-2
Moreno, R., & Mayer, R. (2007). Interactive multimodal learning environments. Educational Psychology Review, 19(3), 309–326. https://doi.org/10.1007/s10648-007-9047-2
Nevo, D. (2001). School evaluation: Internal or external? Studies in Educational Evaluation, 27, 95–106. https://doi.org/10.1016/S0191-491X(01)00016-5
Ottestad, G. (2013). School leadership for ICT and teachers' use of digital tools. Nordic Journal of Digital Literacy, 8(1), 107–125.
Pavolini, E., & van Lancker, W. (2018). The Matthew effect in childcare use: A matter of policies or preferences? Journal of European Public Policy, 25(6), 878–893. https://doi.org/10.1080/13501763.2017.1401108
Perc, M. (2014). The Matthew effect in empirical data. Journal of the Royal Society Interface, 11, 1–15. https://doi.org/10.1098/rsif.2014.0378
Pisoni, D. (2018). Between idealism and pragmatism: Social policies and Matthew effect in vocational education and training for disadvantaged youth in Switzerland. Social Inclusion, 6(3), 289–300. https://doi.org/10.17645/si.v6i3.1515
Reich, J. (2020). Failure to disrupt: Why technology alone can't transform education. Harvard University Press.
Rigney, D. (2010). The Matthew effect: How advantage begets further advantage. Columbia University Press. https://doi.org/10.7312/rign14948
Rosseel, Y. (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48(2), 1–36.
Schildkamp, K., & Visscher, A. (2009). Factors influencing the utilisation of a school self-evaluation instrument. Studies in Educational Evaluation, 35(4), 150–159. https://doi.org/10.1016/j.stueduc.2009.12.001
Surma, T., & Kirschner, P. (2020). Technology enhanced distance learning should not forget how learning happens. Computers in Human Behavior, 110, 106390. https://doi.org/10.1016/j.chb.2020.106390
Tondeur, J., Valcke, M., & van Braak, J. (2012). A multidimensional approach to determinants of computer use in primary education: Teacher and school characteristics. Journal of Computer Assisted Learning, 24, 494–506.
van Alten, D., Phielix, C., Janssen, J., & Kester, L. (2019). Effects of flipping the classroom on learning outcomes and satisfaction: A meta-analysis. Educational Research Review, 28, 1–18. https://doi.org/10.1016/j.edurev.2019.05.003
van Dijk, J., & van Deursen, A. (Eds.) (2014). Solutions: Learning digital skills. In Digital skills: Unlocking the information society (pp. 113–138). Palgrave Macmillan.
Vanderlinde, R., & van Braak, J. (2010). The e-capacity of primary schools: Development of a conceptual model and scale construction from a school improvement perspective. Computers & Education, 55, 541–553. https://doi.org/10.1016/j.compedu.2010.02.016
Vanhoof, J., & Van Petegem, P. (2011). Designing and evaluating the process of school self-evaluations. Improving Schools, 14(2), 200–212.
Vanhoof, J., Van Petegem, P., & De Maeyer, S. (2009). Attitudes towards school self-evaluation. Studies in Educational Evaluation, 35(1), 21–28. https://doi.org/10.1016/j.stueduc.2009.01.004
Vanhoof, J., Van Petegem, P., Verhoeven, J. C., & Buvens, I. (2009). Linking the policymaking capacities of schools and the quality of school self-evaluations: The view of school leaders. Educational Management Administration and Leadership, 37(5), 667–686.
Vázquez, M. I., & Gairín, J. (2014). Institutional self-evaluation and change in educational centres. International Journal of Leadership in Education, 17(3), 327–352. https://doi.org/10.1080/13603124.2013.812747
Voogt, J., Knezek, G., Cox, M., Knezek, D., & ten Brummelhuis, A. (2011). Under which conditions does ICT have a positive effect on teaching and learning? A call to action. Journal of Computer Assisted Learning, 29(1), 4–14. https://doi.org/10.1111/j.1365-2729.2011.00453.x
Walberg, H. J., & Tsai, S.-L. (1983). “Matthew” effects in education. American Educational Research Journal, 20(3), 359–373.
Waycott, J., Bennett, S., Kennedy, G., Dalgarno, B., & Gray, K. (2010). Digital divides? Student and staff perceptions of information and communication technologies. Computers & Education, 54, 1202–1211. https://doi.org/10.1016/j.compedu.2009.11.006
Williamson, B. (2016). Digital education governance: Data visualization, predictive analytics, and “real-time” policy instruments. Journal of Education Policy, 31(2), 123–141. https://doi.org/10.1080/02680939.2015.1035758
Yang, T. C., Cheng, M. C., & Chen, Y. (2018). The influences of self-regulated learning support and prior knowledge on improving learning performance. Computers & Education, 126, 37–52. https://doi.org/10.1016/j.compedu.2018.06.025
SUPPORTING INFORMATION
Additional supporting information may be found in the online version of the article at the
publisher’s website.
How to cite this article: Castaño Muñoz, J., Pokropek, A., & Weikert García, L. (2022). For to all those who have, will more be given? Evidence from the adoption of the SELFIE tool for the digital capacity of schools in Spain. British Journal of Educational Technology, 53, 1937–1955. https://doi.org/10.1111/bjet.13222