Media Literacy Outcomes, Measurement
GRZEGORZ PTASZEK
AGH University of Science and Technology, Poland
A key aspect of measuring media literacy outcomes is to define the measurement area, that is, what media literacy means. In recent years, the concept has undergone a transformation under the influence of cultural, technological, and social changes. In the literature, media literacy is understood differently, depending on who defines it and for what purpose (Arke & Primack, 2009; Calvani, Cartelli, Fini, & Ranieri, 2008; Hobbs & Frost, 2003; Rosenbaum, 2007). Besides the concept of media literacy, equivalent terms are often used, such as media and information literacy, digital literacy, digital and media literacy, new media literacy, digital competencies, media literacy education, and so on. Judith Rosenbaum (2007, p. 6) notes that the lack of unanimity among researchers as to what media literacy is makes it difficult to determine what it means to be media literate. Similarly, according to Monica Bulger (2012, p. 84), this ambiguity of definition is a challenge to identifying the measurable dimensions of media literacy.
Many researchers underline that measuring media literacy is currently an overwhelming challenge (Arke & Primack, 2009; Martens, 2010; Schilder, Lockee, & Saxon, 2016). For a long time, it has been the effectiveness of educational activities that has been measured rather than media literacy outcomes (Literat, 2014). Schilder et al. (2016, pp. 33–34), having reviewed approaches to the measurement of media literacy in the literature, distinguished three different types. The first is so-called occasional assessment, which measures separate constructs or outcomes that implicitly refer to media literacy knowledge, skills, and attitudes, for example willingness to use aggression, risk factors for eating disorders, or children's food and vegetable intake. The second applies to the measurement of selected components of media literacy, such as perception of bias or perceived realism. The third type, in turn, is a holistic measurement of media literacy outcomes in relation to a specified medium (advertisements, television, social media, etc.).
Different approaches to understanding media literacy and to its measurement (and how they relate to the various measurement methods and tools used by researchers) introduce certain difficulties to media literacy as a field of communication. These difficulties are, among other things, associated with the fact that research outcomes cannot be compared with one another and that the tools designed cannot be reused for other measurements. It is thus no wonder that in the study conducted by Schilder et al. (2016, p. 38) as many as 46% of the 133 respondents (media literacy researchers) agreed with the statement that media literacy outcomes are not explicitly defined, so it is not clear what should be assessed. For that matter, the issues related to the measurement of media literacy outcomes require serious scientific reflection as well as such measurement
process process steps as selecting a specified model of media literacy competencies, distinguishing its dimensions (components), and defining performance indicators that allow for proper planning and the design of a good measurement tool.
Media literacy frameworks
The measurement of media literacy outcomes involves the implementation of a defined media literacy framework. In the literature, one can encounter various models of media literacy that, depending on how they are defined, consist of different elements. Researchers thus distinguish three (Buckingham, 2005; Celot, 2009; UNESCO, 2013), four (Calvani et al., 2008), five (Arke & Primack, 2009; Hallaq, 2016; Hobbs, 2010), seven (Potter, 2014), or 11 (Jenkins, Clinton, Purushotma, Robison, & Weigel, 2006) key components or core skills of media literacy (see Table 1). While comparing various models of media literacy, however, one notices the following similarities.
• A major component of media literacy is critical thinking about a message (information or text) and its evaluation.
• Technical skills play a vital role too.
• Skills related to media production and creation are also relevant.
• The social functioning of an individual, including communication competencies and cooperation using new media and information and communication technologies, is an area of new competencies strictly related to interactivity and media convergence.
Table 1  Key components/core skills of media literacy according to different researchers.

Three: Access, understand, create (Buckingham, 2005); access, evaluation, creation (UNESCO, 2013); use, critical understanding, communicative (Celot, 2009).
Four: Technological, cognitive, ethical, and integrated (Calvani et al., 2008).
Five: Recall, purpose, viewpoint, technique, evaluation (Arke & Primack, 2009); access, analyze & evaluate, create, reflect, act (Hobbs, 2010); media awareness, media access, ethical awareness, media evaluation, media production (Hallaq, 2016).
Seven: Analysis, evaluation, grouping, induction, deduction, synthesis, abstraction (Potter, 2014).
11: Play, performance, simulation, appropriation, multitasking, distributed cognition, collective intelligence, judgment, transmedia navigation, networking, negotiation (Jenkins et al., 2006).
Media literacy researchers agree that media literacy is a multidimensional construct, because it includes, for example, cognitive, emotional, aesthetic, and moral dimensions (Bulger & Livingstone, 2013; Calvani et al., 2008; Potter, 2014). Such an understanding of media literacy enables the observation of the different factors that influence competency levels.
Distinguishing the detailed competencies and their performance criteria within each of the separate areas of media literacy is an extremely complex and problematic issue. Such an operation implies resolving a priori which competencies in a given area are recognized as relevant for an individual's functioning in the media environment. Lists of such detailed competencies were elaborated independently by two groups of experts: from UNESCO (2013) and from the European Union (Celot, 2009). The UNESCO experts, within three components of media literacy, (i) access and retrieval, (ii) understanding and evaluation, and (iii) creation and sharing, distinguished 12 major media competencies. A few performance indicators were then formulated for each competency; a total of 113 performance criteria were identified for all competencies (UNESCO, 2013, pp. 129–136). The experts of the European Commission (Celot, 2009), within three components of media literacy, (i) use, (ii) critical understanding, and (iii) communicative abilities, distinguished three competencies for each component, a total of nine media competencies, for which 36 performance criteria were formulated. As one can see, in each case the number of competencies and criteria varies with the degree of detail of their description. This, in turn, shows that distinguishing universal competencies of media literacy (those that can be measured in different sociocultural contexts) is problematic.
Proficiency levels
To measure media literacy, it is worth keeping proficiency levels in mind. Media literacy as a set of competencies is a process. This follows from the assumption that media literacy is a peculiar continuum that changes together with our experience and allows us to master our competencies (Potter, 2014). One cannot say that media literacy is something stable and unchangeable. Because the dynamics of the media environment are constantly changing, we must continually reassess our media proficiency. Distinguishing competency levels allows us to observe such change. Treating media literacy as a process entails some limitations related to measurement, because we always measure at a particular moment in time. Moreover, some competencies cannot be measured by commonly used questionnaire surveys, because they reveal themselves only at the moment of performing certain tasks, for example competencies such as creation or cooperation with others.
Many works (Celot, 2009; Literat, 2014; Potter, 2014; UNESCO, 2013) distinguish three levels of media literacy: basic (or minimum), intermediate (or medium), and advanced (or maximum). These levels require proper description, which is not an easy task. It requires not only knowledge of an individual's developmental opportunities (particular levels should correspond to the character of cognitive, emotional, moral, and social development) but also, if we want to adopt the levels in curricula, the provision
of teaching content at every stage of education. The increase of media literacy should proceed spirally; first, basic competencies must be mastered in order to master the next ones. Besides, media proficiency should be the result of an individual's functioning within the different components of media literacy. The description of media literacy proficiency proposed by UNESCO is presented in Table 2.
Methods and tools of measurement
The proper selection of research methods and measurement tools plays a vital role in the process of measuring media literacy. Quantitative or qualitative measures, as well as various tools, can be applied, for example self-assessment questionnaires, simulation or performance (task) tests, or in-depth interviews.
A vast majority of the measurement surveys carried out so far have a quantitative character and apply to selected areas of media literacy (e.g., critical assessment of media content, involvement, searching for and assessing the credibility of information, persuasion, etc.) as well as to selected media (television, radio, social media, the Internet) or forms within those media (e.g., advertisements, news). Self-assessment questionnaires are typically used, in which a respondent either rates specified statements on a multipoint Likert scale or chooses an answer from a list provided. These are principally paper-and-pencil questionnaires (online ones are less frequent), and they include from several to several dozen statements. Despite acceptable psychometric indicators, it is questionable whether they really measure media literacy, due to their declarative nature. The measurement is therefore not fully objective: a respondent does not perform any practical tasks that are assessed. This type of research is also much more encumbered by errors resulting from respondents attempting to satisfy the expectations of the researcher. Sometimes open questions are also included in questionnaires, but these require, at the tool design stage, the involvement of competent judges in the research process and the elaboration of precise criteria for assessing answers.
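To make the mechanics of such Likert-based self-assessment instruments concrete, the following sketch aggregates item responses into a composite score and estimates internal consistency with Cronbach's alpha. It is a minimal illustration in Python; the number of items, the 0–5 scale, and the response data are invented for demonstration and are not taken from any of the instruments discussed here.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Estimate internal consistency for a set of Likert items.

    item_scores: 2D array, rows = respondents, columns = items.
    """
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    n_items = item_scores.shape[1]
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses of five respondents to four statements rated 0-5
# (0 = "strongly disagree", 5 = "strongly agree").
responses = np.array([
    [4, 5, 4, 3],
    [2, 1, 2, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 0],
])

# One self-assessed composite score per respondent (mean of the item ratings).
composite = responses.mean(axis=1)
print("Composite scores:", composite)
print("Cronbach's alpha:", round(cronbach_alpha(responses), 2))
```

Whatever the exact scoring convention, the point remains that such scores reflect what respondents say about themselves rather than what they can actually do.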
The majority of research concerning the measurement of media literacy has been carried out on small samples and applies to selected aspects of competency, which in turn makes the whole picture incomplete. Such research is principally conducted among children and youths, and least frequently among adults, including students. This concentration of researchers on measuring media literacy outcomes among children and youths results from the presence of media literacy (or media education) in the curricula of primary and high schools in many countries. Only recently, in relation to the idea of lifelong learning promoted by UNESCO, has the need for measuring the media competencies of adults, including students (Hallaq, 2016), become evident. According to Monica Bulger (2012), a comprehensive measurement of media literacy carried out in a single study is neither recommended nor possible. She suggests a modular approach to measuring media literacy. In order to obtain credible results, the measurements should be carried out in a modular way, focus on core issues of media literacy that are regularly measured, or proceed by "developing a rotating portion of the survey that focused on specific components of media literacy and could be flexible to adapt to new findings or priorities" (p. 95). Such an approach enables the measurement
of another dimension of media literacy within a series of planned studies. A drawback of the modular approach, however, is that the measurement stretches over a long period of time and involves high costs.

Table 2  Media and information literacy framework of proficiency levels, according to UNESCO.

Description
Basic level: A respondent has a basic level of knowledge, training, or experience on MIL, but significant improvements are needed for effective application. It enables the individual to:
Intermediate level: A respondent has a good level of knowledge and skills acquired from practice and training on MIL, but there are gaps in certain areas. It enables the individual to:
Advanced level: A respondent has a very good level of knowledge and skills acquired from practice and training on MIL. It enables the individual to:

Access
Basic level: Recognize his or her information and media (content) need, identify and save information and media content from easily located and accessed information sources using basic tools.
Intermediate level: Specify the nature, role, and scope of his or her information and media (content) need, in order to locate and select from various and potentially conflicting information sources and providers of information and media content using various tools, storing it and applying key legal and ethical principles.
Advanced level: Formulate his or her information and media (content) needs into concrete strategies and plans to search for and access information from diverse sources using relevant and where necessary diverse tools in a systematic, explicit, and efficient manner, and retrieve existing information for further utilization.

Evaluation
Basic level: Select information sources without clear assessment criteria, and with limited application and awareness of major principles, conditions, and functions of media and information providers in society as well as authentication of information and media content.
Intermediate level: Analyze and differentiate the quality of and evidence for relevant information sources and content, understanding the necessity of media and information providers and their implications for society, being unable to recognize different viewpoints; as well as store selected information and media content for further application.
Advanced level: Within the contexts and multiple conditions applicable, interpret, compare, critically evaluate, authenticate, and hold synthesized information and media content, appreciating the work of author(s) and media and information providers within the context of sustainable development of society, organization, or community.

Creation
Basic level: Organize and save retrieved information without substantive synthesis using basic tools and distribute without critical appraisal or ethical and legal considerations for limited application.
Intermediate level: Create, produce, and communicate new information and media content in new formats using appropriate channels and tools for well-defined applications as well as engaging in a dialogue with others with limited awareness of ethical and legal implications.
Advanced level: Combine information and media content for the creation and production of new knowledge, considering sociocultural aspects of the target audiences, and then communicate and distribute it in various appropriate formats and tools for multiple applications in a participatory, legal, ethical, and efficient manner, as well as monitor the influence and impact made.

Source: UNESCO (2013, p. 60).
Within the past decade, interest in the measurement of media literacy outcomes has gradually increased, the result of a discussion about measurement among the researchers involved in this area. The issues of media literacy measurement were present during panels at the First and Second European Media and Information Literacy Forum (Paris, 2012, and Riga, 2016). So far, only general declarations have been formulated on the relevance of media literacy measurement for various groups of recipients from the point of view of designing media literacy policy. Neither recommendations nor guidance in this area have yet been presented.
One of the first tools designed for the measurement of media competencies was a qualitative research questionnaire created by the Australians Robyn Quin and Barrie McMahon (1993). The questionnaire was composed of two tests: the Media Language Test, which examined skill in analyzing three advertising messages, and the Media Narrative Test, which examined analysis of an introductory segment from a television situation comedy. The questionnaire was administered to a representative group of 1425 15-year-old students in schools in Western Australia. The students were provided with a 12-minute film excerpt, which was shown again after they had seen the questions. Both tests covered to some degree language, narrative, production/circulation, audience, and
values, with the organizers varying the emphasis in each test. A paper-and-pencil tool was used to examine students' understanding of the media as part of the schools' English language curriculum.
Renee Hobbs and Richard Frost (2003) used procedures and instruments similar to those of Quin and McMahon to examine skill in analyzing three media messages (a print news magazine article, an audio news commentary, and television news) among American teenagers. The questionnaire was designed to examine the following critical thinking skills: reading comprehension, listening comprehension, viewing comprehension, writing skills, identification of construction techniques, identification of a point of view, identification of omitted information, comparison/contrast, identification of purpose, and identification of the target audience. The survey lasted a total of 90 minutes. The students, having acquainted themselves with each media text, responded to questions about the texts, answering in writing or choosing an answer from a checklist, for example, "Write a sentence or two to describe the main idea of this broadcast. Use the who, what, where, when, and how structure to explain the main ideas" (Hobbs & Frost, 2003, p. 355). The researchers created a coding protocol by first identifying the range of possible written responses to each item. The coding protocol included precise instructions, addressed to the evaluators assessing the responses, on how to award points for particular answers.
Similarly, Arke and Primack (2009) designed a tool to measure media literacy, drawing inspiration from the works of Quin and McMahon and of Hobbs and Frost. However, they did not focus, as the previous researchers did, on one skill, but on five: recall, purpose, viewpoint, technique, and evaluation. The media literacy scale they designed consisted of seven measures, each of which included an item in the form of an open question, for example, "Explain the purpose of the message," "Identify the sender of the message," or "What does the information suggest?" (p. 57). The "recall" score was based on responses to each of seven recall items. The open-ended responses were assessed on the basis of objective criteria and given a score from 0 to 5. The respondents performed the same tasks in regard to three different media: radio, television, and press. The tool designed by Arke and Primack to measure media literacy, although subjected to a standardization process by the researchers, has certain limitations. Although media literacy is a highly complex construct, it is measured by only seven items. The particular items are only slightly differentiated and de facto relate to one competency area, namely critical content analysis. This, in turn, is confirmed by a significant correlation between the composite media literacy score and composite critical thinking as measured by the California Critical Thinking Skills Test.
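The criterion-validity check mentioned above can be illustrated with a short sketch that averages rubric scores (0–5 per item) into a composite and correlates it with an external critical thinking score. The figures below are hypothetical and only stand in for the kind of data Arke and Primack analyzed; they do not reproduce their results.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical rubric scores (0-5) of six respondents on seven open-ended items.
item_scores = np.array([
    [4, 3, 5, 4, 4, 3, 4],
    [2, 2, 1, 3, 2, 2, 1],
    [5, 4, 4, 5, 5, 4, 5],
    [3, 3, 3, 2, 3, 3, 3],
    [1, 2, 2, 1, 1, 2, 1],
    [4, 4, 3, 4, 5, 4, 4],
])

# Hypothetical scores of the same respondents on an external critical thinking test.
critical_thinking = np.array([24, 15, 28, 19, 12, 25])

# Composite media literacy score per respondent and its correlation
# with the external criterion.
composite_media_literacy = item_scores.mean(axis=1)
r, p = pearsonr(composite_media_literacy, critical_thinking)
print(f"Composite scores: {composite_media_literacy}")
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```

A high correlation of this kind supports the interpretation that the composite largely captures critical content analysis rather than the broader construct of media literacy.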
A standardized instrument designed to measure media literacy regarding nonfiction television programs, defined from the perspective of the link between media and democracy, for 11- to 18-year-old students was created by Judith E. Rosenbaum (2007). The final version of the questionnaire was composed of two scales, measuring the understanding of media production and the influence of the media on its users, and included 67 questions in two parts. The first part was presented as statements about television accompanied by a four-point Likert scale from "unlikely" to "likely" (e.g., "Sometimes, documentaries use actors," "When newsreaders read the news, no other TV station employees are in the studio," or "Television news can scare people").
The second part consisted of so-called action questions, which asked respondents to do something other than checking an answer scale (e.g., "Can you name four programs or channels which only show real events" or "Below you will see sets of two pictures taken from a television news program. These pictures are also called shots. In television (and film as well), several techniques are used to create these shots. Examples of such techniques are special effects, props, and costumes. The pictures in each set look very similar. However, in each set one technique is applied differently. Look at each set, and, in the space provided below the two pictures, write down how each technique is applied differently"). In the case of this survey, the heterogeneity of the group of respondents raises some doubts: applying the same survey to students of both middle and high schools significantly limits the selection of tasks.
Another tool, known as the Media Literacy Self-Assessment Scale (MLSS), designed to measure media literacy among primary school students, was created by Taiwanese researchers (Chang et al., 2011). The questionnaire was composed of two subscales: learning with media (LWM) and media communication and ethics (MCE). The final version of the questionnaire contained nine items, each of which was evaluated by the respondent on a Likert scale of 0–5, where 0 stood for "strongly disagree" and 5 for "strongly agree." The respondents reacted to statements such as "I understand the content that is conveyed by the media," "I discuss the displayed contents of media with others," or "I am familiar with the operational functions of media equipment that is used to broadcast learning content." Despite the fact that the scale has acceptable psychometric indicators in terms of reliability, such a small number of items (only nine) does not allow reliable conclusions about respondents' skills in an area as complex as media literacy.
An extended tool to measure new media competencies with the use of a self-assessment scale was designed by Ioana Literat of Teachers College, Columbia University (2014). The researcher created her own scale based on the 12 new media competencies distinguished by Henry Jenkins et al. (2006). In this way, 12 subscales were established (equivalent to the particular competencies: play, performance, simulation, appropriation, multitasking, distributed cognition, collective intelligence, judgment, transmedia navigation, networking, negotiation, visualization). Each of them includes five statements (60 in total), equipped with good psychometric features. A respondent assesses each statement on a Likert scale of 0–5, where 0 stands for "strongly disagree" and 5 for "strongly agree." Respondents apply the scale to statements such as "I appreciate simulation games and activities like Second Life, SimCity, The Sims, FIFA, Tiger Woods PGA Tour, etc." (simulation subscale), "When I work on my computer, I like to have different applications open" (multitasking subscale), or "When I can't solve a problem or find a piece of information by myself, I use the Internet or social media to connect with others and find what I am looking for" (collective intelligence subscale). However, the tool designed by Ioana Literat has some drawbacks. First, it measures competencies that are of a relational and participatory character in a declarative way, whereas such competencies are better measured by observing respondents' real behaviors or by assessing how they perform an activity. Second, some of the statements do not necessarily refer to new media but are strongly associated with social
or cognitive competencies that an individual reveals in offline situations (e.g., "When I am faced with a problem, I usually try out a few different ways of solving it before I give up," "I feel I understand things better when I can think of them visually," or "I enjoy working with others on projects or assignments"). Although media literacy is complex, one cannot assume that an individual's functioning in offline and online environments will look the same.
Tom Hallaq (2016, p. 66), in turn, designed the Digital Online Media Literacy Assessment (DOMLA), which enables a holistic measurement of media literacy. Based on the literature, the researcher distinguished five dimensions of media literacy: media awareness (MAw), media access (MAc), ethical awareness (EA), media evaluation (ME), and media production (MP). A set of six functions, comprising commerce and finance, creative expression, education, entertainment, information, and social interaction, was also distinguished and paired with each construct. The aim was to specify the particulars of a user's interaction with online media within a consistent framework. The role of these functions was to contribute to the development of survey questions and to ensure a consistent number and scope of questions for each construct. As a result of a 12-step procedure, which included validating the constructs and functions through subject matter experts (more than 120 of them), instrument validation, calculating reliability, and revising the instrument based on reliability testing, among other steps, the final version of a questionnaire for the quantitative measurement of digital online media literacy was arrived at. The paper-and-pencil questionnaire has a self-assessment character and includes 50 statements that a respondent rates on a six-point Likert scale, where 1 stands for "strongly disagree" and 6 for "strongly agree." Some of the statements in the questionnaire raise doubts. For example, "I get most of my information from the Internet" seems too obvious today, because social research reveals that the Internet (especially social media) is a basic source of information, especially among the millennial generation. Similar statements, such as "I usually spend 12 hours or more per week on the Internet—outside of school or work" or "I am confident in my ability to use the Internet for shopping," do not say much about the media literacy of respondents.
A vast majority of current tools for measuring media literacy outcomes have a self-assessment character. However, this method does not seem equally suitable for ascertaining competencies of a performance nature, which require validation through the action and involvement of a respondent. One of the disadvantages of self-assessment questionnaires is the so-called bias phenomenon among respondents (answers congruent with the researcher's expectations). Nevertheless, the popularity of this kind of questionnaire can be explained by the simplicity of their preparation and use, even though in reality designing an accurate tool (meaning one that precisely measures what it is supposed to measure) and a reliable one (meaning one whose measurements of a particular element are consistent over time and between different participants) is time-consuming and costly.
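As a minimal illustration of the reliability notion invoked here, the sketch below estimates test-retest reliability as the correlation between two administrations of the same instrument to the same respondents. The scores and the 0.70 rule of thumb are assumptions used only for demonstration, not values drawn from any study discussed above.

```python
import numpy as np

# Hypothetical total scores of eight respondents on the same instrument,
# administered twice several weeks apart.
first_administration = np.array([42, 35, 50, 28, 45, 38, 31, 47])
second_administration = np.array([40, 33, 52, 30, 44, 36, 34, 46])

# Test-retest reliability estimated as the Pearson correlation
# between the two administrations.
r = np.corrcoef(first_administration, second_administration)[0, 1]
print(f"Test-retest reliability: r = {r:.2f}")

# A common (but not universal) rule of thumb treats r >= 0.70 as acceptable.
print("Acceptable" if r >= 0.70 else "Questionable")
```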
Methods better suited to measuring media literacy, given its dominant performance aspect, are task tests: situational (simulation) and performance tests. In simulation tests, a respondent performs tasks in the way he or she does them in real-life situations, for example saving useful website addresses in the browser (adding them to
bookmarks) or publishing a post on a public Internet forum related to an issue under discussion. Tasks of this kind are performed in applications that simulate the respondent's natural environment, which increases the probability of respondents acting in the same way as they would in real conditions.
Core principles and conditions of measurement
One of the main questions about the measurement of media literacy outcomes is whether measurement is needed at all. According to Judith Rosenbaum (2007), the benefits of conducting such measurement are twofold: first, the results provided about people's levels of media knowledge and media competencies have an impact on their media proficiency. Second, the information obtained in this way can be of a diagnostic character, and therefore particular educational activities, adjusted to the respondents' needs, can be better prepared and implemented.
Quality measurement of media literacy outcomes is an extremely complex and difficult task. It requires time, teamwork, and appropriate planning of the whole measurement process. Only such an approach guarantees the design of a tool that will measure the real level of media competencies. A multistage procedure for ascertaining the reliability and accuracy of the tool itself must also be remembered.
When measuring media literacy outcomes, it is worth keeping in mind that the designed tool should not measure the competencies in a holistic way. It is not possible to create a universal tool that measures all competencies and addresses various groups of respondents. Media literacy depends on its sociocultural and economic context, so a tool designed to measure it must take this into account. This presents a considerable limitation on making universal measurement tools (Bulger, 2012; Hallaq, 2016; Schilder et al., 2016). Besides, the concept of media literacy is too complex (not least because of its multidimensional nature), so only particular dimensions, not all of them, should be submitted for measurement. This can be done by combining methods and tools: not all dimensions must be examined by task tests; some of them can also be examined by knowledge tests. As a consequence, several groups of various tools should be created, taking into account the specifics of each examined group in the areas of interest to the researchers. One must also be aware that some aspects of media literacy are difficult to assess because they are not revealed at the moment of measurement. Therefore, sometimes more time is required in order to obtain a valid assessment.
Following international PISA (Programme for International Student Assessment), ePIRLS (online Progress in International Reading Literacy Study), and ICILS (International Computer and Information Literacy Study) research, it is worth considering the creation of an international scientific network aimed at elaborating the principles of measuring media literacy outcomes and, in the future, the creation of a standardized and culturally adapted tool enabling the measurement of selected components of media literacy.
SEE ALSO: Digital Literacy; European Perspectives on Media Literacy; Media
Competence
References
Arke, E.T., & Primack, B.A. (2009). Quantifying media literacy: Development, reliability, and validity of a new measure. Educational Media International, 46(1), 53–65. doi: 10.1080/09523980902780958
Buckingham, D. (2005). The media literacy of children and young people: Review of the research literature on behalf of Ofcom. London, England: Centre for the Study of Children, University of London. Retrieved from http://discovery.ucl.ac.uk/id/eprint/10000145
Bulger, M. (2012). Measuring media literacy in a national context: Challenges of definition, method and implementation. Media Studies, 3(6), 83–104. Retrieved from http://hrcak.srce.hr/96376
Bulger, M., & Livingstone, S. (2013). Media literacy research and policy in Europe: A review of recent, current and planned activities. Retrieved from http://www.lse.ac.uk/media@lse/documents/MPP/COST-Media-literacy-research-and-policy-in-Europe-final.pdf
Calvani, A., Cartelli, A., Fini, A., & Ranieri, M. (2008). Models and instruments for assessing digital competence at school. Journal of e-Learning and Knowledge Society, 4(3), 183–193. doi: 10.20368/1971-8829/288
Celot, P. (Ed.). (2009). Study on assessment criteria for media literacy levels: Final report. Retrieved from http://ec.europa.eu/assets/eac/culture/library/studies/literacy-criteria-report_en.pdf
Chang, C.-S., Liu, E.-Z.F., Lee, C.-Y., Chen, N.-S., Hu, D.-C., & Lin, C.-H. (2011). Developing and validating a media literacy self-evaluation scale (MLSS) for elementary school students. The Turkish Online Journal of Educational Technology, 10(2), 63–71. Retrieved from http://files.eric.ed.gov/fulltext/EJ932226.pdf
Hallaq, T. (2016). Evaluating online media literacy in higher education: Validity and reliability of the Digital Online Media Literacy Assessment (DOMLA). Journal of Media Literacy Education, 8(1), 62–84. Retrieved from http://digitalcommons.uri.edu/jmle/vol8/iss1/5
Hobbs, R. (2010). Digital and media literacy: A plan of action. Washington, DC: The Aspen Institute.
Hobbs, R., & Frost, R. (2003). Measuring the acquisition of media literacy skills. Reading Research Quarterly, 38(3), 330–355. doi: 10.1598/RRQ.38.3.2
Jenkins, H., Clinton, C., Purushotma, R., Robison, A.J., & Weigel, M. (2006). Confronting the challenges of a participatory culture: Media education for the 21st century [White paper]. The John D. and Catherine T. MacArthur Foundation. Retrieved from https://www.macfound.org/media/article_pdfs/JENKINS_WHITE_PAPER.PDF
Literat, I. (2014). Measuring new media literacies: Towards the development of a comprehensive assessment tool. Journal of Media Literacy Education, 6(1), 15–27. Retrieved from http://digitalcommons.uri.edu/jmle/vol6/iss1/2
Martens, H. (2010). Evaluating media literacy education: Concepts, theories and future directions. Journal of Media Literacy Education, 2(1), 1–22. Retrieved from https://digitalcommons.uri.edu/jmle/vol2/iss1/1
Potter, J.W. (2014). Media literacy (7th ed.). Thousand Oaks, CA: SAGE.
Quin, R., & McMahon, B. (1993). Evaluating standards in media education. Canadian Journal of Educational Communication, 22(1), 15–25. doi: 10.21432/T2F31F
Rosenbaum, J. (2007). Measuring media literacy: Youngsters, television, and democracy. Retrieved from http://repository.ubn.ru.nl/handle/2066/56353
Schilder, E.A., Lockee, B.B., & Saxon, P.D. (2016). The challenges of assessing media literacy education. Journal of Media Literacy Education, 8(1), 32–48. Retrieved from http://digitalcommons.uri.edu/jmle/vol8/iss1/3
UNESCO. (2013). Global media and information literacy assessment framework: Country readiness and competencies. Retrieved from http://unesdoc.unesco.org/images/0022/002246/224655e.pdf
Further reading
Burn, A. (2009). Process and outcomes: What to evaluate and how? In P. Verniers (Ed.), Media literacy in Europe: Controversies, challenges, and perspectives (pp. 61–69). Brussels, Belgium: EuroMeduc. Retrieved from http://www.euromeduc.eu/IMG/pdf/Euromeduc_ENG.pdf
Calvani, A., Fini, A., Ranieri, M., & Picci, P. (2012). Are young generations in secondary school digitally competent? A study on Italian teenagers. Computers & Education, 58(2), 797–807. doi: 10.1016/j.compedu.2011.10.004
Maksl, A., Ashley, S., & Craft, S. (2015). Measuring news media literacy. The Journal of Media Literacy Education, 6(3), 29–45. Retrieved from http://digitalcommons.uri.edu/jmle/vol6/iss3/3
Parola, A., & Ranieri, M. (2010). Research into media education: Issues, models, and tools. In A. Parola & M. Ranieri (Eds.), Media education in action: Research study in six European countries (pp. 63–99). Florence, Italy: Firenze University Press.
Trinchero, R. (2010). Developing and assessing media competence. In A. Parola & M. Ranieri (Eds.), Media education in action: Research study in six European countries (pp. 37–54). Florence, Italy: Firenze University Press.
Vraga, E., Tully, M., Kotcher, J.E., Smithson, A.-B., & Broeckelman-Post, M. (2015). A multi-dimensional approach to measuring news media literacy. Journal of Media Literacy Education, 7(3), 41–53. Retrieved from http://digitalcommons.uri.edu/jmle/vol7/iss3/4/
Grzegorz Ptaszek is an assistant professor at the Faculty of Humanities at AGH University of Science and Technology in Krakow, Poland. Since 2016 he has been the President of the Polish Association of Media Literacy. He is also an expert of the Ministry of National Education and the Central Examination Board. In 2016, Ptaszek prepared a report for the National Broadcasting Council titled Monitoring of Radio and TV Programs Involving Media Literacy in Polskie Radio and Telewizja Polska: The Report after Broadcasting. His most recent work is a book (in Polish) titled Media Education 3.0: Critical Understanding of Digital Media in an Era of Big Data and Algorithms.