Education 3-13
International Journal of Primary, Elementary and Early Years Education
ISSN: 0300-4279 (Print) 1475-7575 (Online) Journal homepage: http://www.tandfonline.com/loi/rett20
The alignment between the fourth grade Lebanese
science curriculum and classroom assessments
Karma El Hassan & Mariam Baassiri
To cite this article: Karma El Hassan & Mariam Baassiri (2018): The alignment between the
fourth grade Lebanese science curriculum and classroom assessments, Education 3-13, DOI:
10.1080/03004279.2018.1515236
To link to this article: https://doi.org/10.1080/03004279.2018.1515236
Published online: 27 Aug 2018.
The alignment between the fourth grade Lebanese science
curriculum and classroom assessments
Karma El Hassan and Mariam Baassiri
Department of Education, American University of Beirut (AUB), Beirut, Lebanon
ABSTRACT
The purpose of this study was to investigate the alignment between fourth-grade summative classroom assessments and the ‘Matter and Energy’ unit in the Lebanese science curriculum in terms of content and cognitive levels. Summative assessments were collected from a sample of 17 schools. The alignment between the classroom assessments and the Lebanese science curriculum was examined using three instruments: the Curriculum Coding Sheet, the Items Coding Sheet, and the Item Analysis Sheet. Test items were matched to the curricular learning objectives in terms of content. Findings showed moderate content alignment; further analysis revealed that public schools had higher content alignment and covered a wider range of topics. In terms of cognitive level, alignment between the collected assessments and the curriculum was low; however, some items were at a higher cognitive level than the curriculum required.
Recommendations for curriculum developers and future research are
discussed in light of these findings.
ARTICLE HISTORY: Received 2 February 2018; Accepted 17 August 2018
KEYWORDS: Alignment; classroom assessment; content validity; science curriculum; elementary
Educational assessment is an integral part of the teaching and learning process, and as it is the basis for important educational decisions, it needs to be valid, reliable, and fair. The ‘Standards for Teacher Competence in Educational Assessment of Students’ (American Federation of Teachers, National Council on Measurement in Education, and National Education Association [AFT, NCME, and NEA] 1990) state that teachers need basic assessment competencies, particularly proficiency in developing assessments that are suitable for instructional decisions. In line with this ‘Standard’, developing an assessment involves several steps, from drawing up specifications that ensure accurate representation of the domain being assessed, to preparing relevant assessment items or tasks that match the intended objectives; this process of matching items closely to the intended objectives is part of curriculum-assessment alignment.
To validate the content of summative classroom assessments, studies are done to examine the
alignment between test items, test specifications (content and cognitive skills), and instruction
(Miller, Linn, and Gronlund 2011). If these components are in agreement, it means that students
are taught what is specified in the curriculum and that tests are assessing what is taught. A basic
requirement for the content validity of results of classroom assessments is that the test items and
the curriculum’s learning objectives should be aligned (Nitko and Brookhart 2011). Alignment is
also needed to produce accurate measures of student achievement of standards, and to make fair
decisions.
© 2018 ASPE. CONTACT Karma El Hassan kelhasan@aub.edu.lb

To assure alignment of assessments to the curricular learning objectives, the literature cites three traditional methods: (a) sequential development, (b) expert review, and (c) document analysis (Webb 1997). Researchers have developed more sophisticated models of analysis based on
these methods; the three commonly used models are the Webb, the Achieve, and the Surveys of Enacted Curriculum (SEC) models. The Webb and Achieve models focus on comparing the content of the learning objectives to the assessment, while the SEC model includes the instructional dimension as well. What differentiates these three models is the level of analysis used for evaluation of alignment. A preferred model for determining the degree of alignment has not yet been agreed upon, since different models have different foci and each has its strengths and limitations in specific situations (Martone and Sireci 2009). Nonetheless, all reviewed methodologies start with the basic evaluation of the alignment between curriculum and assessment items based on two criteria: content and cognitive levels (Martone and Sireci 2009).
In this study, the Lebanese science curriculum is used to investigate whether classroom assessments used in Lebanese schools are aligned with the curricular learning objectives. The fourth-grade Lebanese science curriculum includes content, learning objectives, activities and materials, and remarks, and is divided into five themes: (a) Plants and their habitat; (b) Animals and their habitat; (c) Man and his health; (d) Matter and Energy; and (e) Earth and the Universe.
The curriculum is set by the Ministry of Education and Higher Education; however, most of the textbooks schools follow are commercially produced to meet the curriculum specifications (Marlow-Ferguson 2002). Public and private schools were free to choose their textbooks until the 1970s, when the government adopted the books created by the Center for Educational Research & Development (CERD); since then, only public schools have been required to use them, while private schools can still choose their own textbooks in all subjects except civics, where they are required to use the CERD textbooks (Marlow-Ferguson 2002).
A search of the literature for alignment studies located only studies done in the USA, at the national level, comparing national standards to national examinations at the middle and secondary levels in chemistry and mathematics (Resnick et al. 2004; Martone and Sireci 2009; Webb 1999, 2002). No alignment studies using common alignment approaches at the classroom level were found. With respect to Lebanon, only three alignment studies have been done (Hajo 2010; Osta 2007; Sleiman 2012).
The main goal of the present study is to investigate the alignment between the Lebanese science
curriculum and classroom assessments in terms of content and cognitive levels in schools that follow
the Lebanese curriculum at the elementary level since it is a basic requirement for the content validity
of results. This study tackles the scarcity of information available about classroom assessments in
Lebanon and their alignment with the curriculum. It will provide other education researchers with
a method to investigate the alignment of their assessments as well. This study will also bridge the
gap in the literature regarding alignment using classroom assessments since the studies found
used state or national-level assessments at the middle and secondary school levels (Martone and Sireci 2009; Osta 2007; Resnick et al. 2004; Hajo 2010; Sleiman 2012; Webb 1999, 2002). Furthermore, the findings of the study can help both teachers and curriculum developers. For teachers, the results will show which content and cognitive demands are currently emphasised in classroom assessments and, accordingly, identify the gaps that need to be addressed. For curriculum developers, the results might inform curricular change, as they reveal the learning objectives that need to be revisited or rewritten to facilitate better curriculum-assessment alignment.
The study aims to answer the following research questions:
(1) To what extent do classroom assessments measure the content of the fourth-grade Lebanese
science curriculum?
(2) How are the assessment items in classroom assessments distributed among the content topics in the curriculum?
(3) To what extent are the cognitive levels of items in classroom assessments aligned with the cog-
nitive levels required by the learning objectives, as stated in the fourth-grade Lebanese science
curriculum?
Methodology
Research design
The research design for this study is descriptive and involves the analysis of artifacts consisting of the fourth-grade Lebanese science curriculum and a sample of summative science classroom assessments. The analysis was done by the researcher and then reviewed by two experts in the science education field.
The artifacts were analyzed both qualitatively and quantitatively. The curriculum and assessments were analyzed in terms of their content, the objectives covered, and the cognitive levels they measure. The study consists of three phases. In Phase I, the curricular learning objectives were coded according to their content and assigned a cognitive level in consultation with the reviewers. In Phase II, the assessments were coded: each item was matched to a content topic and a learning objective and assigned a cognitive level. In Phase III, the alignment between the assessments and the curriculum was analyzed and reported. Each phase is elaborated next.
Phase I
In Phase I of the study, coding of the unit ‘Matter and Energy’ from the fourth-grade science curriculum was completed. The theme ‘Matter and Energy’ was chosen because it is the most comprehensive one, comprising 35% of the curriculum’s learning objectives and constituting an estimated 33% of the total allocated science time.
Instrument used
The study utilises three main data collection instruments; two were developed by the researcher,
while the third was adapted from Sleiman (2012). The first instrument, the Curriculum Coding Sheet (CCS), was used to code the learning objectives and to assign them cognitive levels. Each objective was coded by content topic, order in the curriculum, and cognitive level based on Bloom’s original taxonomy. The original taxonomy was chosen instead of the revised one because the curriculum was published three years before the revision appeared in 2001. In addition, Bloom’s original taxonomy is generally more familiar to teachers and is commonly used when creating classroom assessments.
Coding of curriculum
Each content topic and each learning objective was given a code, and afterward, based on Bloom’s taxonomy, a cognitive level was assigned to each objective. The researcher completed this task, as she is familiar with the subject matter at the elementary level: she holds a bachelor’s degree in elementary education with an emphasis on science and mathematics. In addition, to validate the results, two other reviewers reviewed her work. The reviewers were experts in science education; one has more than ten years of science teaching experience and a master’s degree in education, while the other is a chemistry teacher holding a bachelor’s degree in chemistry and a master’s degree in science education. To further ensure the reliability of the results, the researcher met with the reviewers, explained the cognitive levels that were used, and then they discussed each objective until a consensus cognitive level was determined for each one.
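The coding scheme above can be sketched as a simple data structure. This is an illustrative sketch only, not the authors' actual instrument: the record fields follow the CCS description (content topic, order in the curriculum, cognitive level from Bloom's original taxonomy), while the sample objectives and their codes are invented for demonstration.

```python
# Hypothetical representation of Curriculum Coding Sheet (CCS) entries.
# Field names and example objectives are invented for illustration.
from dataclasses import dataclass

@dataclass
class CodedObjective:
    topic: int            # content topic number within the unit
    order: int            # order of the objective in the curriculum
    objective: str        # text of the learning objective
    cognitive_level: str  # consensus level from Bloom's original taxonomy

BLOOM_LEVELS = ["Knowledge", "Comprehension", "Application",
                "Analysis", "Synthesis", "Evaluation"]

ccs = [
    CodedObjective(1, 1, "Defines matter", "Knowledge"),
    CodedObjective(4, 2, "Distinguishes mixtures from solutions", "Analysis"),
]

# Every assigned level must come from the agreed taxonomy.
assert all(o.cognitive_level in BLOOM_LEVELS for o in ccs)
```

Coding each objective as a record like this makes the later matching step (assessment item to objective, by topic and cognitive level) a straightforward lookup.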
Phase II
Phase II of the study consisted of collecting and coding summative science classroom assessments from the selected sample of schools.
Sampling of schools and assessments
The population was all public and private schools in Beirut that have elementary classes, follow the Lebanese science curriculum at fourth grade, and use English as the language of science instruction. English was chosen over Arabic or French because of the investigator’s language proficiency. To select a representative sample of the population, stratified random sampling was used. Beirut is divided into three regions, and the ratio of schools among regions 1, 2, and 3 is approximately 1:7:3. Eighteen schools were randomly selected (using MS Excel) across the strata (Beirut regions), and schools within each stratum were selected in proportion to the ratio of public to private schools. Table 1 shows the distribution of schools by region and type of school. The final sample, however, comprised 17 schools because in Beirut region 1 only one of the six schools agreed to participate in the study.
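The proportional stratified selection described above can be sketched in a few lines. This is a hypothetical illustration using Python's random module in place of MS Excel; school counts per stratum follow Table 1, the per-stratum quotas (2, 11, 5) follow the text, and the further stratification by public/private school type is omitted for brevity.

```python
# Illustrative stratified random sampling of 18 schools across three
# Beirut regions (strata). School ids are placeholders.
import random

random.seed(0)  # fixed seed so the illustration is reproducible

strata = {  # region: (list of school ids, number to select)
    1: (list(range(1, 7)), 2),    # 6 schools in region 1
    2: (list(range(1, 45)), 11),  # 44 schools in region 2
    3: (list(range(1, 16)), 5),   # 15 schools in region 3
}

# random.sample draws k distinct schools from each stratum.
sample = {region: random.sample(schools, k)
          for region, (schools, k) in strata.items()}

total_selected = sum(len(s) for s in sample.values())
print(total_selected)  # 18 schools before attrition
```

In the study, one selected region-1 school declined, reducing the realised sample from 18 to 17.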
After the schools were selected, all summative science assessments covering the unit ‘Matter and Energy’ were requested from all Grade 4 classroom teachers for analysis. From the 17 schools, 42 assessments were collected in total. The number of assessments collected per school ranged from 1 to 5, with five schools providing only one assessment and one school providing five.
Instrument used
The second instrument used in the study was the Items Coding Sheet (ICS). The ICS was used to match each item of the classroom assessments to a curricular content topic and to assign it a cognitive level from Bloom’s taxonomy.
Coding of assessments
Prior to the coding, the researcher and the reviewers jointly matched a sample of five to seven assessment items to learning objectives and cognitive levels. After analyzing each sampled test item, the reviewers and the researcher reconciled any discrepancies in categorisation and reached consensus. The researcher then coded the rest of the assessment items independently, with the reviewers randomly checking the work being done.
Phase III
In Phase III, the assessment items were compared to the curriculum to determine whether they match in terms of content and cognitive level. After the researcher individually coded all assessment items from all schools, a stratified random sample of 16 items was selected from all the tests collected and independently matched, by the researcher and the reviewers, to learning objectives and cognitive levels. Items were stratified by cognitive level and content topic to ensure the sample was distributed among the topics of the unit. The researcher and the two reviewers did the matching separately. After the reviewers finished analyzing each sampled item, the percentage of agreement between the researcher and reviewers was calculated to assess inter-rater reliability. Where there was disagreement, items were discussed until consensus was reached on the appropriate content topic and cognitive level.
Table 1. Distribution of schools based on Beirut regions.

Beirut region   Public schools   Private schools   Total schools   Ratio public:private   Selected (public)   Selected (private)   Total selected (18)
1               1                5                 6               1:5                    0                   2                    2
2               14               30                44              1:2                    4                   7                    11
3               3                12                15              1:4                    1                   4                    5
Total           18               47                65                                     5                   13                   18
Instrument used
The third instrument, the Item Analysis Sheet (IAS), was adapted from Sleiman (2012) for quantitative analysis of the assessments. An IAS was filled in for every assessment, and each test item was analyzed as to the objective(s) it was matched to and the cognitive level to which it was assigned. The IAS helped clarify how the item points were distributed across the content topics and across the cognitive levels. It also indicated which parts of the curriculum were assessed, which were overlooked, and which cognitive levels were emphasised.
After completing the IAS for each assessment, total points were added and then the data were
transformed into percentages relative to the total number of points in each assessment to unify
the basis of comparison between different assessments. For every assessment, an ICS and an IAS
were filled, and percentages of assessment items that matched the content and cognitive levels
were calculated to describe the alignment.
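The point-percentage computation described above can be sketched as follows. This is a minimal illustration, assuming each coded item carries a point value plus flags for whether it matched a curricular content topic and its assigned cognitive level; the item data below are invented.

```python
# Illustrative alignment computation: matched points as a percentage of
# total points, unifying the basis of comparison across assessments of
# different lengths. Item data are hypothetical.

def alignment_percentages(items):
    """items: list of (points, content_matched, cognitive_matched)."""
    total = sum(p for p, _, _ in items)
    content = sum(p for p, c, _ in items if c)
    cognitive = sum(p for p, _, g in items if g)
    return round(100 * content / total, 1), round(100 * cognitive / total, 1)

# A hypothetical 20-point assessment:
items = [
    (5, True, True),    # matches a curricular topic and its cognitive level
    (5, True, False),   # matches the topic but not the cognitive level
    (4, True, False),
    (6, False, False),  # item outside the unit's content
]
print(alignment_percentages(items))  # (70.0, 25.0)
```

Because each assessment is normalised to its own total points, assessments of different lengths become comparable, which is the rationale given in the text.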
Results
Quantitative analysis of the alignment
The assessment items were compared to the curriculum to determine whether they match in terms of content and cognitive level. After the ICS for each assessment was completed, the number of points that successfully matched the curricular content and cognitive levels was computed as a percentage of the total number of points, and the results were aggregated across all assessments. Descriptive statistics were used: the percentages of alignment were categorised, and a frequency table was generated to describe the distribution. Tables 2 and 3 report the percentage of alignment with content and cognitive level for the whole sample and by type of school (public and private). The median was used as it is representative of the centre of the data and is not heavily influenced by outlying measurements as the mean is. In addition, the mean could not be used because the different assessments collected are not comparable, since they differ in content and length.
The results of the analysis of the 42 summative assessments collected showed that the median percentage of content alignment was 68.5%, with values ranging from 3% to 100%. As shown in Table 2, 41% of the assessments had content alignment equal to or lower than 60%, which indicates low alignment; 19% were moderately aligned, between 60% and 80%; and 40% were between 80% and 100% aligned with the content of the unit, indicating high alignment. There was wide variability among schools and within schools. For example, School X had two assessments that were perfectly aligned, while its third assessment had a content alignment of only 20% because it was more comprehensive and covered topics beyond the unit.
Table 2. Distribution of assessments and degree of content alignment.

% of content    Public schools        Private schools       All assessments
alignment       Freq.   % of total    Freq.   % of total    Freq.   % of total
1–10            0       0             3       11            3       7
11–20           0       0             2       7             2       5
21–30           0       0             4       14            4       10
31–40           1       7             2       7             3       7
41–50           1       7             1       4             2       5
51–60           0       0             3       11            3       7
61–70           2       14            3       11            5       12
71–80           3       21            0       0             3       7
81–90           2       14            4       14            6       14
91–100          5       36            6       21            11      26
Total           14      100           28      100           42      100
An incidental finding emerged when comparing the alignment of assessments between public and private schools: there were notable differences, with a median percentage of content alignment of 60% in private schools versus 80.5% in public schools. As shown in Table 2, 14% of the public schools’ assessments had content alignment equal to or lower than 60%, 35% were moderately aligned between 60% and 80%, and 50% were between 80% and 100% aligned, indicating high alignment. In private schools, 54% of the assessments had content alignment equal to or lower than 60%, 11% were moderately aligned, and 35% had high alignment.
With respect to cognitive-level alignment, the analysis showed that the median percentage of cognitive alignment of the items was 38.5%. As shown in Table 3, 90% of the assessments had cognitive alignment equal to or lower than 60%, which indicates low alignment; 4% were moderately aligned, between 60% and 80%; and only 5% showed between 80% and 100% alignment. In addition, among the assessments that were low in cognitive-level alignment, some had items at a higher cognitive level than required and others had items at a lower level. The median percentage of items at a higher cognitive level was 26%, while the median percentage of items at a lower cognitive level was 32.5%. Thus, 65% of items were at the required cognitive level or higher, which indicates a moderate level of agreement.
Reviewers’ agreement in coding
Three raters, including the researcher, independently analyzed a stratified random sample of 16 items drawn from all the tests collected and matched them to learning objectives and cognitive levels. The percentage of agreement between the raters was calculated. The overall percentage agreement for the matching of learning objectives was 100%, indicating perfect agreement in assigning items to their corresponding content topics and specific learning objectives. The overall percentage agreement for the assignment of cognitive levels was 81%, indicating high agreement between the raters.
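The percentage-agreement figure can be computed as sketched below. This is an illustrative convention (an item counts as agreement only when all three raters give the identical code) with invented rater codes, not the authors' actual data; the study itself reports 100% agreement on objectives and 81% on cognitive levels over 16 items.

```python
# Illustrative percentage-agreement calculation for three raters.
# Rater codes are hypothetical.

def percent_agreement(ratings):
    """ratings: list of per-item tuples, one code per rater."""
    agree = sum(1 for item in ratings if len(set(item)) == 1)
    return round(100 * agree / len(ratings), 1)

cognitive_codes = [
    ("Knowledge", "Knowledge", "Knowledge"),
    ("Comprehension", "Comprehension", "Application"),  # one rater differs
    ("Application", "Application", "Application"),
    ("Analysis", "Analysis", "Analysis"),
]
print(percent_agreement(cognitive_codes))  # 75.0
```

Percentage agreement is simple but does not correct for chance agreement; indices such as Cohen's or Fleiss' kappa are stricter alternatives, though the study reports raw percentage agreement.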
Distribution of items among the content topics
The distribution of items among the content topics was examined by combining the Item Analysis Sheets (IAS) into one table, which allowed the researcher to examine which content topics were given more emphasis and which were overlooked.
Results of the analysis showed that three topics (topics 3, 4, and 5 in Figure 1 below) made up the highest percentage of the test items (64%), while the other topics were only moderately or lightly assessed.
Examining content distribution between public and private schools shows that the public
schools appear to have a wider distribution of items that cover all topics, while private schools
appear to have a narrower distribution of items that cover fewer topics than public schools
(Figure 1).
Table 3. Distribution of assessments and degree of cognitive alignment.

% of cognitive alignment    Frequency of assessments    % of total number of assessments
0–10                        7                           17
11–20                       6                           14
21–30                       5                           12
31–40                       6                           14
41–50                       9                           21
51–60                       5                           12
61–70                       1                           2
71–80                       1                           2
81–90                       0                           0
91–100                      2                           5
Total                       42                          100
Distribution of items among the cognitive levels
The distribution of items across the different cognitive levels was described using percentages calculated in the IAS, aggregated across all assessments. The analysis showed that almost half of the items (46%) address the cognitive level ‘Comprehension’ and 32% address ‘Knowledge’, while only 17% address ‘Application’ and 5% address ‘Analysis’. The levels ‘Synthesis’ and ‘Evaluation’ were barely addressed, at 1% and 0% respectively.
Further analysis of the difference in cognitive-level distribution between public and private schools is illustrated in Figure 2 below. The public schools concentrated on three of Bloom’s cognitive levels: ‘Knowledge’ (51%), ‘Comprehension’ (38%), and ‘Application’ (11%). The private schools had a wider distribution across the cognitive levels, concentrating mainly on ‘Comprehension’ (49%), then ‘Knowledge’ (22%) and ‘Application’ (21%), then ‘Analysis’ (7%). Bloom’s cognitive level ‘Evaluation’ was not tackled by any of the schools.
Furthermore, the difference between the cognitive levels assigned to the curricular learning objectives and those assigned to the assessment items was analyzed; the results are illustrated in Figure 3 below. The curriculum concentrated on Bloom’s cognitive levels of ‘Knowledge’ (39%), ‘Analysis’ (22%), and ‘Comprehension’ (17%), while the assessment items concentrated on ‘Comprehension’ (46%), ‘Knowledge’ (32%), and ‘Application’ (17%). The cognitive levels ‘Synthesis’ and ‘Evaluation’ were barely addressed in both the curriculum and the assessments.

Figure 1. Mean distribution of assessment items among content topics (public vs. private schools). 1 = Definition of Matter, 2 = Properties of Matter, 3 = Measurement of Mass, 4 = Mixtures and Solutions, 5 = Magnets, 6 = The Electric Charge, 7 = Sound and its properties, 8 = Propagation of Sound, 9 = How do we Hear, 10 = Effect of noise on health.

Figure 2. Composition of the mean distribution of assessment items among Bloom’s cognitive levels between public and private schools. 1 = Knowledge, 2 = Comprehension, 3 = Application, 4 = Analysis, 5 = Synthesis, 6 = Evaluation.
Summary of results
The results of the analysis of the 42 summative assessments collected showed that the median percentage of content alignment was 68.5%, while the median percentage of cognitive-level alignment of the items was 38.5%. The latter percentage rises to about 65% if items at a higher cognitive level than required are also counted.
When analyzing the distribution of items across content topics, results showed that some topics were highly assessed while others were assessed much less. Further analysis showed that the public schools’ assessments covered a wider range of content than the private schools’ assessments.
In addition, when analyzing the distribution of items across Bloom’s cognitive levels, results showed that 78% of the items address only levels 1 and 2 of Bloom’s taxonomy: ‘Knowledge’ and ‘Comprehension’. The cognitive levels ‘Analysis’ and ‘Synthesis’ were least addressed, whereas ‘Evaluation’ was not addressed at all. Private schools covered more cognitive levels than the public schools.
Furthermore, when comparing the distribution of cognitive levels in the curriculum and in the assessments, results showed differences: the curriculum concentrated on ‘Knowledge’, ‘Analysis’, and ‘Comprehension’, while the assessment items concentrated on ‘Comprehension’, ‘Knowledge’, and ‘Application’. The cognitive level ‘Synthesis’ was least addressed (and only in assessments), whereas ‘Evaluation’ was not addressed at all by either the curriculum or the assessments.
Discussion
Classroom assessments are used as a basis for important educational decisions, and for this reason their results need to be valid and reliable. In addition, what an assessment measures should be aligned with the curriculum’s learning objectives, as this is a necessity for the validity of classroom assessment results (Nitko and Brookhart 2011). Moreover, assessments should match the content covered in the learning objectives and the cognitive skills required by those objectives (Nitko and Brookhart 2011).
Extent of content alignment
The assessment items were compared to the curriculum to determine the extent of their content alignment, and the obtained results indicate moderate alignment. There was an incidental finding when comparing the alignment of assessments between public and private schools: the median percentage of content alignment in private schools was lower than in public schools. It is important to mention that the assessments collected by the researcher were not all the same length; in other words, some assessments covered more topics than others. Some schools handed in one end-of-semester exam, while other schools do not follow such a system and instead give one exam at the end of each unit they cover; the latter exams will be more aligned than exams that also cover other topics taught during the semester. This variety in collected artifacts explains the wide range of values in content alignment.

Figure 3. Cognitive level assignment (curriculum vs. assessments). 1 = Knowledge, 2 = Comprehension, 3 = Application, 4 = Analysis, 5 = Synthesis, 6 = Evaluation.
With respect to the incidental finding of public/private differences in alignment, it was not surprising that public schools’ assessments had higher alignment than private schools’, since public schools follow the national curriculum without referring to other sources and, in addition, use the textbooks published by CERD, which are based on the national Lebanese curriculum (Marlow-Ferguson 2002; Osta 2007).
Distribution of assessment items among the content topics
Completing the IAS for each assessment and combining the data provided the distribution of the assessment items in classroom assessments across the content topics in the curriculum. Findings showed that the topics Mixtures and Water Solutions, Measurement of Mass, and Magnets received the highest percentages of test items relative to the others, while the least covered topics were Sound and some of its properties, Propagation of Sound, How do we hear, and Effect of noise on our health.
When analyzing the distribution of learning objectives in the curriculum among the content topics, there was no alignment or relationship between it and the distribution of assessment items. For example, topics 1, 7, and 8 each had the same number of learning objectives (12% each, covering 36% of the unit in total), yet only 15% of assessment items covered these topics. The only exception was the topic Mixtures and Water Solutions, which was highly assessed and also had the highest number of learning objectives (18% of objectives). Furthermore, the number of curricular learning objectives under each topic does not imply the size of the content; for this reason, it would be useful to also analyze the amount of material in the textbooks to see which topics cover more material.
When comparing the difference in content-topic distribution between public and private schools, the public schools’ assessments appear to cover a wider section of the content and all topics, while private schools appear to cover fewer topics. This finding further confirms the higher alignment between assessments and curriculum in public schools. According to Osta (2007):
In Lebanon, a national curriculum is in effect, which is binding to both public and private schools. While public
schools implement only the national curriculum and textbooks, private schools may implement more than one
program and may use different series of textbooks but are bound to teach also the national curriculum. (175)
This is confirmed in the study as some private schools use the national curriculum as well as other
foreign curricula, and they have the freedom to choose what topics to cover and what to overlook.
For that reason, there were some topics (topics 7, 8, 9, and 10) that were covered in the curriculum
but were not addressed in the assessments provided.
Even in public schools at this level, teachers may skip topics or not give them due importance, especially if they are running out of instructional time. The elementary level is not considered high stakes, unlike grades nine and twelve, when national examinations take place. For
example in Sleiman’s study (2012), results revealed that both the secondary-level mathematics
official exams and the model tests neglect some topics from the curriculum. Also, results from
Resnick and his colleagues’study (2004) showed that tests in most states were not well-balanced,
focusing on less important standards and objectives. In Osta’s study (2007), findings showed that
the Lebanese national examinations in mathematics at the ninth-grade level were stable in terms of content coverage and abilities addressed, and had a low level of mobility from one test to another; this stability makes the topics covered in class gradually shrink to focus only on the content valued by the exams. However, since there are no national examinations at the elementary level, there are no external forces or directives that lead teachers to focus on some topics more than others; this is revealed in this study by the wide variety of content-topic coverage from one school to another and between public and private schools.
An important point is that even though some topics were barely covered by private schools, this does not mean that they are not covered at all; some schools teach these topics at other grade levels. Some schools take the Lebanese curriculum and distribute the learning objectives based on their students’ level: they teach a specific topic at the grade that suits their students, so a topic could be taught at grade 4 in one school and at grade 3 in another. When the researcher was collecting science assessments, many schools said that they do not cover some curricular topics at the fourth-grade level. When asked informally why these topics are not covered, their reply was that their aim is to fulfil the objectives by the end of the cycle rather than the grade.
Extent of cognitive-level alignment
The comparison of assessment items to the curriculum revealed relatively low cognitive-level alignment; however, some items that did not match their objective’s cognitive level were in fact at a higher level. When these are combined, around 65% of items are at the same cognitive level as their matched objective or higher, which indicates a moderate level of agreement. This finding is in agreement with Hajo (2010), who states that if the curriculum emphasises low cognitive-level thinking, there is a high probability that the cognitive level of the assessment items will be at or above that of the objectives they were matched to.
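The at-or-above combination described here reduces to a simple count over the coded item–objective pairs. A minimal illustrative sketch, using hypothetical Bloom-level codes rather than the study’s actual data:

```python
# Hypothetical Bloom-level codes (1 = Knowledge ... 6 = Evaluation) for
# assessment items, each paired with the curriculum objective it was
# matched to. These values are illustrative only, not the study's data.
item_vs_objective = [
    (1, 1), (2, 1), (1, 4), (3, 2), (2, 2),
    (1, 3), (4, 2), (2, 4), (1, 1), (5, 2),
]

# An item counts toward "at-or-above" agreement when its cognitive level
# equals or exceeds that of its matched objective.
at_or_above = sum(1 for item, obj in item_vs_objective if item >= obj)
percent = 100 * at_or_above / len(item_vs_objective)
print(f"{percent:.0f}% of items at or above objective level")
# prints: 70% of items at or above objective level
```

With the study’s coding sheets, the same calculation over all matched items yields the roughly 65% figure reported above.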
Further analysis examined the cognitive levels of the items across Bloom’s taxonomy. Results showed that most of the items addressed the lower cognitive levels 1 and 2. Similar findings were reported by Sleiman (2012): both the official exams and the model tests at the secondary level focused on the cognitive domains ‘knowing’ and ‘applying’ while neglecting ‘reasoning’. In Webb’s study (1999), too, the assessment items generally targeted low cognitive levels.
In addition, when comparing the cognitive levels assigned to the curricular learning objectives with those assigned to the assessment items, it was found that the curriculum had a high percentage for ‘Analysis’ (22%), but only 5% of the assessment items were at that level, and these came from private schools only. One explanation could be that when these learning objectives are taught in class, they become less cognitively demanding. For example, the learning objective ‘infers that like charges repel each other and unlike charges attract each other’ was assigned to Bloom’s level ‘Analyzing’; however, once students infer this through an experiment or demonstration during instruction, it becomes a fact that they know. Hence, if they were asked about it in an assessment (for example, a fill-in-the-blank question: Like charges ____ and unlike charges ____), the item might drop to Bloom level one or two (depending on the question), as students are no longer inferring but recalling. The inferring cognitive procedure assumed in the curriculum is performed while learning, not necessarily while being assessed. This was shown abundantly in the assessments, where questions based on recall were matched to learning objectives based on analyzing and inferring.
Findings also showed that both the curriculum and the assessments devalue Bloom’s higher cognitive levels of ‘Synthesis’ and ‘Evaluation’. It is unclear whether the assessments are to blame for focusing on low-level cognition, since the curriculum they are supposed to follow does not emphasise the use of these levels. This finding confirms the results of Marlow-Ferguson (2002), who states that ‘memorisation of facts and events is greatly emphasised in Lebanese schools’ (784). It is also somewhat similar to Sleiman’s (2012) conclusion that the official exams focus on low-level cognitive domains while neglecting the higher levels.
In addition, according to the National Research Council (NRC) (2000), even though understanding has more value in science than knowledge, ‘it is a mistake to think that all instruction or assessments should aim for the higher level of outcome’ (78), because if a student does not succeed at a complex task, it would be hard to tell whether the failure was due to a lack of specific skills or of the knowledge needed, unless these were examined separately (NRC 2000). Therefore, it is recommended to retain a certain amount of low-level cognitive demand while also valuing the higher cognitive levels in curriculum design, and this is the case in the Lebanese curriculum. Furthermore, higher cognitive-level items will only be beneficial if the assessments are aligned with the curriculum and with classroom instruction and teaching. So even if the curriculum focuses on low cognitive levels, assessment items at a high cognitive level will be acceptable if classroom instruction targets higher cognitive thinking; otherwise, they would disadvantage students.
This study certainly has limitations. It is specific to the elementary level, and the results may not be generalisable to other levels. In addition, the results are limited to the ‘Matter and Energy’ unit and cannot be generalised to the entire fourth-grade science curriculum. Another limitation is that some schools handed in several assessments while others handed in only one; this may or may not reflect their practice.
There have been few attempts in Lebanon to study the alignment between the national assessments and the curriculum, and no studies of the alignment between the national curriculum and classroom assessments at the elementary level. This study addressed this gap, and the results showed moderate alignment between the assessments collected and the Lebanese curriculum in terms of both content and cognitive level.
Content alignment was lower among private schools, which constitute the majority, as they do not follow the Lebanese curriculum in full. Although they are bound to teach the national curriculum, they follow more than one programme and may use several different series of textbooks (Osta 2007); hence, private schools are selective in the topics they teach. This was demonstrated by the higher content alignment of public schools, which follow the Lebanese curriculum as it is, and by the narrower range of content topics covered by private schools.
In terms of cognitive level, the alignment was in general moderate, as previously explained. Additional analysis revealed a discrepancy in the degree of emphasis between the curriculum and the assessments. Similar discrepancies in emphasis were noted between public and private schools.
According to Fulmer (2011), the items used in tests represent only a sample of the content domain and cognitive levels specified in the curriculum, so perfect alignment cannot be expected. In addition, discrepancies between what the curriculum and the assessments emphasise may be acceptable if the test encourages higher-order thinking than the objectives describe (Fulmer 2011); however, this holds only if the assessments are aligned with classroom instruction and teaching, otherwise it would disadvantage students. Furthermore, Fulmer (2011) states that if researchers and policymakers determine that a set of standards does not adequately represent student outcomes, then low alignment may be acceptable and sometimes preferable. It is therefore important for any curriculum, locally and internationally, to present student learning outcomes adequately in terms of content and cognitive level for better alignment; otherwise it would be invalid and unfair to both students and schools to judge their achievement of expectations on the basis of a poorly aligned system of assessments (La Marca 2001).
It is recommended that any curriculum be written in a way that makes it easier for teachers to base their assessments on it. In addition, it is urged that learning objectives be written as end products of learning rather than as instructional activities. For example, in the Lebanese curriculum, the learning objective ‘explores the kinds of mixtures: homogenous, heterogeneous’ could be rewritten as ‘distinguish between homogeneous and heterogeneous mixtures’. The curriculum also contains learning objectives, such as ‘infers that sound travels only in matter’, that were assigned to Bloom’s level ‘Analyzing’; however, once students infer this during instruction, it becomes a fact that they know. Hence, if they were asked about it in an assessment, the item becomes Bloom level one or two (depending on the question), as they are no longer inferring. The inferring cognitive procedure is performed while learning, not while assessing, and accordingly it is important that learning objectives be written as end products of learning.
In addition, it is recommended that the learning objectives for some topics be expanded, as the material is too extensive to be summarised in two or three learning objectives that may not even be measurable. For some content topics, important material was frequently assessed, yet no direct learning objective could be found to link it to. Expanding the number of learning objectives would therefore help teachers prepare relevant assessment items that are aligned with the curriculum.
This study focused on fourth-grade summative assessments and the fourth-grade ‘Matter and Energy’ unit in the Lebanese curriculum. It is recommended that future research examine assessments across the entire science curriculum of the whole cycle, which includes grades 4, 5, and 6, since some schools follow a cycle rather than a grade approach in covering topics. The tools created and adapted in this study are simple enough to be used globally by any educational body to assess the alignment of national curricula with school assessments. Compared with other tools mentioned in the literature review, they do not require intensive training and can easily be used by teachers at all levels. In sum, the implications for international policy are evident: the study enables curriculum developers to drive curricular change by revealing which learning objectives need to be revisited or rewritten to facilitate better curriculum-assessment alignment. It also helps teachers comply with the standards of basic competence in developing assessments and fulfil the basic requirement of content validity. Future research could also extend the analysis beyond assessments to the books and materials used for instruction and for creating the assessments; this would help researchers assign the actual cognitive level of items more accurately and estimate content alignment more precisely. In this study, only the CERD book was used, as it was not possible to collect the different books and materials from all 17 schools, especially since private schools use a wide variety of textbooks (Marlow-Ferguson 2002).
Disclosure statement
No potential conflict of interest was reported by the authors.
References
American Federation of Teachers, National Council on Measurement in Education, and National Education Association. 1990. Standards for Teacher Competence in Educational Assessment of Students. Washington, DC: National Council on Measurement in Education.
Fulmer, G. W. 2011. “Estimating Critical Values for Strength of Alignment among Curriculum, Assessments, and Instruction.” Journal of Educational and Behavioral Statistics 36 (3): 381–402. doi:10.3102/1076998610381397.
Hajo, Z. 2010. “Content Validity and Comparability of the Lebanese National Examinations in Chemistry: An Attempt to Use the Webb Alignment Model in Lebanon.” Unpublished doctoral dissertation, University of Leicester.
La Marca, P. 2001. “Alignment of Standards and Assessments as an Accountability Criterion.” Practical Assessment, Research, and Evaluation 7 (20): 1–6.
Marlow-Ferguson, R. 2002. World Education Encyclopedia: A Survey of Educational Systems Worldwide. 2nd ed. Farmington Hills, MI: Gale Group.
Martone, A., and S. Sireci. 2009. “Evaluating Alignment between Curriculum, Assessment, and Instruction.” Review of Educational Research 79 (4): 1332–1361.
Miller, M. D., R. L. Linn, and N. E. Gronlund. 2011. Measurement and Assessment in Teaching. 11th ed. New Jersey: Prentice-Hall.
Nitko, A., and S. Brookhart. 2011. Educational Assessment of Students. 6th ed. Boston, MA: Pearson Education.
NRC (National Research Council). 2000. Inquiry and the National Science Education Standards: A Guide for Teaching and Learning. Washington, DC: National Academy Press.
Osta, I. 2007. “Developing and Piloting a Framework for Studying the Alignment of Mathematics Examinations with the Curriculum: The Case of Lebanon.” Educational Research and Evaluation 13 (2): 171–198. doi:10.1080/13803610701452607.
Resnick, L., R. Rothman, J. Slattery, and L. Vranek. 2004. “Benchmarking and Alignment of Standards and Testing.” Educational Assessment 9 (2): 1–27.
Sleiman, L. 2012. “A Study of the Alignment between the Lebanese Secondary-level National Math Exams for the Literature and Humanities Track and the Reformed Math Curriculum.” Master’s thesis. http://search.shamaa.org/FullRecord.aspx?ID=66117.
Webb, N. L. 1997. “Determining Alignment of Expectations and Assessments in Mathematics and Science Education.” NISE Brief 1 (2): 1–8. http://facstaff.wcer.wisc.edu/normw/1997alignmentbrief.htm.
Webb, N. L. 1999. Alignment of Science and Mathematics Standards and Assessment in Four States. Research Monograph No. 8. Washington, DC: Council of Chief State School Officers.
Webb, N. L. 2002. “An Analysis of the Alignment between Mathematics Standards and Assessments for Three States.” Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA, April.