Methodology for evaluating a novel education technology:
a case study of handheld video games in Chile1
Jesse L. Margolisa, Miguel Nussbaumb,*, Patricio Rodriguezb, Ricardo Rosasc
a. Educational Consultant, 118 Mason Terrace, Brookline, MA, 02446, USA
b. School of Engineering, Pontificia Universidad Católica de Chile, Santiago, Chile
c. School of Psychology, Pontificia Universidad Católica de Chile, Santiago, Chile
* Corresponding author
Dr. Miguel Nussbaum
P. Universidad Católica
Escuela de Ingeniería
Casilla 306, Santiago 22
Email address: firstname.lastname@example.org
1 This work was partially funded by FONDEF of CONICYT.
Many school systems, in both the developed and developing world, are implementing educational
technology to assist in student learning. However, there is no clear consensus on how to evaluate these
new technologies. This paper proposes a comprehensive methodology for estimating the value of a new
educational technology in three steps: benefit analysis, through the administration of a well-designed
experiment; cost analysis, which incorporates costs to weigh against the benefits; and feasibility analysis,
which introduces real-world concerns that may affect the ability to actually implement the technology. To
illustrate the methodology, a case study from Chile is used where portable educational video games were
introduced into first and second grade classrooms with the aim of improving learning in math and
language. This paper demonstrates the importance of all three steps in the evaluation process and provides
a framework for future analyses.
Keywords: evaluation methodologies; evaluation of CAL systems; elementary education; cost-
effectiveness analysis; feasibility analysis.
1. Introduction
1.1 Paper objectives
Currently in both developed and developing countries, there exists a great desire to infuse technology
into the educational process. This is generally done with two goals in mind. The first goal is to improve
children’s learning of traditional subjects through the use of new tools such as computers and the internet
(Burns & Ungerleider, 2002). The second is to teach children how to use the technology itself so they are
better prepared for the workforce once school ends (Potashnik & Adkins, 1996). While the second goal is
often recognized as a valuable goal in and of itself, a new educational technology initiative is frequently
expected to justify its costs based on its contribution to the first goal; that is, learning of traditional
subjects (Levin, Glass, & Meister, 1987; Solmon, 2000).
Public policy makers must frequently evaluate new educational technologies and decide whether to
implement them in the schools they oversee. The purpose of this paper is to provide a comprehensive
methodology for evaluating these new technologies in three basic steps. The first step is a benefit analysis
through which the educational benefit of the new technology is quantified. The second step is a cost
analysis that incorporates the cost of the technology to measure against the benefit. The final step is a
feasibility analysis that helps policy makers to determine if and how the technology can be implemented
given their current financial reality.
This paper is organized in six sections. In the remainder of this section, we review the literature on
technology evaluation in education, highlighting how this paper contributes to research in the field. In the
second section, we present the case study that we will be using throughout the paper to illustrate the
evaluation methodology. In sections three through five, we introduce the details of our evaluation
methodology as we apply it to the case study. In section six, we present our conclusions.
1.2 Literature review
There exists substantial scholarly research evaluating both technology-related and non-technology
related educational initiatives. Many papers focus on whether or not a given initiative has a measurable
impact on student learning or some other desirable educational goal (Word et al., 1990; Darling-Hammond, 2000; Rosas, Nussbaum, Cumsille, Marianov, Correa, Flores et al., 2003). These studies fall
into the benefit analysis section of our three-part framework. A good example of an influential benefit
study is the Tennessee Star Project which, through a controlled experiment in the mid-1980s, found that
reducing class size leads to improved learning results in the early grades (Word et al., 1990). However, the
Star study did not incorporate costs into the analysis, and made no claim that the demonstrated learning
benefits were worth the costs, or that reducing class size was the most cost-effective method for achieving
these benefits. Nonetheless, partially based on the Star results, the state of California allocated nearly $2
billion to early-grade class-size reduction efforts in the mid-1990s (Catterall, 1997).
A number of studies have taken the next step and introduced the costs of a given educational technology
initiative to perform a cost analysis (Barnett, 1985; Levin, Glass, & Meister, 1987; Solmon, 2000). Cost
analyses take two basic forms: cost-benefit analysis and cost-effectiveness analysis. In a cost-benefit
study, researchers weigh the measured benefits of an initiative against the estimated costs. If the benefits
outweigh the costs, then the initiative is deemed to be a worthwhile investment. A major challenge to
conducting a cost-benefit analysis is the necessity to convert both the benefit and the cost into the same
units, usually dollars. This challenge limits the widespread applicability of cost-benefit studies. However,
when this challenge is overcome, a cost-benefit study can be a very powerful tool for making educational
policy or investment decisions. An influential example is the cost-benefit study of the Perry Preschool
Program (Barnett, 1985, 1993; King, 1997).
The second type of cost analysis is referred to as a cost-effectiveness study. In a cost-effectiveness
study, a researcher compares both the estimated costs and benefits of various alternatives for improving
education and ranks them in order of cost-efficiency. Through this analysis, one can identify which
initiative can achieve the greatest educational benefit for a given cost, or achieve a given educational
benefit at the lowest cost. One advantage of cost-effectiveness analysis when compared to cost-benefit
analysis is that there is no need to convert costs and benefits into the same units. Despite this relaxed
restriction when compared to cost-benefit analysis, cost-effectiveness analysis is still a seldom used tool
in educational evaluation (Tsang, 1997; Rice, 1997). Levin & McEwan (2002), referencing Clune (1999),
show that of all primary and secondary education studies in the ERIC database (1991-96) that refer to
“cost-effectiveness”, only about 1% actually make a substantial attempt to carry out such a study. Most
simply refer to the term rhetorically. One influential cost-effectiveness study is Levin, Glass, & Meister’s
(1987) analysis comparing the costs and benefits of computer-assisted instruction, reduced class size, peer
tutoring, and increased instructional time.
Finally, feasibility studies compare the cost of implementing a given initiative with the resources
available. Feasibility studies are also sometimes called affordability or sustainability studies (Potashnik &
Adkins, 1996; Tsang, 1997). While policy-makers regularly perform informal (i.e. non-published)
feasibility studies to calculate whether the resources they have available can fund a particular initiative,
these studies are rarely performed in conjunction with benefit and cost analysis.
This paper contributes to the existing literature by presenting a framework for evaluating new
educational technologies in a manner that is more comprehensive than existing studies. We incorporate all
three evaluation methodologies discussed above (benefit analysis, cost analysis, and feasibility analysis)
into a series of logical steps that can be replicated to effectively evaluate a new technology. We believe
this methodology will be useful to both educational policymakers choosing between various improvement
initiatives and to educational technology producers seeking to evaluate their own technology. To present
our methodology, we use a case study of educational video games in Chile (Rosas et al., 2003).
2. Case study description
2.1 Technology use in the Chilean primary education system
Over the last decade in Chile, a major new effort has been made to incorporate technology into both
primary and secondary schools. Principally, this has been done through the Enlaces program beginning in
1992. Enlaces has had the mission of creating “a national education technology structure linking all of
Chile’s State-subsidized primary and secondary schools.” (Ministry of Education: Enlaces, 2000). To
meet this goal, by 2002 the Ministry of Education had equipped 62% of the nation’s publicly-funded
primary schools with an internet-connected computer lab, thereby providing coverage for 96% of the
nation’s primary school students (www.redenlaces.cl). As of 2002, in Chile’s publicly funded schools
there was an average of 51 students per computer at the primary level and 31 students per computer at the
secondary level (Seminario Internacional Red Enlaces, 2002).
2.2 Sugoi description
As a case study, we will use a set of five educational video games, collectively known as Sugoi, which
were developed for the Nintendo Game Boy platform (Rosas et al., 2003). The video games teach basic
Spanish language and mathematics skills to children ranging from kindergarten to second grade. Sugoi
was developed with the belief that it provided two key benefits over the existing software and hardware
provided to schools by Enlaces. First, because Sugoi is run on a portable platform, students and teachers
could remain in the more familiar classroom environment while using the technology, rather than going to
a general-use computer lab. Secondly, with Sugoi, each student used a machine individually during the
class, rather than sharing a machine with 2 to 3 other students, as is the norm in an Enlaces computer lab.
The software and game cartridges were developed between 1997 and 2000 at the Pontificia Universidad
Católica de Chile, through collaboration between the departments of Computer Science and Psychology.
The university and the FONDEF2 technology development fund of the Government of Chile sponsored
the development of the video games (Rosas et al., 2003).
3. Benefit analysis
3.1 Experiment description
In 2000, a controlled experiment was run to evaluate the effect of the video games on basic language
and math learning for first and second grade children. The methodology and results of this experiment
have been reported in detail by Rosas et al. (2003), and we will only briefly summarize them here to show
how they fit into the overall evaluation model. While Rosas et al. discuss numerous educational benefits
of video game use, we will focus on student test scores as a measure of learning for the purpose of
presenting our methodology. While we do not pass judgment on the relative value of different educational
indicators, we focus on test scores for two practical reasons. First, student test scores are a commonly
reported measure of learning in educational studies, facilitating their use as an indicator for a comparative
analysis. Second, as the importance of student testing has grown in the U.S. and abroad, the effect of a
new technology on student test scores is an increasingly relevant criterion for public policy makers
deciding between alternative educational interventions.
In the Rosas et al. study, six schools were selected as experimental schools, and each had at
least one Experimental Group (EG) classroom that used the video games and one Internal Control Group
(ICG) classroom that did not. All schools were considered to principally have students of low
socioeconomic status. Three schools were urban schools and three schools were rural.
Four additional schools were selected as an External Control Group (ECG). These schools were
generally similar to the experimental schools in terms of academic achievement (measured through a
national achievement test: SIMCE), socioeconomic background, and level of vulnerability (measured by
the Ministry of Education of Chile). Two of the ECG schools were urban and two were rural. No
professionals at the ECG schools were aware of the Sugoi experiment or their relation to it (Rosas et al.,
2003). The breakdown of students into EG, ICG, and ECG is shown in Table 1.
2 Fondo de Fomento al Desarrollo Científico y Tecnológico
Table 1 – Experimental groups
Experimental Group (EG): 758 students in 19 classes who played experimental video games over 12 weeks, 20 to 40 minutes daily during regular class hours, alternating between language and mathematical content.
supervisor, and head teachers
Internal Control Group (ICG): 347 students in 9 classes in the same schools as the experimental group who did not play video games. The teachers taught regular classes but knew they were being assessed as the internal control group.
External Control Group (ECG): 169 students belonging to 4 schools who did not have any knowledge of the experiment, but were assessed with both the pre- and post-experiment tests.
Source: Rosas et al., 2003
Students in EG classrooms played with the games for an average of 30 minutes a day over a 12-week
period. At the end of the experiment, each EG student had played with Sugoi for a total of 30 hours.
Students in the EG, ICG, and ECG all took basic language and mathematics assessment pre-tests before
beginning the three-month experiment, and post-tests upon completing the experiment. To explain the
tests, ECG school directors and teachers were simply told they were participating in a university research
project, though they had no knowledge of the specific project or their relation to it. Additionally,
classroom observers were sometimes present in the EG and ICG classrooms for both Sugoi and non-
Sugoi sessions to document student behavior and classroom dynamics.
3.2 Experiment results
In both math and spelling, the use of video games in the EG had a significant impact on post-test scores
controlling for pre-test ability, when using the ECG for comparison. For reading comprehension, the
effect was in the expected direction, but not statistically significant.
No significant differences in performance were found between the EG and the ICG in any of the three
testing areas. These results are summarized in Tables 2 and 3 (Rosas et al., 2003).
Table 2 – Results vs. External Control Group
Score Difference (EG – ECG)
Std. Error of Assessment Test
* Significant at the 0.05 level.
Table 3 – Results vs. Internal Control Group
Score Difference (EG – ICG)
Std. Error of Assessment Test
* Significant at the 0.05 level.
In Tables 2 and 3, effect sizes were also calculated for each area in which the results were statistically
significant. Effect sizes were calculated by dividing the estimated size of the difference in the test scores
by the standard error of the assessment test itself. This was done to normalize the impact sizes in a way
that allows for comparison across studies that use different assessment tests (Mosteller, 1995).
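This normalization is a simple division; the sketch below illustrates it with hypothetical placeholder numbers, not the study's actual values:

```python
def effect_size(score_difference: float, test_standard_error: float) -> float:
    """Normalize a raw score difference by the assessment test's own
    standard error, so that impacts can be compared across studies
    that use different tests (Mosteller, 1995)."""
    return score_difference / test_standard_error

# Hypothetical illustration: a 3.3-point gain on a test whose standard
# error is 10 points corresponds to an effect size of about 0.33.
print(effect_size(3.3, 10.0))
```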
3.3 Control Group Selection
From the results presented in Tables 2 and 3, it is clear that our interpretation of the experimental results
will depend heavily on which control group (internal or external) we decide to use. In math and spelling,
the Experimental Group had a statistically significant improvement when compared to the External
Control Group. In reading comprehension, the results were in the expected direction, but not statistically
significant. However, when compared to the Internal Control Group, the Experimental Group did not
show a statistically significant improvement in any subject area.
Each control group has arguments in favor of its use. From one perspective, the ICG appears to be more
appropriate for comparison with the EG, because the ICG, by definition, controls for a number of
potentially biasing school-specific factors (school principal, neighborhood, socioeconomic status, etc.).
While these factors are partially controlled for in the ECG through the selection of schools with similar
measurable characteristics, their influence can never be completely neutralized. However, from another
perspective, the ECG may be a more appropriate control group to use because it is external to certain
experimental effects that are known to bias educational studies (Adair, Sharpe & Huynh, 1989). In fact, in
Rosas et al. (2003), the authors specifically mention observing a John Henry Effect (Keeves, 1995) as
ICG teachers actively competed with EG teachers, potentially biasing upward the testing results of the
internal control group.
Given the evidence available, it is inconclusive whether the internal or external control group is more
appropriate for comparison with the experimental group. Since the purpose of our paper is to present an
evaluation methodology, and not pass judgment on the relative value of video games when compared to
other educational interventions, we will select the ECG results for use in this study. It is important to
emphasize that the results that follow are based on this assumption, which has been made for convenience
in presenting an evaluation methodology, and not based on the relative merits of the ECG versus the ICG.
4. Cost analysis – cost-effectiveness model
Based on our decision to use the ECG as the control group, we will be using the results shown in Table
2. As seen in Table 2, the EG achieved significantly higher gains than the ECG in both mathematics and
spelling. The reading comprehension score improvement for the EG was also higher, but not at a
statistically significant level, so it will not be included here.
Our cost analysis will take the form of a cost-effectiveness analysis, as described by Levin & McEwan
(2002). First, we will select alternative educational improvement methods to compare with Sugoi. Then,
we will weigh the measured benefits and the estimated costs of each of the methods versus one another,
so as to determine which is the best investment.
4.2 Alternative methods for comparison
We select two alternative educational improvement methods for comparison with Sugoi: Class Size
Reduction (CSR) and teacher training.
We choose CSR for several reasons. First, CSR proposals are currently common in many public school
systems in the United States. While CSR is less discussed in Chile at the moment due to its very high
cost, it seems likely that it will become a significant issue for Chile in the future, especially given the
country’s relatively high average class size of 34 students (Bellei, 2001). Second, there are a number of
high quality experimental studies performed on CSR, most notably the Tennessee Star project (Word et
al., 1990), which have a similar format to the Sugoi study.
We choose teacher training because it is an initiative currently sponsored by the Government of Chile.
At all municipal schools, a teacher’s salary is partially determined based on the number of ongoing
training courses the teacher has completed. Therefore, teachers have a strong financial incentive to enroll
in training programs, and the cost of their subsequent salary increases is borne by both the national and municipal governments.
4.3 Effectiveness analysis
To present our methodology, the specific study we use for class size reduction is the Tennessee Star
project run in the U.S. State of Tennessee between 1986 and 1989 (Word et al., 1990), referred to by
Hanushek (1997) as one of a “handful of significant educational experiments over the past quarter
century.” In the project, researchers used a randomized controlled experiment to measure the benefit of
reducing class size in grades K-3 from an average of 24 students to an average of 15 students, a 38%
drop. The study used a within school design, so each experimental group (15 students on average) was
matched with an internal control group (24 students on average). Each year students took tests on basic
mathematics and language skills and researchers calculated the difference between the experimental and
control group scores. After dividing by the standard deviation of the test, this provides the effect size reported in Table 4 (Finn & Achilles, 1999).
The study we use for teacher training is The Effectiveness of Special Interventions in Latin American
Public Primary Schools (Anderson, 2002). This study was not done in an experimental format, but
instead as a cross-school statistical analysis of several educational interventions common in Argentina,
Brazil, Chile, and Mexico. In the study, 4th grade student scores on a standardized test were compared
across schools with and without an ongoing teacher training program. The researchers normalized the test
scores so that the mean was 50 and the standard deviation 10. For each year that an ongoing teacher-
training program had existed at a student’s school, that student’s mathematics score increased by an
average of 0.47. For the purpose of this comparison, we will interpret this result to mean that the annual
value of a year of teacher training is a 0.47 improvement in a student’s score. Dividing by the standard
deviation of 10, this leads to an annual effect size of 0.047 in mathematics. No significant effect was
found for language scores. This result was for students in schools in poor neighborhoods, the same subset
of students studied in the Sugoi study. Results are summarized in Table 4.
Table 4 – Annual benefit summary
Class Size Reduction
Portable Video Games
Effect Size – Math
Effect Size – Language
Effect Size – Average
a. No significant results found
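The 0.047 teacher-training effect size follows directly from the numbers reported by Anderson (2002); a minimal check:

```python
# Anderson (2002): test scores normalized to a mean of 50 and a standard
# deviation of 10; each year of an ongoing teacher-training program added
# 0.47 points to a student's mathematics score.
score_gain_per_year = 0.47
test_standard_deviation = 10.0

annual_effect_size_math = score_gain_per_year / test_standard_deviation
print(round(annual_effect_size_math, 3))  # 0.047
```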
For our comparison, one important assumption we are making is that the effects observed in the class size
reduction and teacher training studies would be valid in a Chilean setting. In any cost-effectiveness study
where the experiments to be compared are not performed on identical populations, the validity of this
assumption will affect the validity of the results obtained. In our selection of comparable studies, we have
attempted to minimize differences from the Sugoi experiment: the Tennessee Star experiment was
performed with children at the same grade levels, and the teacher training study was based on results with
children of a similar socio-economic status in Chile and three other Latin American countries. However,
some differences may remain, and it is important to emphasize that the selection of comparable studies is a
critical feature affecting the validity of any evaluation study following this methodology.
4.4 Cost estimations
Class size reduction
For a class size reduction program, there would likely be two major cost components: personnel costs
for additional teachers and infrastructure costs for additional classrooms. In Chile, the current average
annual teacher salary of $6,926 (Raczynski, 2001) and the average class size of 34 lead to an annual
teacher cost per student of $204. Reducing the average class size by 38% as in the Tennessee Star study
would raise this average teacher cost per student by $122 to $326 per year. This $122 increase is reflected
in the first row of Table 5.
To estimate the annual additional classroom cost, we have made three important assumptions. We have
assumed a new classroom building cost of just over $28,500. We have assumed a 20-year useful life of
the classroom for depreciation. And, we have assumed that only 50% of schools would actually require
new classrooms, as some schools are operating under-capacity or could make do with existing
infrastructure. With these assumptions, the annualized additional classroom cost is $14 per student. The
critical point to note here is that we have calculated an annualized cost. From a modeling perspective, this
is the correct thing to do, since we want all modeled costs to be in annualized terms so they are
comparable. However, from a real world perspective, new classrooms must be paid for in year 1, and
raising this money can pose a significant challenge to class size reduction programs.
Personnel and infrastructure costs are combined in Table 5, giving an estimated class size reduction cost
in Chile of $136 per student.
Table 5 – Annual cost estimate for class size reduction
Additional teacher cost per student: $122 (38% reduction in class size; $6,926 average annual teacher salary, Raczynski, 2001; currently 34 students per class, Bellei, 2001)
Additional classroom cost per student: $14 ($28,571 one-time cost for new classroom; 50% of schools would require new classrooms)
Total cost per student: $136 (sum of above numbers)
Note: exchange rate of 700 pesos / US$1 used for teacher salary and classroom cost estimates.
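These per-student figures can be reproduced from the stated assumptions. The sketch below is our own arithmetic: the 15/24 class-size ratio is our reading of the 38% reduction, and the classroom component comes out near, though not exactly at, the $14 in the text, presumably due to rounding:

```python
teacher_salary = 6926.0   # average annual teacher salary (Raczynski, 2001)
class_size = 34.0         # current average class size (Bellei, 2001)
reduced_size = class_size * 15 / 24   # STAR-style reduction from 24 to 15 students

# Personnel: annual teacher cost per student before and after the reduction.
cost_before = teacher_salary / class_size      # ~$204
cost_after = teacher_salary / reduced_size     # ~$326
personnel_increase = cost_after - cost_before  # ~$122

# Infrastructure: additional classrooms per student, annualized over a
# 20-year useful life, with only 50% of schools needing new construction.
classroom_cost = 28571.0
extra_rooms_per_student = 1 / reduced_size - 1 / class_size
classroom_increase = extra_rooms_per_student * classroom_cost / 20 * 0.5  # ~$13

total_cost_per_student = personnel_increase + classroom_increase  # ~$135
print(round(cost_before), round(personnel_increase), round(total_cost_per_student))
```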
Ongoing teacher training
The annual cost of an ongoing teacher-training program in Chile is estimated to be approximately US $12 per student, as shown in Table 6. This is based on Schiefelbein, Wolff and Schiefelbein's (1998) estimate that an ongoing teacher-training program raises per-student costs by approximately 2.3%.
Table 6 – Annual cost estimate for ongoing teacher-training
Current Annual Cost per Studenta: $532 (Ministry of Education, 2001; see Table 10 for a detailed estimate)
% Increase in Cost per Student: 2.3% (Schiefelbein, Wolff and Schiefelbein, 1998)
Total Cost per Student: $12 (product of above numbers)
a. Annual cost per student assumes student attends a municipal school. See Table 10 for details.
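The $12 total is simply the product of the two assumptions above; a sketch:

```python
current_cost_per_student = 532.0  # annual municipal school spending per student
training_cost_increase = 0.023    # Schiefelbein, Wolff and Schiefelbein (1998)

training_cost_per_student = current_cost_per_student * training_cost_increase
print(round(training_cost_per_student))  # 12
```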
Portable video games (Sugoi)
Finally, to estimate the costs of the video game system, there are three main groups of costs we must
consider. The first group includes one-time costs, such as the initial training provided to teachers on the
use of the new technology. If the entity creating the video games is a private company, sales costs should
also be included in this category. An important point to stress here is that these are annualized values of
one-time costs, much like the infrastructure costs in the case of class size reduction. In this case, we have
assumed that the average school keeps the video game system for five years, implying that we should
divide the total one-time cost of initial training and sales by five in order to spread the cost of these goods
over their useful life.
The second group of costs includes those that can be directly attributed to each school, called “Ongoing
Direct Costs” in Table 8. In the case of Sugoi, these would be the cost of the machines and the cost of
ongoing training and technical support required by the school.
Finally, the third group of costs, “Allocated Costs” includes those general costs that go with running an
organization to produce the games. These include Research & Development (R&D) costs to develop and
program the games, and General & Administrative (G&A) costs to run the organization. These costs would
not be specifically attributable to any particular school, but would be allocated evenly (or by some
appropriate weighting factor) across all schools. The distinguishing feature of costs in this category is that
they represent the overall costs of running the organization, and generally do not increase as each
additional school is added.
Overall, the total cost of implementing a system of portable video games in Chile is estimated to be
approximately $50 per student annually, as shown in Table 7.
Table 7 – Annual cost estimate for portable video games
Assumes 204 students per school who use Sugoi (6 classes with 34 students each).
One-time costs (annualized):
Initial teacher training: 60 hours of training required at $8.57 per hour; 5-year video game retention
Sales & Marketing: 10% of total annual non-sales cost; $4.56
Ongoing Direct Costs:
Machines: 40 machines per school; 2.5-year machine & cartridge life-span; $200 per machine & cartridge
Ongoing training & technical support: 100 hours of ongoing training annually; 50 hours of tech support annually at $9.43/hour
Allocated Costs:
Research & Development: 5,000 man-hours per new game at $18.86 per hour; 5-year game life-span; 5 games per school; 50 schools implemented
General & Administrative: 50 schools implemented
Source: Author estimates based on actual Sugoi costs in pilot-projects. Exchange rate of 700 pesos / US$1 used for cost estimates.
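Several per-student components in Table 7 can be reproduced from the listed assumptions. In the sketch below, the pairing of assumptions to line items is our interpretation of the table, the hourly rate for ongoing training is assumed equal to the initial-training rate (the original value did not survive extraction), and the computed values are our own arithmetic rather than the paper's exact figures:

```python
students = 204  # 6 classes of 34 students per school

# One-time costs, annualized over the 5-year retention period.
initial_training = 60 * 8.57 / 5 / students    # ~$0.50 per student per year

# Ongoing direct costs.
machines = 40 * 200 / 2.5 / students           # ~$15.7 per student per year
support = (100 * 8.57 + 50 * 9.43) / students  # training + tech support, ~$6.5
# (assumes the $8.57/hour initial-training rate also applies to ongoing training)

# Allocated costs, spread over the 50 implemented schools.
rnd = 5 * 5000 * 18.86 / 5 / 50 / students     # 5 games, 5-year life, ~$9.2

print(round(initial_training, 2), round(machines, 2), round(support, 2), round(rnd, 2))
```

These components, together with the $4.56 sales and marketing figure, account for roughly $37 of the approximately $50 total; the remainder is presumably the G&A allocation, whose dollar value is not fully specified in the surviving table.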
4.5 Cost-effectiveness analysis
The effectiveness and cost data from above are summarized in Table 8. The effectiveness/cost ratio is
calculated to standardize effectiveness per unit of cost.
Table 8 – Effectiveness / cost ratios
                         Class Size Reduction | Ongoing Teacher Training | Portable Video Games
Annual Cost per Student: $136                 | $12                      | $50
Effectiveness / Cost:    0.0012               | 0.0019                   | 0.0047
From Table 8, we see that Class Size Reduction has the lowest effectiveness/cost ratio (0.0012) and
Portable Video Games have the highest effectiveness/cost ratio (0.0047). With an effectiveness/cost ratio
of 0.0019, Ongoing Teacher Training is in between the other initiatives in terms of cost-effectiveness.
These results imply that spending $1 on a portable video game system would provide more than twice the
learning benefit of spending $1 on an ongoing teacher-training program and nearly four times the benefit
of spending $1 on a class size reduction program.
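The ratio itself is a direct division of average effect size by annual cost per student. For ongoing teacher training, whose inputs are fully stated above, the reported figure can be reproduced (averaging the 0.047 mathematics effect with a zero language effect is our reading of the "no significant results" note in Table 4):

```python
def effectiveness_cost_ratio(avg_effect_size: float, annual_cost: float) -> float:
    """Effect size gained per dollar spent per student per year."""
    return avg_effect_size / annual_cost

# Ongoing teacher training: 0.047 math effect, no significant language effect.
avg_effect = (0.047 + 0.0) / 2
ratio = effectiveness_cost_ratio(avg_effect, 12.0)
print(round(ratio, 4))  # 0.002; the unrounded value, ~0.00196, matches the reported 0.0019
```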
Based on our model assumption that the external control group is the more appropriate control group for
comparison in the Sugoi study, our methodology would imply that portable video games were a cost-
efficient way to improve education in Chile. However, in order to take advantage of this cost-efficiency,
schools in Chile would have to purchase video games at a cost that is feasible given their current financial
reality, which we will explore in the next section.
5. Feasibility analysis
In the prior section we saw that, based on certain model assumptions, the first two steps in our
methodology would imply that a system of portable video games is the most cost-effective of the three
options studied for improving education. However, at approximately $50 per student, it is not the least
expensive option examined. In this section we will evaluate the feasibility of schools spending an
additional $50 per student annually necessary to implement the video game system. To do this, we will
look at the current level of spending on education in Chile, both from the schools themselves, and the
national Ministry of Education.
5.2 School spending
In Chile, primary education is composed of kindergarten through eighth grade and included 2,361,721
students in 2001. These students attended one of three types of primary schools: Municipal (MN), Private
Subsidized (PS), or Private Paid (PP). MN and PS schools combined represent 92% of all primary-school
students and are principally funded by a per-student subsidy from the national government. PP schools,
representing 8% of students, are exclusively funded by parent tuition payments and other private sources.
Table 9 shows the approximate annual funding per student in each of the three types of schools.
Table 9 – School financing
Municipal (56%) | Private Subsidized (36%) | Private Paid (8%)
National Government Subsidya
Shared Parent Financingc
Total Income per Student
a. Data from the Ministry of Education of Chile, Estadísticas de la Educación 2001.
b. Most municipalities supplement the subsidy received from the national government for schools under their jurisdiction. Data from the Ministry of Education of Chile, Estadísticas de la Educación 2001.
c. Private subsidized schools are allowed to charge parents a small monthly fee (called financiamiento compartido) without losing the national government subsidy. 1999 estimate from CENDA (CEP) Estudio del financiamiento de la educación en Chile, April 2002.
d. 1999 estimate from CENDA (CEP) Estudio del financiamiento de la educación en Chile, April 2002.
Table 10 makes the assumption that total annual school funding is equal to total annual spending and
shows an estimate of the portion of that spending that goes to non-personnel costs for each type of school.
Table 10 – School spending
Source: Author analysis based on data from the Ministry of Education and sample data from 30 municipal schools
From Tables 9 and 10, we can see that municipal schools, representing 56% of the student population,
spend an average of $532 per student each year, with approximately 91% of this spending going towards
personnel salaries. Forty-eight dollars remain for spending on non-personnel items, including
infrastructure maintenance, utilities, transportation, and educational materials. Private Subsidized schools
spend a higher dollar amount, approximately $188, on non-personnel items. Finally, at approximately