Quantitative Analysis
of Data from Participatory
Methods in Plant
Breeding
MAURICIO R. BELLON AND JANE REEVES, EDITORS
INTERNATIONAL RICE RESEARCH INSTITUTE
PARTICIPATORY RESEARCH AND GENDER ANALYSIS
International Maize and Wheat
Improvement Center
CIMMYT® (www.cimmyt.org) is an internationally funded, nonprofit, scientific research and
training organization. Headquartered in Mexico, CIMMYT works with agricultural research
institutions worldwide to improve the productivity, profitability, and sustainability of maize and
wheat systems for poor farmers in developing countries. It is one of 16 food and environmental
organizations known as the Future Harvest Centers. Located around the world, the Future
Harvest Centers conduct research in partnership with farmers, scientists, and policymakers to
help alleviate poverty and increase food security while protecting natural resources. The centers
are supported by the Consultative Group on International Agricultural Research (CGIAR)
(www.cgiar.org), whose members include nearly 60 countries, private foundations, and regional
and international organizations. Financial support for CIMMYT’s research agenda also comes
from many other sources, including foundations, development banks, and public and private
agencies.
Future Harvest® builds awareness and support for food and environmental research for a world
with less poverty, a healthier human family, well-nourished children, and a better environment.
It supports research, promotes partnerships, and sponsors projects that bring the results of
research to rural communities, farmers, and families in Africa, Asia, and Latin America
(www.futureharvest.org).
© International Maize and Wheat Improvement Center (CIMMYT) 2002. All rights reserved. The
opinions expressed in this publication are the sole responsibility of the authors. The designations
employed in the presentation of materials in this publication do not imply the expression of any
opinion whatsoever on the part of CIMMYT or its contributory organizations concerning the
legal status of any country, territory, city, or area, or of its authorities, or concerning the
delimitation of its frontiers or boundaries. CIMMYT encourages fair use of this material. Proper
citation is requested.
Correct citation: Bellon, M.R., and J. Reeves (eds.). 2002. Quantitative Analysis of Data from
Participatory Methods in Plant Breeding. Mexico, DF: CIMMYT.
Abstract: Although participatory plant breeding (PPB) is gaining greater acceptance worldwide,
the techniques needed to analyze the data from participatory methodologies in the context of
plant breeding are still not well known or understood. Scientists from different disciplines and
cropping backgrounds, working in international research centers and universities, discussed and
exchanged methods and ideas at a workshop on the quantitative analysis of data from
participatory methods in plant breeding. The papers in this volume address the three themes of
the workshop: designing and analyzing joint experiments involving variety evaluation by
farmers; identifying and analyzing farmers’ evaluations of crop characteristics and varieties; and
dealing with social heterogeneity and other research issues. Topics covered included different
statistical methodologies for analyzing data from on-farm trials; the mother-baby trial system,
which is designed to incorporate farmer participation into research; the identification and
evaluation of maize landraces by small-scale farmers; and a PPB process that aims to address the
difficulties of setting breeding goals and choosing parents in diversity research studies.
Summaries of the discussion, as well as the participatory breeding work currently conducted by
the participants, are provided.
ISBN: 970-648-096-X
AGROVOC descriptors: Maize; Plant breeding; Agronomic characters; Research; Methods;
Quantitative analysis; Data analysis; Statistics; Evaluation; Scientists; Small farms; Research
institutions
AGRIS category codes: U10 Mathematical and Statistical Methods;
F30 Plant Genetics and Breeding
Additional keywords: CIMMYT, participatory plant breeding
Dewey decimal classification: 633.1553
Printed in Mexico.
Contents
iv Preface
Mauricio R. Bellon
vi Acknowledgements
vii Acronyms and Abbreviations
1 Participatory On-Farm Technology Testing: The Suitability of Different
Types of Trials for Different Objectives
Steven Franzel and Richard Coe
9 Quantifying Farmer Evaluation of Technologies: The Mother and Baby
Trial Design
Sieglinde Snapp
18 Analyzing Data from Participatory On-Farm Trials
Richard Coe
36 Sources of Variation in Participatory Varietal Selection Trials with Rainfed Rice:
Implications for the Design of Mother-Baby Trial Networks
Gary Atlin, Thelma Paris, and Brigitte Courtois
44 Analyzing Ranking and Rating Data from Participatory On-Farm Trials
Richard Coe
66 Analysis of the Demand for Crop Characteristics by Wealth and Gender: A
CaseStudy from Oaxaca, Mexico
Mauricio R. Bellon
82 Identifying Farmers’ Preferences for New Maize Varieties in Eastern Africa
Hugo De Groote, Moses Siambi, Dennis Friesen, and Alpha Diallo
104 Participatory Plant Breeding: Setting Breeding Goals and Choosing Parents for
On-Farm Conservation
Bhuwon Sthapit, Krishna Joshi, Sanjay Gyawali, Anil Subedi, Kedar Shrestha,
Pasupati Chaudhary, Ram Rana, Deepak Rijal, Madhusudhan Upadhaya, and
Devra Jarvis
113 A Quantitative Method for Classifying Farmers Using Socioeconomic Variables
José Crossa, Mauricio R. Bellon, and Jorge Franco
128 Appendix 1. Current Participatory Breeding Projects Conducted by the Centers
Represented at the Workshop
144 Appendix 2. Workshop Participants
Preface
The papers in this volume were presented at a workshop “Quantitative Analysis of
Data from Participatory Methods in Plant Breeding”, held at the Castle of
Rauischholzhausen Conference Center of the Justus Liebig University, Germany,
23-25 August 2001. The workshop was initiated by scientists within the
Consultative Group on International Agricultural Research (CGIAR) who wished
to review and discuss different quantitative techniques for analyzing data
generated through participatory methodologies in the context of plant breeding.
Participatory plant breeding (PPB) is gaining wider acceptance worldwide—it is
increasingly being used within the CGIAR—and its merits and limitations are
beginning to be better understood. Many scientists involved in these efforts,
however, have realized that the quantitative techniques needed to analyze the data
from the participatory methodologies used in PPB are still not well known or
understood by many practitioners. Further discussion and exchange of methods
and ideas are needed.
The workshop was organized by the International Maize and Wheat Improvement
Center (CIMMYT) and the Justus Liebig University. It was sponsored by CIMMYT,
the International Rice Research Institute (IRRI), the CGIAR Program on
Participatory Research and Gender Analysis for Technology Development and
Institutional Innovation, and other participating CGIAR Centers. Experts from
outside the CGIAR were also involved.
Scientists from different disciplines (breeders, social scientists, biometricians, and
agronomists) and cropping backgrounds (maize, rice, potatoes, cassava, sorghum,
barley, and agroforestry) were brought together for the workshop. All participants
were experienced in participatory plant breeding and had also worked in
interdisciplinary teams. A total of 21 scientists took part, representing 10 CGIAR
Centers: CIMMYT, IRRI, CIAT (the International Center for Tropical Agriculture),
CIP (the International Potato Center), ICRAF (the International Centre for
Research in Agroforestry), ICARDA (the International Center for Agricultural
Research in the Dry Areas), IITA (the International Institute of Tropical
Agriculture), WARDA (the West Africa Rice Development Association), IPGRI (the
International Plant Genetic Resources Institute), and ICRISAT (the International
Crops Research Institute for the Semi-Arid Tropics). In addition, there were
participants from Justus Liebig University, the University of Wales, and
Michigan State University.
The workshop was organized around three themes:
• Designing and analyzing joint experiments involving variety evaluation by farmers
• Identifying and analyzing farmers’ evaluations of crop characteristics and varieties
• Dealing with social heterogeneity and other research issues
Lectures and case studies, followed by discussion, were devoted to each theme. This
format facilitated the learning of specific methodologies, the sharing of well-defined
examples, and a free exchange of ideas and experiences among participants.
This publication presents both the papers from the workshop and the ensuing
discussions. There are two types of papers: one focusing on methodologies, the other
on case studies. In the first methodology paper, Steven Franzel and Richard Coe
discuss the suitability of different types of trials for participatory on-farm technology
testing. Richard Coe’s two papers deal with statistical methodologies. His first paper
focuses on the analysis of data from on-farm trials, while the second analyses the
ratings and rankings commonly used in PPB to assess the value of traits and the
performance of different varieties from the farmer’s perspective. Sieglinde Snapp
describes a methodology called the mother-baby trial system, which is designed to
incorporate farmer participation in research. Although the methodology presented is
in the context of soil fertility management, it is still relevant to PPB, as shown in the
papers by Gary Atlin and colleagues and Hugo de Groote and colleagues. The paper
by José Crossa and colleagues shows a statistical methodology for grouping farmers
into homogenous groups. Among the case study papers, Gary Atlin and colleagues
discuss the analysis of the mother-baby trial scheme in participatory varietal selection
and its implications for rice breeding. Mauricio Bellon details a case study on the
identification and evaluation of maize landraces by small-scale farmers in Mexico.
Hugo de Groote and colleagues present the results of a study that aimed to identify
farmers’ criteria for assessing and evaluating maize varieties in Eastern Africa.
Bhuwon Sthapit describes the preliminary results of participatory plant breeding
processes that address the difficulty of setting breeding goals and choosing parents in
diversity research studies.
The resulting discussions are summarized at the end of each paper. Also, all
participants provided a one-page summary highlighting the participatory breeding
work currently being conducted by their respective research centers. These
summaries are presented in Appendix 1.
I believe that these proceedings will be useful to all practitioners of participatory
plant breeding.
Mauricio R. Bellon
CIMMYT Economics Program
Acknowledgements
Funding for the workshop was provided by the International Maize and Wheat
Improvement Center, the International Rice Research Institute, and the CGIAR
Program on Participatory Research and Gender Analysis for Technology Development
and Institutional Innovation, which also funded the production of these proceedings.
We also express our thanks to Prof. Ernst August Nuppenau from the Institut für
Agrarpolitik und Marktforschung, Justus-Liebig University, for his invaluable support
in hosting the workshop at the Castle of Rauischholzhausen Conference Center, and to
Monika Zurek for her logistical help in organizing the workshop. Finally, we thank
Marcelo Ortiz and Miguel Mellado for the design and layout.
Acronyms and Abbreviations
CBR Community biodiversity register
CGIAR Consultative Group on International Agricultural Research
CIAT International Center for Tropical Agriculture
CIMMYT International Maize and Wheat Improvement Center
CIP International Potato Center
CRURRS Central Upland Rice Research Station
DFID Department for International Development, UK
DGIS Directorate-General for International Cooperation, the Netherlands
FAO Food and Agriculture Organization of the United Nations
GEI Genotype x environment interaction
ICAR Indian Council of Agricultural Research
ICARDA International Center for Agricultural Research in the Dry Areas
ICRAF International Centre for Research in Agroforestry
ICRISAT International Crops Research Institute for the Semi-Arid Tropics
IDRC International Development Research Centre, Canada
IFAD International Fund for Agricultural Development
IITA International Institute of Tropical Agriculture
IM Independent mixture
IPGRI International Plant Genetic Resources Institute
IRRI International Rice Research Institute
LM Location model
masl Meters above sea level
ML Maximum likelihood
MLM Modified location model
MVN Multivariate normal
NARC Nepal Agricultural Research Council
NGO Nongovernmental organization
PPB Participatory plant breeding
PRA Participatory rural appraisal
PRGA Participatory Research and Gender Analysis Program
PVS Participatory varietal selection
RCB Randomized complete block
RUA Rajendra Agricultural University
SARRNET Southern Africa Root Crops Research Network
TE Target environment
UPGMA Unweighted pair group arithmetic averaging
WARDA West Africa Rice Development Association
Participatory On-Farm Technology
Testing: The Suitability of Different
Types of Trials for Different
Objectives
STEVEN FRANZEL AND RICHARD COE
Abstract
This paper outlines objectives for conducting on-farm trials and presents a
typology for classifying on-farm trials, focusing on how different types of
trials may be used to meet different objectives. It presents the rationale for
conducting on-farm trials, the main elements of participatory technology
development, and a classification for on-farm experiments based on the degree
of control over the experiment by scientists and farmers. The classification
recognizes three types of trials, depending on the objectives of the trial, who
designs it, and who manages it. Type 1 trials are researcher designed and
managed, and their objective is to assess biophysical responses. Type 2 trials
are researcher designed and farmer managed, e.g., farmers agree to implement a
common design; they are useful for obtaining farmer feedback on specific
prototypes and for conducting economic analyses. Type 3 trials are farmer
designed and managed: farmers experiment on their own. The objective of this
type of trial is to assess farmer innovation and acceptability.

Introduction
In participatory on-farm evaluation, farmers take a lead role in the design,
implementation, and evaluation of technology. This paper outlines objectives
for conducting on-farm trials and presents a typology for classifying them,
focusing on how different types of trials may be used to meet different
objectives. Some main issues in the management of different types of trials are
also discussed. This paper draws heavily on Franzel et al. (2002a).

Objectives of On-Farm Experimentation
On-farm experimentation has several different objectives. First, it permits
farmers and researchers to work as partners in the technology development
process. The more often and the earlier that farmers are involved in the
technology development process, the greater the probability that the practice
will be adopted. On-farm trials are important for ascertaining farmers’
assessments of a practice and their ideas on how it may be modified, and for
observing their innovations.
Assessments are likely to vary and may
be associated with particular biophysical
(e.g., soil type) or socioeconomic (e.g.,
wealth status) circumstances. Farmers’
innovations often serve as a basis for
new research or for modifying
recommendations (Stroud 1993; van
Veldhuizen et al. 1997).
Secondly, on-farm testing is useful for
evaluating the biophysical performance
of a practice under a wider range of
conditions than is available on-station.
This is especially important because soil
type, flora, and fauna on research
stations are often not representative of
those found on farms in the surrounding
community.
Thirdly, on-farm trials are important for
obtaining realistic input-output data for
financial analysis. Financial analyses
conducted on on-station experiments
differ from those conducted on farm
trials because (1) yield response is often
biased upward, (2) estimates of labor
used by station laborers on small plots
are unrepresentative of the farming
community, and (3) operations often
differ, e.g., when tractors instead of oxen
or hoes are used for preparing land.
And finally, on-farm testing provides
important diagnostic information about
farmers’ problems. Even if diagnostic
surveys and appraisals have already
been conducted, researchers can still
learn a great deal about farmers’
problems, preferences, and livelihood
strategies from interacting with them in
on-farm trials. Trials have important
advantages over surveys in that they are
based on what farmers do rather than on
what they say.
Types of On-Farm Trials
On-farm trials can provide critical
information for determining the
biophysical performance, profitability, and
acceptability of agroforestry, i.e., its
adoption potential. However, the design
of a trial depends on its specific objectives.
Assessment of biophysical performance
requires biophysical data on the products
and services that the technology is
planned to produce. These are likely to
change with different adaptations of the
technology as might occur if farmers were
asked to manage them. To prevent such
possible variation, trials designed to
assess biophysical performance should be
controlled to replicate specific technology
designs. The trials should also be
implemented in a way that farmers’
willingness and ability to establish and
maintain the trials does not affect the
outcome. Thus trials to assess biophysical
performance need a high degree of
researcher control in both design and
implementation.
The assessment of profitability requires
biophysical data (to estimate returns) that
must be generated from standardized
experiments. However, the financial
analysis also requires realistic input
estimates, of which labor poses most
difficulties. Realistic data can only be
obtained if farmers manage the trials to
their own standards. Thus profitability
objectives require trials in which
researchers have considerable input into
the design but farmers are responsible for
implementation. The objectives of
assessing feasibility and acceptability
require data on farmers’ assessments and
adaptations of the technology. These can
only be assessed if farmers are left to
experiment with little researcher
involvement.
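The cost–return logic described above can be sketched as a small calculation. All names and figures below are hypothetical; in the scheme the paper describes, yields would come from standardized (researcher-designed) plots and labor estimates from trials the farmers manage to their own standards.

```python
# Hedged sketch of the financial analysis an on-farm trial can feed.
# All figures are hypothetical illustrations, not data from the paper.

def net_return(yield_kg_ha, price_per_kg, labor_days_ha, wage_per_day,
               other_costs_ha=0.0):
    """Net return per hectare: gross revenue minus labor and other costs."""
    revenue = yield_kg_ha * price_per_kg
    costs = labor_days_ha * wage_per_day + other_costs_ha
    return revenue - costs

# Compare a test practice against the farmers' usual practice.
control = net_return(yield_kg_ha=1200, price_per_kg=0.20,
                     labor_days_ha=40, wage_per_day=1.5)
improved = net_return(yield_kg_ha=1800, price_per_kg=0.20,
                      labor_days_ha=55, wage_per_day=1.5)
print(round(control, 2), round(improved, 2), round(improved - control, 2))
```

The point of requiring farmer-managed implementation is visible here: if `labor_days_ha` were measured from station laborers on small plots, the comparison would be systematically distorted.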
There are many different ways of
classifying on-farm trials (Okali et al.
1994). The differing requirements of the
objectives of biophysical performance,
profitability, and acceptability mean it
is helpful to classify trials according to
the balance of researcher and farmer
involvement in their design and
implementation. The classification
described in this paper involves three
types of trials and draws upon
Biggs(1989).
Type 1: Trials designed and
managed by researchers
Type 1 trials are simply on-station trials
transferred to farmers’ fields. They are
useful for evaluating biophysical
performance under farmers’ conditions,
and require the same design rigor as on-
station research with regard to treatment
and control choice, plot size, replication,
and statistical design. In the design stage,
researchers need to consult the farmer on
the site’s homogeneity and history. If
possible they should observe a crop in
the field before establishing a trial.
Because type 1 trials take place on
farmers’ fields, trial results are generally
more representative of farmers’
biophysical conditions than on-station
trials (Shepherd et al. 1994). More
accurate information may be obtained on
interactions between the biophysical
environment and management, e.g., how
different species in an improved fallow
trial compare on different soil types.
Type 1 trials are usually more expensive
and more difficult to manage than on-
station trials; they often involve renting
land from farmers and using laborers
from the station to implement them.
Farmers’ assessments are an important
objective of type 1 trials; as with on-
station trials, it is useful to get farmers’
feedback on the different treatments
(Sperling et al. 1993; Franzel et al. 1995).
Type 2: Trials designed by
researchers and managed by
farmers
Here, farmers and researchers
collaborate in the design and
implementation of the trial. The trial is
labeled “researcher designed” because it
follows the conventional scientific
approach to conducting an experiment:
one or more test treatments are laid out
in adjacent plots and compared to one or
more control treatments. Researchers
consult farmers on the design of the trial
and each farmer agrees to follow the
same prototype (or chooses one of
several possible prototypes), so that
results may be compared across farms.
Farmers are responsible for conducting
all of the operations in the trial.
In type 2 trials, reliable biophysical data
over a broad range of farm types and
circumstances are sought. The trials also
facilitate the analysis of costs and
returns; inputs, such as labor, and
outputs, such as crop yields, are
relatively easy to measure because plot
size is uniform and known. The trials are
also useful for assessing farmers’
reaction to a specific practice and its
suitability to their circumstances.
Farmers are encouraged to visit each
other’s trials and to conduct group field
days to assess the practice at different
growth stages.
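A type 2 layout, in which every participating farm carries one replicate of the same researcher-designed prototype, can be sketched as follows. The function name and treatment labels are ours, not from the paper; the essential features are a common treatment set across farms and independent randomization of plot order within each farm.

```python
import random

# Sketch of a type 2 (researcher-designed, farmer-managed) layout:
# each farm hosts the same set of treatment plots, randomized
# independently within the farm so farms act as replicates.
# Treatment names are illustrative only.

def farm_layouts(farms, treatments, seed=0):
    rng = random.Random(seed)          # fixed seed -> reproducible layout
    layouts = {}
    for farm in farms:
        order = treatments[:]          # copy so each farm is shuffled afresh
        rng.shuffle(order)             # independent randomization per farm
        layouts[farm] = order
    return layouts

layout = farm_layouts(["farm_A", "farm_B", "farm_C"],
                      ["control", "fallow_sesbania", "fallow_tephrosia"])
for farm, order in layout.items():
    print(farm, order)
```

Because plot size and treatment set are uniform across farms, measurements such as labor per plot or yield per plot remain directly comparable, which is what makes the cost–return analysis above feasible.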
Type 3: Trials designed and
managed by farmers
In type 3 trials, farmers are briefed about
new practices through visits to field
stations or on-farm trials. They then
plant and experiment with the new
practices as they wish. They are not
obliged to plant in plots or to include
control plots. Researchers monitor the
farmers’ experiments, or a subsample of
them, focusing in particular on their
assessment of the new practice and their
innovations. In addition farmer-to-
farmer visits and meetings are useful for
farmers to compare their experiences
and assessments with others. Any
farmers experimenting with a new
practice could be said to have a type 3
trial, regardless of whether they
obtained planting material and
information from researchers, other
facilitators, or other farmers. This hands-off
approach, which assumes that
farmers best know how to test a new
practice on their own farms, is
supported by some in the literature
(Lightfoot 1987). Others emphasize
training farmers to conduct trials
following scientific principles, such as
replication and non-confounding of
treatments (Ashby et al. 1995).
Suitability of Trial Types
for Meeting Objectives
The suitability of the different trial types
for differing objectives is summarized in
Table 1. Suitability involves both the
appropriateness of the trial for collecting
the information and the ease with which
it can be collected. Different types of
trials are suited to different types of
analyses. Biophysical measurements are
most meaningful in types 1 and 2 trials;
they are less useful in type 3 trials
because each farmer may manage the
practice in a different manner. Type 2
trials are well suited for collecting
parameters (e.g., labor use) for financial
analysis; such data are difficult to collect
in type 3 trials because plot size and
management vary. The data can be
collected in type 1 trials but will be less
relevant to farmer circumstances; yield
response to new practices tends to be
biased upward, and labor use, measured
using laborers hired by researchers and
working on small plots, is
unrepresentative of farmers’ labor use.
Farmers’ assessments are more accurate
in type 3 trials for several reasons.
Because farmers control the
experimental process, they are likely to
have more interest and information on
the practice. Furthermore, because
farmers in type 3 trials usually have less
contact with researchers than farmers in
other types of trials, their views of a
technology are less influenced by
researchers’ views. Finally, whereas it is
often necessary to provide inputs to
farmers in type 2 trials to ensure that
results are comparable across farmers,
no inputs, with the possible exception of
planting material, are provided in type 3
trials. Thus farmers’ views in type 3
trials are more likely to be sincere than in
type 2 trials, where positive assessments
may simply reflect the farmers’ interest
and satisfaction in obtaining free inputs.
For example, in a hedgerow
intercropping trial in western Kenya
(Swinkels and Franzel 1997), 50% of the
farmers claimed that hedges increased
crop yields, whereas technicians noted
yield increases on only 30% of farms—
the technicians claimed that the
difference was due to farmers trying to
please researchers.
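One way to probe a gap like the 50% (farmer claims) versus 30% (technician observations) in the Kenya example is a two-proportion z-test. The sample size below is an assumption of ours; the paper does not report how many farms were assessed, so the sketch shows the mechanics rather than a conclusion about that trial.

```python
from math import sqrt

# Hedged sketch: two-proportion z statistic for comparing the share of
# farms where farmers claimed a yield increase vs. the share where
# technicians observed one. The n of 30 farms per assessment is a
# hypothetical assumption, not a figure from the paper.

def two_proportion_z(p1, n1, p2, n2):
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_proportion_z(0.50, 30, 0.30, 30)   # assumes 30 farms per assessment
print(round(z, 2))                          # → 1.58
```

With these assumed numbers the difference would not be conventionally significant, which is a reminder that apparent courtesy bias should itself be tested, not just asserted.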
Finally, all three types of trials play a
potentially important role in defining the
boundary conditions for the technology,
i.e., the biophysical and socioeconomic
conditions under which the practice is
likely to be adopted by farmers. Which
type of trial is best depends on the
objectives and particular circumstances
of the participants (facilitators and
farmers).
Continuum and sequencing of
trial types
The different types of trials are not
strictly defined, rather they are best seen
as points along a continuum. For
example, it is common for a trial to fit
somewhere between type 2 and type 3,
as in the case where farmers agree to test
a specific protocol (type 2), but over
time, individuals modify their
management of the trial (type 3). For
example, in the hedgerow intercropping
trial in western Kenya mentioned above,
farmers planted trials in a similar
manner but most later modified such
variables as the intercrop, pruning
height, and pruning frequency.
The types of trials are not necessarily
undertaken sequentially; researchers and
farmers may decide to begin with a type
3 trial, or to simultaneously conduct two
types of trials. For example, in the case
of upper-storey tree trials in western
Kenya (Franzel et al. 2002b), no type 1 or
type 2 trials were needed because much
was already known about the growth of
the trees in the area. Rather, farmers
planted type 3 trials in order to assess
the performance of the species on their
farms. In Zambia, many farmers planted
type 2 and type 3 improved fallow trials
in the same year (Kwesiga et al. 1999).
They tested a particular set of practices
in their type 2 trials and used type 3
trials either to extend their plantings or
to test a modification of the practice.
Researchers wished to assess biophysical
response in the type 2 trials and to
monitor farmers’ innovations in the type
3 trials. Types 2 and 3 trials often
generate questions or sharpen
hypotheses about biophysical factors,
which can then be best evaluated
through type 1 on-farm or on-station
trials. In western Kenya, several
researcher-managed trials to explore
Table 1. The suitability of types 1, 2, and 3 trials for meeting specific objectives.

Information types                                    Type 1   Type 2   Type 3
Biophysical response                                    H        M        L
Profitability                                           L        H        L
Acceptability
  Feasibility                                           L        M        H
  Farmers’ assessment of a particular prototype§        L        H        M
  Farmers’ assessment of a practice                     L        M        H
Other
  Identifying farmer innovations                        0        L        H
  Determining boundary conditions                       H        H        H

Type 1 = researcher designed, researcher managed; Type 2 = researcher designed, farmer managed; Type 3 = farmer designed,
farmer managed.
H = high, M = medium or variable, L = low, 0 = none. Suitability involves both the appropriateness of the trial for collecting the
information and the ease with which the information can be collected.
§By particular prototype, we mean a practice that is carefully defined. For example, a prototype of improved fallows would include specific
management options such as species, time of planting, spacing, etc.
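Table 1 can also be encoded as a small lookup, sketched below. The H/M/L/0 labels follow the table; the ordering H > M > L > 0, the function name, and the key strings are our own choices for illustration.

```python
# Table 1 as a lookup: suitability of trial types 1 (researcher
# designed/managed), 2 (researcher designed, farmer managed), and
# 3 (farmer designed/managed) for each information type.
SUITABILITY = {
    "biophysical response":            {1: "H", 2: "M", 3: "L"},
    "profitability":                   {1: "L", 2: "H", 3: "L"},
    "feasibility":                     {1: "L", 2: "M", 3: "H"},
    "assessment of a prototype":       {1: "L", 2: "H", 3: "M"},
    "assessment of a practice":        {1: "L", 2: "M", 3: "H"},
    "identifying farmer innovations":  {1: "0", 2: "L", 3: "H"},
    "determining boundary conditions": {1: "H", 2: "H", 3: "H"},
}

RANK = {"H": 3, "M": 2, "L": 1, "0": 0}  # assumed ordering of the labels

def best_trial_types(objective):
    """Return the trial type(s) with the highest suitability for an objective."""
    scores = SUITABILITY[objective]
    top = max(RANK[s] for s in scores.values())
    return [t for t, s in scores.items() if RANK[s] == top]

print(best_trial_types("profitability"))                    # → [2]
print(best_trial_types("determining boundary conditions"))  # → [1, 2, 3]
```

The last line reflects the paper's point that all three types help define boundary conditions, so the choice there rests on the participants' circumstances rather than on the table.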
specific aspects of improved fallow
function and design were set up
following farmer-managed trials
(Swinkels et al. 1997).
Handling complexity
Complexity is determined by the
number and diversity of components
(intercropping trees and crops, as
opposed to trees or crops in pure stand),
the length of the cycle of the technology
(3+ seasons as opposed to single-season
cycles), and the size of the trial (whether
it takes up more than 10% of a farmer’s
cultivated area). In a trial comparing
annual crop varieties, it is often possible
to combine biophysical and
socioeconomic objectives because
according to the above definition, the
trial is not complex. However, most
agroforestry trials are complex and thus
different trial types are needed to meet
the differing objectives.
Promoting farmer innovation
Promoting farmer innovation is an often-
mentioned objective of on-farm trials,
yet little is written on how to achieve it.
Type 2 trials require the standardizing of
practices across farms, thus actually
reducing farmers’ motivation to
innovate. Only in type 3 trials, where
farmers completely control the
experimental process, are farmer
innovations likely to emerge and be
captured. In type 3 trials on improved
tree fallows in eastern Zambia, two of
the main technological components
being extended to farmers emerged from
farmer innovations in type 3 trials
(Kwesiga et al. 1999; Franzel et al. 2002c).
In the first example, farmers were given
potted seedlings, raised at farmer
training centers, for planting improved
fallows on their farms. In order to reduce
the cost of transporting them to the
farms, a farmer removed the seedlings
from the pots and carried them bare-
rooted in basins. When farmers’
plantings of these seedlings proved
successful, researchers conducted type 1
trials to compare the performance of
bare-rooted seedlings grown in raised
seedbeds with potted seedlings. They
found no significant difference in
performance and, as potted seedlings
were much more costly to produce, they
were phased out (Kwesiga et al. 1999).
Farmers’ second main innovation,
intercropping trees with maize during
the year of tree establishment, was also
later tested in on-farm trials. The trials
found that intercropping reduces maize
yields and tree growth during the year
of establishment, but most farmers
prefer it because it economizes on land
and labor use relative to planting in pure
tree stands.
Conclusions
The type 1-2-3 classification system is
useful for highlighting the different
objectives for conducting on-farm trials
and for illustrating the suitability of
different types of trials for particular
types of assessments. It is tempting for
researchers to use the same on-farm trial
to collect information on biophysical
responses and farmer assessment;
however, these objectives often
conflict. A high degree of control is
needed to collect accurate biophysical
data, whereas farmer assessment is most
valid when individual farmers are
allowed to use the practice in the
manner they see fit. Researchers and
farmers interested in biophysical and
Steven Franzel and Richard Coe
7
socioeconomic data may be better off
conducting type 1 trials for biophysical
data and type 3 trials for socioeconomic
assessment, rather than a single type 2
trial that tries to do both. The more
complex the trial or technology, the less
effective a type 2 approach is likely to be
for both biophysical and socioeconomic
assessments.
References
Ashby, J., T. Gracia, M. Guerrero, C. Quiros, J. Roa, and J. Beltran. 1995. Institutionalizing farmer participation in adaptive technology testing with the 'CIAL'. Overseas Development Institute Network Paper 57. London: ODI.
Biggs, S. 1989. Resource-poor farmer participation in research. The Hague: ISNAR.
Franzel, S., L. Hitimana, and E. Akyeampong. 1995. Farmer participation in on-station tree species selection for agroforestry: a case study from Burundi. Experimental Agriculture 31:27-38.
Franzel, S., R. Coe, P. Cooper, F. Place, and S.J. Scherr. 2002a. Methods for assessing agroforestry adoption potential. In S. Franzel and S.J. Scherr (eds.), Trees on the farm: assessing the adoption potential of agroforestry in Africa. Wallingford, U.K.: CAB International.
Franzel, S., J.K. Ndufa, C.O. Obonyo, T. Bekele, and R. Coe. 2002b. Farmer-designed agroforestry tree trials: Farmers' experiences in western Kenya. In S. Franzel and S.J. Scherr (eds.), Trees on the farm: assessing the adoption potential of agroforestry practices in Africa. Wallingford, U.K.: CAB International.
Franzel, S., D. Phiri, and F.R. Kwesiga. 2002c. Assessing the adoption potential of improved tree fallows in eastern Zambia. In S. Franzel and S.J. Scherr (eds.), Trees on the farm: assessing the adoption potential of agroforestry practices in Africa. Wallingford, U.K.: CAB International.
Gittinger, J.P. 1982. Economic analysis of agricultural projects. Baltimore: Johns Hopkins University Press.
Kwesiga, F.R., S. Franzel, F. Place, D. Phiri, and C.P. Simwanza. 1999. Sesbania sesban improved fallows in eastern Zambia: their inception, development, and farmer enthusiasm. Agroforestry Systems 47:49-66.
Lightfoot, C. 1987. Indigenous research and on-farm trials. Agricultural Administration 24:1-11.
Okali, C., J. Sumberg, and J. Farrington. 1994. Farmer participatory research: Rhetoric and reality. London: Intermediate Technology Publications.
Shepherd, K., R. Swinkels, and B. Jama. 1994. A question of management: the pros and cons of farmer- and researcher-managed trials. Agroforestry Today 6(4):3-7.
Sperling, L., M.E. Loevinsohn, and B. Ntabomvura. 1993. Rethinking the farmers' role in plant breeding: Local bean experts and on-station selection in Rwanda. Experimental Agriculture 29(4):509-518.
Stroud, A. 1993. Conducting on-farm experiments. Cali, Colombia: CIAT.
Swinkels, R., and S. Franzel. 1997. Adoption potential of hedgerow intercropping in the maize-based cropping systems in the highlands of western Kenya. Part II: Economic and farmers' evaluation. Experimental Agriculture 33:211-223.
Swinkels, R., S. Franzel, K. Shepherd, E. Ohlsson, and J. Ndufa. 1997. The economics of short rotation improved fallows: evidence from areas of high population density in western Kenya. Agricultural Systems 55:99-121.
van Veldhuizen, L., A. Waters-Bayer, and H. de Zeeuw. 1997. Developing technologies with farmers: A trainer's guide for participatory learning. London: Zed Books.
Participatory On-Farm Technology Testing
8
Discussion Summary
The discussion focussed on the need to make the type 1-2-3 trial classification relevant in
terms of participatory plant breeding. A key question was why do we need participatory
on-farm trials in breeding? One answer was that participatory on-farm trials can generate
information from many different environments at a lower cost. This was a recurrent topic
throughout the workshop. The data generated from certain types of participatory trials
(particularly type 1) can be useful for variety release committees, but this may require
convincing and educating these committees on the usefulness and validity of data
generated through this type of trial. Participatory trials can be a tool to gauge farmers’
acceptability of different varieties and to provide data that cannot be generated through
more conventional methods, such as performance under farmer management. These trials
also can help to identify varieties that are appropriate for different niches. Moreover, they
can be particularly useful for generating testable hypotheses.
Further discussion centered on the value and limitations of different types of trials,
particularly type 3. It was argued that type 1 trials may be good for release proposals and
type 3 for biophysical performance. While some felt that type 3 trials could not provide
good biophysical data, it was pointed out that tapping into farmers’ knowledge and their
ability to recognize and characterize the niches where they farm could overcome the
problems of characterization. In fact the need for proper characterization emerged as one
of the most important topics for participatory trials. Characterization is important for
controlling variability and provides the covariates that would allow for better interpretation
of data generated in these trials.
Another important issue discussed was the need for and the limitation of measuring yield
in types 2 and 3 trials. It is rare to obtain good quality yield measurements under the
conditions prevalent in these types of trials. A question raised was whether direct yield
measurements can be substituted with rating or ranking data from farmers involved in the
trial. This topic generated much interest during the workshop. There is also a need for
simple evaluation and measurement protocols that can be handled by farmers. Simpler
protocols can enhance farmer participation.
A useful approach to stimulate type 3 experiments is to give farmers seed of particular
varieties and then monitor the results. This could provide important information on
acceptability and potential adoption. However, monitoring should be carried out for more
than two years, since it may not be until the third year that farmers feel more confident
and begin to appreciate the varieties.
Quantifying Farmer Evaluation
of Technologies: The Mother and
Baby Trial Design
Sieglinde Snapp
Abstract
This paper presents five years of experience in Malawi, experimenting with a novel mother and baby
trial design to systematically connect assessment of technologies by farmers with biological
performance. This design consists of two types of trials. The “mother” trial is replicated within-site to
test a range of technologies and research hypotheses under researcher management. This trial is either
located on a research station or on-farm, e.g., at a central location in the village. The “baby” trial
comprises a number of satellite trials (each trial is one replicate) of large plots under farmer
management, using the farm's own resources. Each trial compares one to four technologies (usually a subset of
those tested in the mother trial chosen by the farmer or researcher) with farmers’ technologies/cropping
systems. Researchers indicate the recommended management for each technology, then monitor actual
farmer practice, and document farmer perceptions and ranking. Researchers test complex questions
(e.g., variety response to inputs) at the central mother trial, while farmers gain experience with the
subset of technologies. Farmer perceptions are systematically monitored together with biological
performance of the technologies. Farmer participation in the design of baby trials can vary from limited
to high, depending on the research objectives. This linked trial process provides quantitative feedback to
researchers for improving the design of future technologies. In this study, different analytical tools were
used in conjunction with the trial design. Analysis of biological performance data included "adaptability
analysis", which consists of regressing yield or other data against an environmental index (average
yield, a soil factor, or others), analysis of variance, marginal rate of return economic analysis, and
evaluation of benefits and risk aversion. Analysis of survey data included descriptive statistics for the
answers of different farmer groups, analysis of farmers' rankings of technologies, and grouping answers
from open-ended questions and expressing each answer as a percentage of all answers.
Introduction
There has been limited adoption of
improved seed and farming technologies
by smallholder farmers in many regions
of the world. According to the
participatory research literature, one of
the major barriers to uptake has been
insufficient attention to understanding
farmer priorities and perceptions
(Chambers et al. 1989; Ashby and
Sperling 1995). Researchers and
extension staff are frequently aware that
farmers need to be consulted and that
indigenous knowledge should be
documented; however, the time and
resources required for participatory
research are seen as onerous (Snapp et
al. 2002a). Rigorous and practical tools
are urgently required to improve the
process of participatory variety selection
and technology development (Bellon
1998). This workshop was convened to
address the need for quantitative
methodology and statistical approaches
to document farmer criteria and
perceptions.
In this paper we discuss five years of
experience in Malawi, experimenting
with a novel mother and baby trial
design to systematically connect farmer
assessment of technologies with
biological performance (Snapp 1999).
Methodical cross-checking of
performance evaluation by researchers
and farmers provides complementary
rather than competing information from
conventional research and participatory
processes (van Eeuwijk et al. 2001). We
investigated the biological performance
of intensified legume use within a
maize-based system, and invented the
mother and baby trial concept to test the
potential for widespread adoption of
these technologies by smallholder
farmers in southern Africa (Snapp et al.
2002b).
The lessons regarding on-farm trial
design and documenting farmer
perceptions appear to have wider
application than Malawi—the mother
and baby trial design is meeting
acceptance by many researchers in the
region. Scientists from the International
Maize and Wheat Improvement Center
(CIMMYT) have recently adapted the
trial design, using an incomplete lattice
design for baby trials, to conduct
hundreds of linked mother and baby
trials in southern and eastern Africa
(Bänziger and Diallo 2001). A survey of
30 participatory research scientists
conducted in 2001 found that 11 were
using the mother and baby trial design
or were in the process of adopting it,
which frequently included adapting it to
local circumstances (Morrone and Snapp
2001). The primary reason cited for
interest in the approach was the ability
to systematically involve many farmers
and to rapidly elicit evaluation of
technologies and varieties.
On-Farm Trial
Methodology
It is now over 20 years since the farming-
system approach was initiated in
southern Africa, and research is now
primarily conducted on-farm (Heisey
and Waddington 1993). Methods to
document the biological performance
and yield potential of varieties and
technologies are widely known. For
example, it is highly recommended that
on-farm trials be conducted at
representative, well characterized sites,
so that results can be extrapolated to
recommendation domains. In some cases
researchers use trial designs on-farm
similar to those conducted at research
stations, with four or five replicated
plots per treatment and a randomized
complete block or similar design.
Generally farmers are treated in a
contractual manner, and this trial design
can be an effective means for evaluating
technology performance under edaphic
conditions typical of a farming
community.
Another widely used approach is to
conduct a large number of on-farm trials
to evaluate technology performance
across a spectrum of environments
(Fielding and Riley 1998; Mutsaers et al.
1997). This takes into account the
variability of the heterogeneous
environment that characterizes many
smallholder regions. A trial design
where each site acts as a replicate is one
approach that allows many
environments to be sampled (Mutsaers
et al. 1997). Adaptability analysis and
related statistical tools can use data from
the many sites to evaluate technology
performance across different
environments. This may make it
possible to detect which varieties
perform best in a weedy environment or
on acid soils, for example (Hildebrand
and Russell 1996). Another recently
developed tool for multi-environment
trial data is multiplicative mixed
models, which can be used to model
genetic variances and covariances.
These statistical approaches are
illustrated by van Eeuwijk and
colleagues (2001) for participatory
breeding and variety selection in barley.
Despite the extensive on-farm
experience of many research programs,
there is still widespread inability to
understand or take account of farmers’
priorities. Farmers’ production priorities
are often assumed to focus on
maximizing yields or financial returns,
while in reality they may concentrate
on gaining the best return from a very
small cash investment, or on
maximizing food security (Snapp et al.
2002a). Tools to evaluate potential
profitability of technologies from trial
data are documented, such as partial
budgeting to estimate economic returns
(CIMMYT 1988).
In contrast to economic budgets, there is
limited documentation of methodology
that systematically involves farmers in
technology evaluation. There are a few
outstanding examples, however, such as
the use of expert farmer panels to
document farmer criteria and improve
variety selection in West and East Africa
(Sperling et al. 1993; Kitch et al. 1998).
Other methods are described in
newsletters, working papers, and other
publications that are important, but can
be difficult to access (Bellon 1998;
Kamangira 1997). Here we describe an
approach that facilitates and documents
the hands-on experience of farmers. This
provides a relatively rapid and rigorous
approach to systematically involving
farmers in the development of best bet
technologies or varieties. Researchers
assess input from farmers through
surveys, farmer ranking of technologies,
and by monitoring farmer adaptations
and spontaneous experimentation
(Snapp et al. 2002a). Through the mother
and baby trial design we catalyze and
improve on the ongoing
experimentation by farmers through a
systematic process.
Mother and Baby Trial
Case Study
The sites
Four agroecosystems for participatory
research were chosen in Central and
southern Malawi, where about 70% of
the country’s smallholder agriculture is
practiced. The agroecosystems, with the
study sites in parentheses, are:
1. Central Malawi: subhumid, mid
altitude plain (Chisepo, Mitundu, and
Mpingu)
2. Central Malawi: subhumid, high
altitude hills (Bembeke)
3. Malawi lakeshore: semi-arid zone
(Chitala and Mangochi)
4. Southern Malawi: subhumid, mid
altitude plateau (Songani)
Mother and baby trial design
The “mother and baby” trial was named
by one of the farmers involved in the
trials. The “mother” trials test many
different technologies, while the “baby”
trials test a subset of three or fewer
technologies, plus one control (Snapp
1999). The design makes it possible to
collect quantitative data from mother
trials managed by researchers, and to
systematically crosscheck them with
baby trials on a similar theme that are
managed by farmers (Figure 1). The
design is flexible: the mother trials
described here were located on-farm at
central locations in villages, but they can
be located at nearby research stations
(Snapp 1999). The level of farmer
participation in baby trial design and
implementation can vary from
consultative to collaborative. We discuss
here a consultative process where
researchers lead the implementation of
baby trials; however, the role of farmer
participation in baby trials can be much
greater. For example, at the Bembeke site,
the nongovernmental organization
(NGO) Concern Universal has catalyzed
greater farmer involvement, including in
baby trial design (Figure 2).
This study started in 1996, when soil
scientists and agronomists from the
University of Malawi and the Malawian
Department of Agriculture and
Irrigation met to synthesize published
information and results from years of
on-farm research (Figure 3a).
Reconnaissance surveys and village
meetings helped to form the hypotheses
that smallholder farmers have limited
resources, use small amounts of mineral
fertilizer, and experiment with
alternative nutrient sources such as
legume residues (Kanyama-Phiri et al.
2000). Researchers designed best bet
technologies to improve soil
productivity that required minimal cash
and labor.

Figure 1. Mother and baby trial design layout. A mother trial is centrally located in a village or at a nearby research station and replicated on-site. Baby trials are located in farmers' fields and compare a subset of technologies or varieties from the mother trial. Each baby trial site is a replicate. [Diagram: a central researcher-managed mother trial, a replicated design to evaluate many treatments and controls (often >30 plots), surrounded by farmer-managed baby trials of about 4 plots each.]

Figure 2. Different levels of farmer and researcher participation in the design and implementation of baby trials.
Farmer-led: These trials often involve input from nongovernmental organizations or other farm advisors. Plots are large and informally laid out. Simple paired comparisons of a new option with current farmer practice are often made.
Researcher-led: Generally researchers choose four or more best-bet technology options for comparison. These are a subset of all of the options from the mother trial. Farmers manage the trial and researchers monitor farmers' practice.
Cooperative effort: Farmers choose among the best bet options presented by researchers and extension workers. A comparison is conducted between these options and farmer-designed controls, the farmers' best bet options. Plots are laid out by farmers with input from researchers.

Representative villages in key agroecosystems were chosen on the basis of information from community meetings, consultations with extension staff, and government statistics on population density and agroclimatic data (Snapp et al. 2002a). The selected villages had to be representative of the four major agroecozones, as well as typical in terms of population density and access to markets.
The researchers involved in the mother
and baby trials selected the “test”
farmers in collaboration with
community members at a meeting. They
asked for volunteers and stressed the
need to include both well-off farmers
and those with few resources, as well as
households headed by women. The trial
design was geared to meet both farmers’
and researchers’ objectives, which by no
means are identical. Relatively simple
“one-farmer, one-replica” trials were
managed by farmers as satellites or baby
trials to a central mother trial, which was
managed by researchers and had within-
site replications (Figure 1). A trial design
with a maximum of four plots and no
replication within the farmer’s field fits a
limited field size, simplifies the design,
and makes it easier for farmers to
evaluate technologies.
Many replicates across sites make it
possible to sample wider variations in
farm management and environment
(Fielding and Riley 1998; Mutsaers et al.
1997). However, replication within a site
and intensive, uniform management
improve research on biological
processes. The mother and baby trial
design is the first attempt we are aware
of that methodically links “replicated
within a site” researcher-led trials with
“one site, one replica” farmer-led trials
(Figure 1). Van Eeuwijk and colleagues
(2001) advocate using both types of
trials, but do not explore the deliberate,
simultaneous use of the trials in a design
that systematically links the two.
Technology evaluation in the
mother-baby trial approach
Farmers initially chose their test
technologies on the basis of information
given in introductory community
meetings (Figure 3a). Descriptions of
promising technology options were
presented, and visits to research station
trials were arranged where possible.
Researchers and assistants provided
supervision and interaction through
monthly visits to sites. Enumerators
were based at each site to assist with
trial setup and measurements, in
collaboration with local extension or
NGO staff and farmers (Figure 3b).
Training in participatory approaches and
survey techniques to reduce bias was
conducted at annual project meetings.
Plot size for mother and baby trials was
approximately 8 m by 8 m. Ridges were
prepared by hoe and placed about 0.9 m
apart, following conventional practice. A
wide range of cropping system
technologies was compared to current
farmer practice, as described in Snapp et
al. (2002a). The mother trials were
planted by extension staff with assistance
from enumerators within 10 days of the
arrival of the rainy season. It was
interesting to note that farmers were
very timely in planting their baby
trials—in many cases they were planted
before the mother trials.
Data collected from trials included: plot
size measurements, planting date,
emergence date, population density at
emergence, early weed cover, dates
when plot was weeded (plots were
weeded twice, approximately 5 and 10
weeks after planting), aboveground
biomass of a subsample of legumes
measured at flowering, harvest plant
population, and grain yield at harvest.
Fresh weight measurements were
conducted in the field, and subsamples
of 5-15 kg were collected to determine
grain moisture content and dry weight
to fresh weight conversions. Soil samples
from the topsoil were collected at all
sites, and soil pH, organic carbon,
inorganic nitrogen, and texture analyses
were conducted (Snapp et al. 2002a).
The farmers provided quantitative
feedback on their evaluation of
technologies to researchers through
surveys, paired matrix ranking, and by
rating technologies. Examples of the
type of short survey and rating exercises
used are presented by Bellon (1998).
Qualitative feedback was obtained from
meetings between farmers and
researchers and comments recorded at
field days. The mother trials were
evaluated more informally during
discussions held at field days. This made
it possible to integrate the farmers’
assessment and improve research
priority setting (Figure 3c). Meetings
were also held with senior stakeholders,
conducted as part of an iterative process
to maintain support and inform priority
setting at every level. This included
policymakers, supervisors of extension
and NGO staff, senior researchers, and
industry representatives (Figure 3c).
Statistical analysis
Adaptability analysis was used for an
initial review of all the data combined
from mother and baby trials (Hildebrand
and Russell 1996).

Figure 3. The sequence of steps in designing and implementing mother and baby trial methodology. Approximate time allocation for activities in a) year one, b) year two, and c) year three of the mother and baby trial approach.
a) Year 1: 1. conduct literature review; 2. choose representative sites; 3. meet with senior stakeholders; 4. meet with farming communities.
b) Year 2: 1. communities and researchers choose technologies and farmers; 2. hire local enumerators; 3. conduct baseline survey of farmers; 4. monitor soils; 5. initiate trials and evaluate with farmers.
c) Year 3: 1. conduct second-year trials and survey farmers; 2. analyze results; 3. meet with farming communities; 4. meet with senior stakeholders.

This regression
approach allows performance of
technologies to be compared across a
range of environments, where average
yield or edaphic factors are used as an
environmental index. Yield potential of
varieties under stressed conditions can
be reviewed through adaptability
analysis, providing insight into the risk
associated with different technologies.
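As an illustration of the mechanics of adaptability analysis, the sketch below regresses each technology's yield on an environmental index computed as the site mean yield, in the spirit of Hildebrand and Russell (1996). All yield values are hypothetical:

```python
# Adaptability analysis sketch: regress each technology's yield on an
# environmental index (here, the mean yield of all technologies at each site).
# Yield values (t/ha) are hypothetical, not data from the Malawi trials.
import numpy as np

yields = {  # yields[technology] -> array of yields across sites
    "fertilized maize": np.array([1.0, 2.2, 3.1, 4.5]),
    "legume rotation":  np.array([1.4, 2.0, 2.8, 3.6]),
    "farmer practice":  np.array([0.8, 1.5, 2.1, 2.9]),
}

# Environmental index: site mean yield across all technologies.
env_index = np.mean(np.vstack(list(yields.values())), axis=0)

for tech, y in yields.items():
    slope, intercept = np.polyfit(env_index, y, 1)
    print(f"{tech}: intercept = {intercept:.2f}, slope = {slope:.2f}")
```

A slope greater than 1 indicates a responsive technology that does best in favorable environments, while a slope below 1 combined with a relatively high intercept indicates a stable option that carries less risk in poor environments.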
A more rigorous approach is provided
by mixed models, such as factor-
analytic models for modeling variance
and covariance from multi-environment
trial data (van Eeuwijk et al. 2001). An
incomplete lattice design for the baby
trials allows CIMMYT scientists to
systematically evaluate new stress-
tolerant varieties of maize (Bänziger
and Diallo 2001).
Our statistical analyses relied on the
analysis of variance module of a
statistical package (Statsoft 1995). Maize
yield gain in year two of the mother
trials was evaluated through a two-way
analysis of variance for technology and
location. Where technology effects were
significant, a planned non-orthogonal
comparison was used to evaluate mean
technology effects against the control
(continuous maize without nutrient
inputs). A separate one-way analysis of
variance was conducted for the baby
trials to evaluate the effect of
technologies. Descriptive statistics were
computed for farmer rating data, and
means were compared using paired
t-tests (Taplin 1997).
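These analyses can be sketched as follows, using SciPy in place of the Statistica package actually used in the study; all yield and rating values are hypothetical illustrations:

```python
# Sketch of the analyses of variance and paired t-tests described above,
# using SciPy in place of the Statistica package used in the study.
# All yield and rating values are hypothetical illustrations.
import numpy as np
from scipy import stats

# One-way ANOVA on baby-trial maize yields (t/ha): each farm is one replicate.
control    = np.array([1.1, 0.9, 1.4, 1.0, 1.2])  # continuous maize, no inputs
legume     = np.array([1.6, 1.3, 1.9, 1.5, 1.7])
fertilized = np.array([2.0, 1.8, 2.4, 1.9, 2.2])
f_stat, p_anova = stats.f_oneway(control, legume, fertilized)

# Paired t-test on farmer ratings (1-5 scale): each farmer rates both options,
# so observations are paired within farmer.
rating_legume   = np.array([4, 5, 3, 4, 4, 5])
rating_practice = np.array([2, 3, 3, 2, 3, 3])
t_stat, p_rating = stats.ttest_rel(rating_legume, rating_practice)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_rating:.4f}")
```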
Economic analysis
Economic analysis of net benefits was
conducted over two years. This allowed
comparison of best bet technologies that
involved intercrop systems and rotation
treatments requiring a two year
evaluation period. Net benefits were
computed as the value of maize and
legume grain yields (total price benefits)
minus the costs of fertilizer and legume
seed inputs (CIMMYT 1988).
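A minimal sketch of this kind of partial-budget calculation, in the spirit of CIMMYT (1988), is shown below; the prices, yields, and input costs are hypothetical placeholders, not values from the Malawi study:

```python
# Partial-budget sketch of the net benefit calculation, in the spirit of
# CIMMYT (1988). Prices, yields, and input costs are hypothetical placeholders,
# not values from the Malawi study.
MAIZE_PRICE = 0.10   # $ per kg of maize grain
LEGUME_PRICE = 0.25  # $ per kg of legume grain

def net_benefit(maize_kg, legume_kg, input_costs):
    """Value of maize and legume grain over the evaluation period,
    minus the cost of fertilizer and legume seed inputs."""
    return maize_kg * MAIZE_PRICE + legume_kg * LEGUME_PRICE - input_costs

# Two-year totals for continuous maize versus a best bet legume rotation.
nb_control  = net_benefit(maize_kg=2000, legume_kg=0,   input_costs=0)
nb_rotation = net_benefit(maize_kg=2600, legume_kg=400, input_costs=30)

# Marginal rate of return: extra net benefit per extra dollar invested.
mrr = (nb_rotation - nb_control) / (30 - 0)
print(nb_control, nb_rotation, round(mrr, 2))
```

The marginal rate of return line reflects the abstract's mention of marginal rate of return analysis: technologies are attractive to resource-poor farmers when each extra dollar invested returns well over a dollar of extra net benefit.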
Conclusion
By facilitating hands-on experience for
farmers, the mother and baby trials
provide a relatively rapid approach to
developing improved varieties and soil
management technologies. In contrast to
some approaches which merge
objectives, such as research validation of
technologies and farmer
experimentation, the goal of the mother
and baby trial approach is to facilitate
communication across different
approaches to experimentation and
information flow among the partners.
The linked trial design provides
researchers with tools for quantifying
feedback from farmers. Farmer input
generated new insights, such as the need
to broaden the research focus beyond
soil fertility or variety selection to
include system-wide benefits such as
weed suppression. Some Malawi
extension staff and researchers have
expressed reservations about the time
requirements for participatory
approaches; however, the success of the
approach is reflected in the uptake of the
mother and baby trial design by
researchers in ten neighboring countries.
References
Ashby, J.A., and L. Sperling. 1995. Institutionalizing participatory, client-driven research and technology development in agriculture. Development and Change 26:753-770.
Bänziger, M., and A.O. Diallo. 2001. Stress-tolerant maize for farmers in sub-Saharan Africa. In Maize Research Highlights 1999-2000. Mexico, D.F.: CIMMYT. www.cimmyt.org/Research/Maize/.
Bellon, M.R. 1998. Farmer participatory methods for soil fertility research. Soil FertNet Methods Working Paper No. 4. Harare, Zimbabwe: CIMMYT. 25 pp.
Chambers, R., A. Pacey, and L.A. Thrupp. 1989. Farmers first: farmer innovation and agricultural research. London, U.K.: Intermediate Technology Publications.
CIMMYT (International Maize and Wheat Improvement Center). 1988. From agronomic data to farmer recommendations: an economic training manual. Completely revised edition. Mexico, D.F.: CIMMYT. 78 pp.
Fielding, W.J., and J. Riley. 1998. Aspects of design of on-farm fertilizer trials. Experimental Agriculture 34:219-230.
Heisey, P., and S. Waddington (eds.). 1993. Impacts of on-farm research: Proceedings of a networkshop on the impacts of on-farm research in eastern and southern Africa. CIMMYT Network Report No. 24. Harare, Zimbabwe: CIMMYT. 475 pp.
Hildebrand, P.E., and J.T. Russell. 1996. Adaptability analysis: A method for the design, analysis and interpretation of on-farm research and extension. Ames, IA: Iowa State University. 189 pp.
Kamangira, J.B. 1997. Assessment of soil fertility status using conventional and participatory methods. MSc thesis, Department of Crop Science, Bunda College of Agriculture, University of Malawi. 107 pp.
Kanyama-Phiri, G., S. Snapp, B. Kamanga, and K. Wellard. 2000. Towards integrated soil fertility management in Malawi: Incorporating participatory approaches in agricultural research. Managing Africa's Soils Working Paper No. 11. London, U.K.: IIED. 28 pp.
Kitch, L.W., O. Boukar, C. Endondo, and L.L. Murdock. 1998. Farmer acceptability criteria in breeding cowpea. Experimental Agriculture 34:475-486.
Morrone, V.L., and S.S. Snapp. 2001. Uptake of a new on-farm trial design that includes the small-holder farmer. HortScience 36:477 (abstract).
Mutsaers, H.J.W., G.K. Weber, P. Walker, and N.M. Fisher. 1997. A field guide for on-farm experimentation. Ibadan, Nigeria: IITA/CTA/ISNAR. 235 pp.
Snapp, S.S. 1999. Mother and baby trials: A novel trial design being tried out in Malawi. Target, Newsletter of the Southern Africa Soil Fertility Network 17:8.
Snapp, S.S., G. Kanyama-Phiri, B. Kamanga, R. Gilbert, and K. Wellard. 2002a. Farmer and researcher partnerships in Malawi: Developing soil fertility technologies for the near-term and far-term. Experimental Agriculture (in press).
Snapp, S.S., D.D. Rohrbach, F. Simtowe, and H.A. Freeman. 2002b. Sustainable soil management options for Malawi: Can smallholder farmers grow more legumes? Agriculture, Ecosystems and Environment (in press).
Sperling, L., M.E. Loevinsohn, and B. Ntabomvura. 1993. Rethinking the farmer's role in plant breeding: Local bean experts and on-station selection in Rwanda. Experimental Agriculture 29:509-519.
Statsoft. 1995. Statistica for Windows. Tulsa, Oklahoma: Statsoft.
Taplin, R.H. 1997. The statistical analysis of preference data. Applied Statistics 46:493-512.
van Eeuwijk, F.A., M. Cooper, I.H. DeLacy, S. Ceccarelli, and S. Grando. 2001. Some vocabulary and grammar for the analysis of multi-environment trials, as applied to the analysis of FPB and PPB trials. Euphytica 122:477-490.
Discussion Summary
The discussion following the presentation dealt with the use of the mother-baby trial system in
the context of participatory plant breeding, and was divided into two themes: (1) the technical
advantages and disadvantages of mother-baby trials for selection and breeding, and (2) the
role of mother-baby trials in formal research systems.
Discussing the first theme, it was pointed out that one of the main advantages of this system,
particularly of the baby trials, is the number of trials that can be evaluated. Selection intensity
relates directly to genetic advance, and the objective is to obtain high precision ranking. The
choice is to evaluate a small number of plots more intensely or a larger number less intensely. It
was pointed out that the use of incomplete multilocation trials may sacrifice some precision, but
this is offset by gaining access to the appropriate environments. Furthermore, many
environments can be sampled at a lower cost, although it was pointed out that in India, on-
farm trials were more precise than station trials, and heritabilities can be higher. Farmers’
knowledge of their fields and their heterogeneity can be used to design the baby trial to
increase heritabilities. An issue that reappeared throughout the workshop was the
appropriateness of having many unreplicated trials in many different locations versus having
fewer replicated trials and therefore fewer locations. There appeared to be general agreement
that the former option may be better because it may generate more useful information at a
lower cost. Replication within a site may yield less information than sampling numerous sites,
particularly from a cost-effective viewpoint.
The other point discussed was the use of mother-baby trials in the formal research system in
relation to national agricultural research systems (NARS) and variety releases. A challenge is to
get NARS to assess the value of these new tools and to incorporate this type of trial system,
particularly in conjunction with participatory varietal selection (PVS), especially with PVS for
variety release systems. Involving national programs in PVS may be the most straightforward
way to link PVS with regulatory systems; PVS by the Centers of the Consultative Group on
International Agricultural Research should not be done in isolation. The linking of PVS,
innovative trial systems, and regulatory agencies is already underway in Nepal and Kenya. It
was pointed out that regulatory agencies are more closely linked to formal seed systems than
informal farmer-based systems in which PVS may take place. Regulatory committees may
disfavor systems that are perceived as threatening, and hence lobbying is necessary to make
the system more active. However, it is necessary that these committees do not perceive these
new approaches as substitutes, but more as cost-effective complements to their work. There
may be some resistance.
Quantifying Farmer Evaluation of Technologies
Analyzing Data from
Participatory On-Farm Trials
Richard Coe
Abstract

Researchers conducting participatory on-farm trials, particularly variety selection trials, often have difficulty analyzing the resulting data. The irregularity of trial designs means that some of the standard tools based on analysis of variance are not appropriate. In this paper some simple extensions to analysis of variance, using general linear models and linear mixed models, are shown to facilitate insightful analysis of these awkward designs.

Introduction

Data from on-farm trials take many forms, from crop yields measured on individual plots to the reported consensus of participants at a group meeting. Any set of data comprising multiple observations that are not all identical will require some sort of statistical analysis to summarize the common patterns. Choice of appropriate analysis methods depends on:

1. The objectives of the analysis
2. The design (who compared what treatments or varieties under which conditions)
3. The type of measurements taken

In the second section of this paper I discuss different styles and objectives of analysis. A formal approach, similar to that commonly conducted, for example, on crop yields measured in a classical variety trial using analysis of variance and reporting variety means, has a role in the analysis of some participatory trials. The irregularity of designs often means that the well known methods may be inappropriate. In the fourth section I show how some extensions to the usual methods can be used. Many researchers report that results from on-farm trials are highly variable. The fifth section shows how some of this variation may be interpreted to gain further insight, particularly into differing responses in different situations, or genotype by environment interaction (GEI). Examples used to illustrate the methods are introduced in the third section. The methods described in this paper are appropriate for responses measured on a continuous scale, such as crop yields. The analysis of responses recorded as scores or ranks is the subject of a companion paper (see Coe 2, this proceedings).
The methods presented in this paper are
neither new nor described in depth.
Technical descriptions can be found in
numerous publications including
Kempton and Fox (1997) and
Hildebrand and Russell (1998).
Approaches to Analysis
An assumption of this paper is that
participation and the systematic
collection, analysis, and interpretation of
data are not contradictory activities.
Among some practitioners there is a
belief that adoption of a participatory
paradigm removes the need, or even
makes it impossible, for researchers to
collect and analyze data. The purpose of
participation is seen as empowerment of
local people, which is inconsistent with
researchers conducting activities that
meet their own objectives. However,
many researchers recognize that broad
conclusions of relevance beyond the
immediate participants are still
necessary, and that a part of this research
must be the collection and interpretation
of data. Coe and Franzel (2000)
summarize the research design
principles that must still be followed if
the research is to lead to valid inferences.
A participatory approach does, however,
have implications for the collection,
analysis, and presentation of data. Data
collection is discussed in another section
of this paper. Data analysis can be for,
and to some extent by, different
participants, each of whom will have
their own interests and objectives. In the
case of participatory crop breeding trials,
participants may include farmers,
researchers, extension staff, and regional
planners. While a farmer is interested in
making decisions about varieties to
select for his/her farm, a regional
planner might be interested in average
performances, and a researcher in
reasons for heterogeneous responses.
Each will require a different type of
analysis. As researchers are often also
the facilitators of the whole process, it is
their responsibility to ensure that each
participant has the data they need in a
useful format.
It is particularly important for a
researcher to make data and results
available to farmers. There are at least
three reasons for this:
1. Farmers are supposed to be
beneficiaries of the activities and can
only benefit if information is given back
to them.
2. Giving farmers results is a courtesy as
they have made the research possible
through their involvement.
3. Farmers can provide considerable
insight into the analysis and results. It
is very common to hear the complaint
that data from on-farm trials are very
variable. This variation is a reality, and
understanding its causes should be an
objective of the research. Such an
understanding will eventually lead to
improved farmer decision making.
Farmers understand some of the
reasons for the variation, and their
insights can often provide a framework
or hypotheses for analysis.
When plant breeders conducted
classical, on-station experiments, the
analysis performed often followed a
standard pattern, for example, analysis
of variance followed by tabulation of
means and application of “means
separation procedures”. Often little
attention was paid to exploratory
analysis, designed to detect the main
patterns and surprising observations.
Nor was much effort made at
imaginative presentation of results—
researchers knew how to read the tables
and they were the intended audience.
When participatory approaches gained
popularity, analysts made attempts to
find interesting and informative
presentations of data, but tended to
forget about formal analysis, and, hence,
sometimes reached invalid conclusions.
Of course both approaches to analysis
are needed; they reinforce each other.
Graphical and exploratory methods
show the important results and reveal
odd observations and unexpected
patterns. Formal methods allow
measures of precision to be attached to
results and allow extraction of estimates
from complex data structures. We cannot
say that either of the approaches is
better—both are needed to satisfy
different roles. In this document I have
concentrated on formal analysis, as
requested. It is easier to find general methods and approaches for this type of analysis that can be described and applied in many situations.
Presentation and analysis are not the
same. The method of presenting results
depends on the nature of the result, the
story they are to tell, and the audience. I
am not aware of any work that shows
that literate farmers find it easier to
interpret graphs than numerical
information; indeed, it seems likely that
a simple numerical table may be more
familiar than a quantitative graph.
The steps in analysis of any data set can
be summarized as:
1. Define the analysis objectives. These
drive the rest of the analysis. It is
impossible to carry out a good analysis
without clear objectives. Often the key
graphs and tables can be defined at this
stage, even without the results with
which to fill them in.
2. Prepare the data. Data sets will have to
be entered and checked, suitable
transformations made (e.g., to dry
weight per unit area), relevant
information from different sources (e.g.,
farm household data and plot level
yields) extracted to the same file, and so
on.
3. Exploratory and descriptive analysis.
The aim is to summarize the main
patterns and notice further patterns
that may be relevant.
4. Formal statistical analysis. The aim is to
add measures of precision and provide
estimates from complex situations.
5. Interpretation and presentation.
Iteration between the steps will be
necessary. Training materials by Coe et
al. (2001) provide more information on
analysis of experiments.
A spreadsheet package such as Excel is
good for much of the descriptive
analysis. Its flexible facilities for data
selection and transformation, tabulation,
and graphics are useful. However,
dedicated statistical software is needed
for the analyses described here—they
cannot be done in Excel. There are
several packages with almost equivalent
facilities. All examples given in this
paper use Genstat (2000)—I often find it
most convenient and easiest to
understand, particularly as methods for
different problems can be addressed
with a similar set of commands. The key
commands used to produce each
analysis are included in the text with
their output. SPSS is widely used by
social scientists but is not particularly
useful for the analyses described here.
Examples to Illustrate
Analysis Methods
1. Soil fertility under
agroforestry in Malawi
This is not a breeding trial but is
included because the design is typical of
many participatory on-farm trials. Three
soil fertility strategies are compared over
a number of years:
g  Mixed intercropping of maize and gliricidia
s  Relay planting of maize and sesbania
c  The control of continuous maize
Forty-one farmers each compared the
control with one or both of the other
treatments. Crop yield is the response of
interest. A number of covariates were
measured at the plot or farm level to
help understand the reasons for
variation across farms.
2. Maize varieties in Zimbabwe
This was a “baby” trial.1 Twelve maize
varieties were compared. A total of 146
farmers in 25 different sites took part,
each testing 4 of the 12 varieties. The
varieties tested were chosen by the
researcher. Some household and field
covariates were recorded. The actual
crop yields obtained were not available
for analysis, so the examples here use
simulated yield data but the original
field design.
Average Treatment
Effects
Example 1
The starting point for the analysis
should be simple explorations, such as
the table of means below (created in
Excel) that gives the mean yield for each
treatment in the 1998 season, together
with the number of observations.
trt          Average of yield98   Count of yield98
c            1.73                 31
g            2.47                 39
s            2.50                 24
Grand total  2.23                 94
The formal analysis has two general
aims:
1. To improve the estimates. In this case
we know that all treatments do not
occur on each farm, so some
adjustment for farm effects may be
needed (see Example 2).
2. To provide measures of precision, i.e.,
standard errors and confidence
intervals.
This is the role of analysis of variance
and associated procedures in “regular”
designs. The exact same ideas can be
used here.
Genstat commands to complete the
analysis are:
model yield98
fit [p=a;fprob=y] name+trt
predict trt
1The mother-baby trial design comprises a central researcher-managed “mother” trial, which tests all varieties, and farmer-managed
“baby” trials, which test a subset of the varieties from the mother trial.
***** Regression Analysis *****
*** Accumulated analysis of variance ***
Change d.f. s.s. m.s. v.r. F pr.
+ name 38 168.6518 4.4382 13.39 <.001
+ trt 2 15.9187 7.9594 24.01 <.001
Residual 53 17.5691 0.3315
Total 93 202.1396 2.1735
Response variate: yield98
trt   Prediction   S.e.
c     1.6386       0.1066
g     2.6235       0.0952
s     2.3677       0.1240
Standard errors of differences (sed) can
also be found. They are:
sed
g-c 0.145
s-c 0.166
g-s 0.160
While this analysis is correct and
technically efficient, it is possibly a little
opaque! An alternative that is more
easily understood is described as
follows.
The researcher is interested in the
comparison of treatments and in the
change in performance (e.g., yield)
realizable by changing from one
treatment to another. Farmers are also
interested in this comparison, though
the criteria for comparison may be
different. Experiments are designed to
assess this change. It is therefore natural
to approach analysis of the data by
focusing on these changes. The steps are:
1. Choose a treatment pair, the
comparison of which is of interest, e.g.,
g (maize intercropped with gliricidia)
and c (monocropped maize).
2. For each farm on which this pair
occurs, calculate the difference in
response g-c.
3. Summarize this set of differences.
In this trial, 31 farms have yield data for
the pair of treatments in 1998. The
column of differences is y98g_c.
Summary statistics for y98g_c
Number of observations = 31
Number of missing values = 10
Mean = 1.008
Median = 0.841
Minimum = -0.739
Maximum = 2.712
Lower quartile = 0.400
Upper quartile = 1.766
Variance = 0.791
Standard deviation = 0.889
The mean difference of 1.008 has a standard error of √(0.791/31) = 0.16. A 95% confidence interval for the mean difference is thus 1.01 ± 2 x 0.16 = (0.69,
1.33). A statistical test of the hypothesis
of no difference in mean yield from the
two treatments would use the t statistic t
= difference/se(difference) = 1.01/0.16 =
6.3. This mean, together with its
standard error, is almost identical to that
produced by the modeling analysis
above. Differences are due to:
1. The modeling analysis uses part of the
information from three farmers with
sesbania and gliricidia but not the
control treatment. [If we can estimate g-s
and s-c within farms then we also
estimate g-c = (g-s)-(s-c)].
2. All the data is used to estimate the
residual variance, not just part of it.
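The difference-based summary just described can be sketched in a few lines of Python. The helper and the example data are hypothetical, but applying the same formula to the figures quoted above (variance 0.791, n = 31) reproduces the standard error of 0.16.

```python
import math
import statistics

def summarize_differences(diffs):
    """Summarize a set of within-farm treatment differences (e.g., g - c).

    Returns the sample size, mean difference, its standard error
    (sd / sqrt(n)), an approximate 95% confidence interval
    (mean +/- 2 se), and the t statistic for no mean difference.
    """
    n = len(diffs)
    mean = statistics.mean(diffs)
    se = statistics.stdev(diffs) / math.sqrt(n)
    return {
        "n": n,
        "mean": mean,
        "se": se,
        "ci95": (mean - 2 * se, mean + 2 * se),
        "t": mean / se,
    }

# Hypothetical g - c yield differences from a handful of farms:
example = [1.2, 0.8, -0.1, 1.6, 0.9, 1.1, 0.4, 1.5]
print(summarize_differences(example))
```

Applied to the 31 g-c differences from the trial, the same formula gives se = √(0.791/31) ≈ 0.16, as quoted above.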
The summary statistics above emphasize
that observing the mean difference is
only the beginning of the analysis. There
is considerable variation in the
difference across different farms that
needs understanding and interpreting.
This is the subject of the fifth section of
this paper.
Example 2
The first step must be to check the data
for errors and oddities. This is not
illustrated. Next, simple summaries—
numerical and graphical—are needed.
The following table gives the mean, 25%,
50%, and 75% points for each entry,
together with the number of plots from
which it was calculated. Note: Excel is
very good for this type of tabulation but
cannot give the % points.
tabulate [class= ENTRY;p=nobs,means,quant;
percent=!(25,50,75)] data=simyield
ENTRY  Nobservd   Mean   _25.0%  Median  _75.0%
  1       50     1.276   0.679   1.238   1.699
  2       47     3.077   2.344   2.909   3.639
  3       47     2.713   2.076   2.699   3.521
  4       50     3.305   2.416   3.473   4.083
  5       49     1.323   0.624   1.138   2.124
  6       49     3.371   2.594   3.195   3.792
  7       50     1.760   0.973   1.742   2.499
  8       49     2.429   1.573   2.362   3.143
  9       42     1.436   0.659   1.584   2.202
 10       51     3.448   2.708   3.401   4.380
 11       50     3.099   2.494   3.165   3.761
 12       50     1.597   0.677   1.788   2.206
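A per-entry summary like the table above (count, mean, and quartiles) can also be produced with Python's standard library. The plot records below are hypothetical, not the trial data.

```python
import statistics
from collections import defaultdict

def entry_summary(records):
    """Tabulate count, mean, and quartiles of yield for each entry.

    `records` is an iterable of (entry, yield) pairs; returns a dict
    mapping entry -> (n, mean, q25, median, q75).
    """
    by_entry = defaultdict(list)
    for entry, y in records:
        by_entry[entry].append(y)
    out = {}
    for entry, ys in sorted(by_entry.items()):
        # statistics.quantiles with n=4 returns the three quartile cut points.
        q25, median, q75 = statistics.quantiles(ys, n=4)
        out[entry] = (len(ys), statistics.mean(ys), q25, median, q75)
    return out

# Hypothetical plot records: (entry, simulated yield)
records = [(1, 0.9), (1, 1.3), (1, 1.7), (1, 1.1),
           (2, 2.8), (2, 3.2), (2, 3.6), (2, 2.4)]
for entry, (n, mean, q25, med, q75) in entry_summary(records).items():
    print(f"{entry:5d} {n:4d} {mean:6.3f} {q25:6.3f} {med:6.3f} {q75:6.3f}")
```

Note that quantile conventions differ between packages, so the quartiles need not match Genstat's to the last digit.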
Similar information is presented
graphically in a boxplot:
This particular boxplot has
highlighted some outlying
observations that should be checked
for possible errors.
These overall summaries are unlikely
to be of interest to farmers in any one
location, but the data from their
neighborhood should be very
relevant. A simple table of farm by
entry for each site may be a useful
discussion tool for this group of eight
farmers, as it highlights both the
variation between entries and
variation between farmers testing the
same things. It is likely that farmers
can provide insight into reasons for
the variation, which may help to
direct formal analysis. For example, if
farmers identify that some of the low
yields come from plots known to be
infertile, some measures of fertility
should be built into the formal
analysis. Farmers may also be able to
tell you something about the tradeoffs
between different assessment criteria,
for example, expressing satisfaction
with a variety that is not the highest
yielding, but has some other desirable
property. The data may need
converting to units that farmers can
use and understand.
[Boxplot of simyield by ENTRY (entries 1-12), with outlying observations flagged.]
SITE 1: average of simyield

ENTRY  Yields on the farms testing it   Mean
 1     2.03  1.70                       1.86
 2     3.39  2.43  2.63                 2.82
 3     1.51  2.66  2.11  1.81           2.02
 4     4.97  3.36  4.01                 4.11
 5     0.28  0.29  1.55  0.74           0.72
 6     3.06  2.35  1.96                 2.45
 7     0.45  1.82                       1.13
 8     2.00                             2.00
 9     0.00  1.77                       0.89
10     4.47  3.15                       3.81
11     2.06  2.40  1.02                 1.83
12     1.40  1.79  0.40                 1.20
FARM means (farms 1-8): 2.01  0.68  3.72  1.72  2.54  2.60  1.89  1.23; overall 2.05
These Excel tables can be rearranged to clarify important information, for example,
sorting by mean may make the table easier to read:
SITE 1: average of simyield, entries sorted by mean

ENTRY  Yields on the farms testing it   Mean
 4     4.97  3.36  4.01                 4.11
10     4.47  3.15                       3.81
 2     3.39  2.43  2.63                 2.82
 6     3.06  2.35  1.96                 2.45
 3     1.51  2.66  2.11  1.81           2.02
 8     2.00                             2.00
 1     2.03  1.70                       1.86
11     2.06  2.40  1.02                 1.83
12     1.40  1.79  0.40                 1.20
 7     0.45  1.82                       1.13
 9     0.00  1.77                       0.89
 5     0.28  0.29  1.55  0.74           0.72
FARM means (farms 1-8): 2.01  0.68  3.72  1.72  2.54  2.60  1.89  1.23; overall 2.05
The formal analysis of this data is needed to give means corrected for site and farm effects, together with correct standard errors of differences. The usual starting point would be an analysis of variance; however, the analysis has to account for expected variation due to differences between farms and sites, and the design used in the trial ended up with a rather irregular distribution of varieties across farms and sites. For example, in site 1 (see Excel table) entries occur between 1 and 5 times. The design
is described as unbalanced (differing
amounts of information about each
treatment comparison) and treatments
are non-orthogonal to farms and sites.
The latter implies that treatment means
adjusted for site and farm effects are
more realistic summaries of treatment
differences than raw means.
The need for some sort of adjustment is
evident from the Excel table for site 1.
Entries 5, 7, and 9 have low means;
however, they all occur on farm 2, which
may be a poor farm, hence depressing
the means for these entries. Calculation
of these adjusted means is described
below. The results, which only include
data for site 1, show that the ranking of
entries is changed considerably, but the
logic of the changes is visible if
compared with the data. For example,
entry 1 has the lowest adjusted mean.
The raw data shows that this entry
appeared on just two farms, both of
which seem (compared to the
performance of other entries) to be good.
Entry   Raw mean   Adjusted mean
 4      4.11       3.48
10      3.81       2.94
 2      2.82       2.46
 6      2.45       2.57
 3      2.02       2.16
 8      2.00       3.20
 1      1.86       1.00
11      1.83       1.88
12      1.20       1.32
 7      1.13       1.83
 9      0.89       1.48
 5      0.72       1.03
The adjusted means are found by fitting
a model with farm and entry effects.
This model can be used to predict the
performance of each entry on each farm,
and the adjusted mean is then the
average of these predictions across all
the farms. The commands to do this in
Genstat are simple, the last one being
needed to obtain the standard errors of
differences between adjusted means. The
results below are for the whole data set,
not just site 1.
model simyield
fit [p=*] FARM+ENTRY
predict ENTRY
rpair !P(ENTRY)
Response variate: simyield
ENTRY   Prediction   S.e.
 1      1.234        0.107
 2      2.878        0.111
 3      2.612        0.111
 4      3.328        0.107
 5      1.483        0.108
 6      3.305        0.108
 7      1.834        0.107
 8      2.423        0.108
 9      1.488        0.118
10      3.409        0.107
11      3.167        0.107
12      1.667        0.107
***** Pairwise Differences *****
***** Regression Analysis *****
Response variate: simyield
Fitted terms: Constant + FARM + ENTRY
Standard errors of pairwise differences
 1   *
 2   0.1549 *
 3   0.1560 0.1568 *
 4   0.1534 0.1569 0.1561 *
 5   0.1560 0.1561 0.1574 0.1528 *
 6   0.1543 0.1548 0.1564 0.1547 0.1535
 7   0.1535 0.1574 0.1561 0.1511 0.1562
 8   0.1533 0.1587 0.1570 0.1542 0.1565
 9   0.1565 0.1613 0.1608 0.1618 0.1617
10   0.1524 0.1565 0.1599 0.1490 0.1486
11   0.1557 0.1531 0.1518 0.1506 0.1518
12   0.1494 0.1544 0.1541 0.1548 0.1544
          1      2      3      4      5

 6   *
 7   0.1538 *
 8   0.1550 0.1512 *
 9   0.1621 0.1600 0.1612 *
10   0.1536 0.1500 0.1494 0.1639 *
11   0.1516 0.1531 0.1550 0.1643 0.1549
12   0.1524 0.1523 0.1525 0.1605 0.1543
          6      7      8      9     10

11   *
12   0.1562 *
         11     12
Note that the sed values are not all the
same due to the irregularity in the
design; however, they are close enough
for it to make sense to quote a single sed
of 0.16.
If these adjusted means are compared
with the raw means, the differences are
not as great as when we analyzed just
one site. The means are averages over a
greater number of farms, so the effects of
“good” and “bad” farms on individual
means tend to cancel out.
Entry   Raw mean   Adjusted mean
10      3.45       3.41
 6      3.37       3.31
 4      3.30       3.33
11      3.10       3.17
 2      3.08       2.88
 3      2.71       2.61
 8      2.43       2.42
 7      1.76       1.83
12      1.60       1.67
 9      1.44       1.49
 5      1.32       1.48
 1      1.28       1.23
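The recipe behind the adjusted means (fit additive farm and entry effects, predict every entry on every farm, average the predictions over farms) can be sketched in Python. This is an illustration on made-up data, not the trial data above, and the alternating-averages fit is just one simple way to reach the least-squares solution that Genstat computes directly.

```python
import statistics
from collections import defaultdict

def adjusted_entry_means(plots, sweeps=200):
    """Entry means adjusted for farm effects under the additive model
    yield = farm effect + entry effect + residual.

    `plots` is a list of (farm, entry, yield) tuples. Farm and entry
    effects are fitted by alternately averaging residuals (block
    coordinate descent on the least-squares criterion); the adjusted
    mean of an entry is its prediction averaged over all farms.
    """
    farms = sorted({f for f, _, _ in plots})
    entries = sorted({e for _, e, _ in plots})
    alpha = {f: 0.0 for f in farms}   # farm effects
    beta = {e: 0.0 for e in entries}  # entry effects
    for _ in range(sweeps):
        by_farm = defaultdict(list)
        for f, e, y in plots:
            by_farm[f].append(y - beta[e])
        alpha = {f: statistics.mean(v) for f, v in by_farm.items()}
        by_entry = defaultdict(list)
        for f, e, y in plots:
            by_entry[e].append(y - alpha[f])
        beta = {e: statistics.mean(v) for e, v in by_entry.items()}
    abar = statistics.mean(alpha.values())
    return {e: abar + beta[e] for e in entries}

# Made-up example: entry B is tested only on the poorest farm (farm 2).
plots = [(1, "A", 3.0), (2, "A", 1.0), (2, "B", 2.0), (3, "A", 2.0)]
print(adjusted_entry_means(plots))
```

In this made-up example entry B's raw mean (2.0) is depressed by its placement on the poorest farm; the adjustment recovers 3.0, mirroring the farm-2 effect discussed for site 1.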
In this case the model could also have
been fitted as:
model simyield
fit [p=a] SITE/FARM+ENTRY
***** Regression Analysis *****
*** Accumulated analysis of variance ***
Change d.f. s.s. m.s. v.r.
+ SITE 24 189.0435 7.8768 16.57
+ SITE.FARM 121 327.6509 2.7079 5.70
+ ENTRY 11 289.1360 26.2851 55.28
Residual 427 203.0184 0.4755
Total 583 1008.8488 1.7304
This analysis of variance can be
interpreted in the usual way, and shows
that some of the between-farm variation
actually occurs between sites. In other
words, farms within a site tend to be
more similar than farms on different
sites, as expected.
The analysis presented above is valid;
however, it does not capture all of the
information in the data and hides some
of the structure. An alternative approach
is to treat sites and farms within sites as
if there were a random selection from
those available, and to use a model that
describes this. REML procedures handle
these problems and are easy to use in
Genstat.
VCOMPONENTS [FIXED=ENTRY] RANDOM=SITE/FARM
REML[PRINT=model,components,waldTests,means;
PSE=differences] simyield
The option FIXED=ENTRY specifies that we
want to estimate separate means for each
of the entries. The parameter
RANDOM=SITE/FARM tells Genstat that
there are sites that are expected to vary
and there are farms within each site that
also vary. Genstat automatically adds the
plot level or residual variance, but this
could be explicitly put in if the data set
had another factor labeled PLOT by
specifying RANDOM=SITE/FARM/PLOT. The
output is shown below.
Note that the trial was originally
planned with a “replicate” being a set of
all of the varieties (spread across three
farms) with three replicates per site.
However, due to a lack of available land
as well as some mistakes, this is not how
the design was implemented. Replicates
therefore do not correspond to any
physical source of variation in the
experiment, and thus it does not make
much sense to include them in the
analysis. On the other hand, both sites
and farms correspond to physical layout
factors that could reasonably be expected
to influence results, so these must be
allowed for.
***** REML Variance Components Analysis *****
Response Variate : simyield
Fixed model : Constant+ENTRY
Random model : SITE+SITE.FARM
Number of units : 584
* Residual term has been added to model
*** Estimated Variance Components ***
Random term Component S.e.
SITE 0.2516 0.0992
SITE.FARM 0.3535 0.0616
*** Residual variance model ***
Term Factor Model(order) Parameter Estimate S.e.
Residual Identity Sigma2 0.475 0.0325
*** Wald tests for fixed effects ***
Fixed term Wald statistic d.f. Wald/d.f. Chi-sq prob
* Sequentially adding terms to fixed model
ENTRY 663.07 11 60.28 <0.001
* Message: chi-square distribution for Wald tests is an asymptotic approximation (i.e.,
for large samples) and underestimates the probabilities in other cases.
*** Table of predicted means for Constant ***
2.455 Standard error: 0.1165
*** Table of predicted means for ENTRY ***
ENTRY 1 2 3 4 5 6 7 8
1.308 2.984 2.681 3.369 1.495 3.377 1.858 2.478
ENTRY 9 10 11 12
1.528 3.469 3.205 1.704
Standard error of differences:
Average 0.1510
Maximum 0.1585
Minimum 0.1457
Average variance of
differences: 0.02281
The first part of the output reports
variance components, which are
interpreted in the next section.
The Wald test is equivalent to the F-test
for treatment effect in the usual anova.
The “highly significant” effect says that
there are real differences between these
12 variety means.
The table of predicted means gives
means for each entry adjusted for farm
and site effects. In this case most of the
means are close to the unadjusted
means, however, this will not always be
so. The adjustments allow for the fact
that some farms are better (produce
higher average yields) than others.
Entries that are tested on “good” farms
will have their means biased upwards
compared with entries tested on “bad”
farms. In this design each entry is tested
on about 50 farms, so the good and bad
farms tend to cancel out; however, if
there were fewer farms this would not
be the case. The predicted means are
those that should be reported and
interpreted, not the raw means
presented earlier.
The sed values for comparing predicted
means are not all equal, so Genstat
reports the minimum, maximum, and
average. They are not equal because
different pairs of means are compared
with different precision. For example,
counting shows that entries 1 and 2
occur together on the same farm 14
times, whereas entries 9 and 10 occur
together on the same farm only 5 times.
We would therefore expect the sed for
comparing entries 1 and 2 to be lower
than that for comparing 9 and 10. In this
case the range in sed values is not large,
so we do not go far wrong if the average
(or, more conservatively, the maximum)
is used.
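Counting how often each pair of entries meets on the same farm is a quick design check. This sketch, on a hypothetical layout, shows the kind of tally behind the "14 times" and "5 times" figures above.

```python
from collections import defaultdict
from itertools import combinations

def pair_concurrences(plots):
    """Count, for every pair of entries, the number of farms on which
    both were tested. Pairs that meet on more farms are compared with
    smaller standard errors of difference.
    `plots` is an iterable of (farm, entry) pairs."""
    entries_on_farm = defaultdict(set)
    for farm, entry in plots:
        entries_on_farm[farm].add(entry)
    counts = defaultdict(int)
    for entries in entries_on_farm.values():
        for pair in combinations(sorted(entries), 2):
            counts[pair] += 1
    return dict(counts)

# Hypothetical baby-trial layout: each farm grows a subset of entries.
plots = [(1, "e1"), (1, "e2"), (2, "e1"), (2, "e3"), (3, "e1"), (3, "e2")]
print(pair_concurrences(plots))
```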
The output does not contain information
that indicates which entries differ from
each other; it only shows that there are
some overall variety differences. We
have not included any information
about possible differences between
entries in the analysis, so the only
possibility would be an analysis based
on ignorance, for example, one with
letters attached to varieties deemed to be
not significantly different from each
other. There are both technical and
philosophical problems with this
approach and it should be avoided.
Suppose that the entries came from three
groups, depending on pedigree, as
follows:
Group a b c
Entry 1, 5, 7, 9, 12 4, 10, 11 2, 3, 6, 8
Then we can look for differences
between and within groups by replacing
the fixed model by FIXED=GROUP/ENTRY.
*** Wald tests for fixed effects ***
Fixed Wald
term statistic d.f. Wald/d.f. Chi-sq
prob
* Sequentially adding terms to fixed model
GROUP 602.80 2 301.40 <0.001
GROUP.ENTRY 60.27 9 6.70 <0.001
* Message: chi-square distribution for Wald
tests is an asymptotic approximation (i.e.,
for large samples) and underestimates the
probabilities in other cases.
*** Table of effects for GROUP ***
GROUP a b c
0.000 2.061 1.676
Standard error of differences:
Average 0.1506
Maximum 0.1521
Minimum 0.1490
The Wald tests show that there is
considerable variation between groups
of entries, but still some remaining
variation between entries within a
group. The table of effects for GROUP
summarizes the difference between
groups—entries in group b have mean
yields 2.06 higher than those in group a.
Comparing approaches
In Example 1 we based an analysis of the
difference between yields of two
treatments on either a linear model or
the set of differences within each farm.
The two methods produced almost
identical results. So why not use the
difference method illustrated in
Example 2? Some of the reasons are
discussed below.
Of the three treatments in Example 1,
there are three pairs of treatments that
could be used to form differences, hence,
we might repeat the analysis three times.
These analyses are not independent but
that does not matter. However, with the
12 treatments in Example 2 there are 12 x
11/2 = 66 pairs that we could choose to
make differences. Analysis of all these
would not only be tedious, it would
involve a lot of repetition (there are only
11 df in 12 treatments). But which subset
of pairs should be chosen?
The set of treatments on any farm is
small—only 4 out of 12. Thus, for
example, treatment 1 occurs on 50 farms
and treatment 2 on 47, yet they occur
together on only 14. So if we work with
the entry 1-entry 2 difference, we would
use data from just 14 farms. However
there is a lot more information about the
two treatments that is reflected in the
differing sed values from the two
approaches. Modeling gave a sed of
0.155 for entry 1-entry 2 and the
difference method gave a sed of 0.180.
This difference may seem small but
equates to a 42% increase in trial size.
Other limitations of the difference
methods will be described later.
The difference between the two analyses
(i.e., between the analysis that takes
farms and sites as fixed and the REML
analysis, which takes farms and sites as
random) lies in what can be reasonably
assumed about farm and site differences.
If they are slightly different, but we can
make no realistic assumptions about the
nature of those differences, then they
should be considered fixed. This means
that each site or farm has its own
characteristic mean, unconnected with
any other, which has to be estimated.
Information on treatment differences
then comes from differences within each
farm. However, if sites or farms can be
considered a random sample from the
set of possible sites or farms, and have
effects which roughly follow a normal
distribution, then we estimate the
variance of that normal distribution.
This changes the estimates of the
treatment effects because between-farm
and between-site information is
recovered. The source of this
information can be understood as
follows: if all farms that had treatment 1
had a high mean, and all those that had
treatment 2 had a low mean, it could be
concluded that treatment 1 is better than
treatment 2. If farms really are a random
sample, however, then treatment 1 is
unlikely to end up on all of the best
farms by chance. Hence some
information from the farm effects needs
to be added to our evidence that
treatment 1 has a higher mean than
treatment 2. The REML method
combines this information with the
within-farm information, which
modifies the estimates of treatment
effects and sed values compared with
the earlier fixed effect analysis. If the
assumptions of the random site and
farm effects are realistic, then this
analysis will always be more efficient.
Understanding Variation
and Genotype x
Environment Interaction
Example 2
The analysis above has produced
estimates of variance components as
follows:
Component Estimate Std error
SITE 0.2516 0.0992
FARM 0.3535 0.0616
PLOT or residual 0.4750 0.0325
What do these tell you?
The model used to analyze the data, as
specified in the VCOMPONENTS command, is:
yield = mean + site effect + farm effect +
variety effect + residual
The residual is thus the deviation of an
individual plot yield from the average
for that site, farm, and variety. It
encompasses all of the unexplained
variation from plot to plot, due to local
environmental effects (soil, pests),
management, measurement error, and so
on. The variance of 0.475 means that the
standard deviation of this plot-to-plot
variation is √0.475 = 0.69. If the data have
an approximately normal distribution, then
most observations lie within 2 sd of the
mean. Thus the plot-to-plot variation
represents variation of approximately
±1.4 t/ha about the mean for a farm growing a
uniform variety. This is a typical level of
variation in such trials.
The farm variance can similarly be
interpreted. It shows how much the
average yield for a very large number of
plots varies between farms within the
same site.
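The arithmetic behind these interpretations can be checked directly from the variance components above:

```python
import math

# Variance components from the REML analysis above (units: (t/ha)^2).
components = {"SITE": 0.2516, "FARM": 0.3535, "PLOT": 0.4750}

# Plot-to-plot standard deviation, and the ~95% range under normality.
plot_sd = math.sqrt(components["PLOT"])
print(round(plot_sd, 2))      # 0.69 t/ha
print(round(2 * plot_sd, 1))  # 1.4: most plots lie within +/-1.4 t/ha of the farm mean

# Farm-to-farm standard deviation: spread of farm mean yields within a site.
farm_sd = math.sqrt(components["FARM"])
print(round(farm_sd, 2))      # 0.59 t/ha
```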
Explaining variation—interaction
and risk
Example 1. In the last section we
analyzed Example 1 by taking the 31
differences in yield for g-c and looking at
their mean and variation. Here I take this
analysis further.
The mean difference of 1.01 t/ha is
naturally of interest in some analyses, and
this is the quantity most often reported,
together with a proud statement that it is
“significantly greater than zero”. This is
not of interest, however, to an individual
farmer. A farmer’s decision on whether to
use g rather than c will depend on many
things, whereas the yield component of
the decision will be based on the yield
increase he/she might achieve on his/her
farm. In the absence of any other
information, the mean is the best estimate
of what this might be, but there is, of
course, a lot of variation around the
mean. This variation is an indication of
the level of risk associated with a mean-
based decision. In the figure below, the
risk of obtaining a yield increase less than
any specified amount is plotted. There is
an approximately 10% chance that a farmer
will achieve a lower yield with g
than with c, and a 55% chance of
achieving an increase of less than 1 t/ha;
however, 20% of farmers achieved an
increase of more than 2 t/ha. A simple
model for the variation is obtained by
assuming a normal distribution, also
shown on the graph. It is not a
particularly good fit but still has some
value, which is explained later. Note that
if there were many more than 31 farmers
in the study, we would expect a better
(more precise) estimate of the mean
difference between g and c, but no
reduction in the variation in this
difference across farms. More farms
would give a better estimate of the
chance of achieving a lower yield with g
than c, but it would still be around 10%.
Knowing what distinguishes a +2 ton
farmer from a +0 ton farmer is important,
both for the farmer’s decision making
and the researcher’s understanding.
[Figure: probability of a g-c yield difference (t/ha) below a given value, showing the empirical data and a fitted normal model.]
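The risk curve in the figure can be computed directly from the set of farm-level differences: the empirical risk at x is the fraction of farms with a difference below x, and the normal model uses the sample mean and standard deviation. The differences below are hypothetical, for illustration only (the actual 31 farm values are not reproduced here):

```python
import math

# Hypothetical g-c yield differences (t/ha); the actual 31 farm values
# from the trial are not reproduced here.
diffs = [-0.4, 0.1, 0.3, 0.5, 0.8, 0.9, 1.0, 1.2, 1.5, 2.1, 2.4]

n = len(diffs)
mean = sum(diffs) / n
sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))

def empirical_risk(x):
    """Fraction of farms achieving a difference below x."""
    return sum(d < x for d in diffs) / n

def normal_risk(x):
    """Risk under the fitted normal model: P(difference < x)."""
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

print(round(empirical_risk(0.0), 2))  # 0.09: chance of doing worse with g than c
print(round(normal_risk(mean), 2))    # 0.5 at the mean, by construction
```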
Analyzing Data from Participatory On-Farm Trials
An approach to the problem should be
clear. We have a set of 31 differences and
we want to know what determines them.
Hypotheses of possible causes may come
from farmers or researchers. The
hypotheses are tested by collecting
suitable data and statistical analysis. In
Coe 2 (this proceedings), slope and cec
(cation exchange capacity) are
hypothesized causes of the variation in
this example, so we can explore evidence
for this in the data. Slope, in this case, is
a categorical variable. The boxplot below
shows little evidence of a consistent
difference in the size of g-c for different
slope categories. The scatter plot of g-c
against cec does not show a clear
relationship, but does show some
outlying points that could be followed
up. For example, the farm in the top
right of the scatter used fertilizer, which
suggests further ideas for investigation.
A formal statistical analysis would now
use standard regression modeling
approaches to quantify any effects. If y_ij
is the yield on farm i under treatment j,
then the differences being analyzed are:

d_i = y_ig - y_ic

with variance σ²_d. This is the variation
reflected in the above graph and in the
simple risk model.
A regression model to look at the effect of
a farm-level covariate x would then be:

d_i = c + b_gc x_i + e_i

Here b_gc is the regression effect for the
g-c difference, and e_i is the residual,
with variance σ²_r. This measures the
variation in d that remains unexplained,
or the risk that remains even with
knowledge of the covariate. Again, if a
normal distribution model is acceptable,
then the parameters of the regression model
together with σ²_r allow prediction of the risk of
yield changes associated with switching
from c to g, conditional on the value of
the covariate.
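The fitted regression and the conditional risk calculation can be sketched as follows. The slope, intercept, and residual standard deviation are estimated by least squares, and the remaining risk is read off a normal distribution centered on the fitted value. The covariate and difference values here are hypothetical:

```python
import math

# Hypothetical farm-level data: covariate x_i (e.g., cec) and difference d_i = y_ig - y_ic.
x = [3.5, 4.0, 4.8, 5.5, 6.1, 6.8, 7.4, 8.2]
d = [-0.3, 0.9, 0.1, 1.4, 0.8, 1.9, 1.2, 2.3]

# Least-squares fit of d_i = c + b_gc * x_i + e_i.
n = len(x)
xbar, dbar = sum(x) / n, sum(d) / n
b_gc = sum((xi - xbar) * (di - dbar) for xi, di in zip(x, d)) / sum(
    (xi - xbar) ** 2 for xi in x)
c = dbar - b_gc * xbar
residuals = [di - (c + b_gc * xi) for xi, di in zip(x, d)]
sigma_r = math.sqrt(sum(e * e for e in residuals) / (n - 2))

def risk_below(threshold, x_new):
    """P(d < threshold | covariate = x_new) under the fitted normal model."""
    mu = c + b_gc * x_new
    return 0.5 * (1.0 + math.erf((threshold - mu) / (sigma_r * math.sqrt(2.0))))

# Risk of no gain (d < 0) is substantial on a low-cec farm, negligible on a high-cec farm.
print(round(risk_below(0.0, 3.5), 2), round(risk_below(0.0, 8.0), 2))
```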
The usual analysis of variance model
for these data, with treatments and farms
in the design, would be:

y_ij = c + f_i + t_j + e_ij

with the variance of the residuals σ².
Then the g-c differences are:

d_i = t_g - t_c + e_ig - e_ic

The connection between the analysis of
variance approach and the analysis of
plotwise differences now becomes clear:
the variance of the differences is σ²_d = 2σ².
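The relation σ²_d = 2σ² holds because each difference contains two independent plot residuals, whose variances add, while the farm effect cancels. A quick simulation (with arbitrary settings) confirms it:

```python
import random
import statistics

random.seed(1)
sigma2 = 0.25  # within-farm plot residual variance (arbitrary choice)

# Many farms, each with one plot of g and one of c sharing the same farm effect.
diffs = []
for _ in range(20000):
    farm = random.gauss(0.0, 1.0)  # farm effect: cancels in the difference
    y_g = farm + 1.0 + random.gauss(0.0, sigma2 ** 0.5)  # treatment effect t_g = 1.0
    y_c = farm + 0.0 + random.gauss(0.0, sigma2 ** 0.5)  # treatment effect t_c = 0.0
    diffs.append(y_g - y_c)

# Variance of the differences is close to 2 * sigma2 = 0.5, not sigma2.
print(round(statistics.variance(diffs), 2))
```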
The effect of the covariate could be
included in the analysis of variance
model as:

y_ij = c + f_i + t_j + b_j x_i + e_ij
[Figures: boxplot of g-c (t/ha) by slope category; scatter plot of g-c against cec.]
Note that what we are doing here is
identifying GEI, where the G is the three
treatments and E is characterized by
slope and cec.
The term b_j x_i describes how the treatment
effect is modified on farms of different
types (i.e., with different values of the
covariate x). It is thus a treatment by farm
interaction and is often the basis of the
most useful results from an on-farm trial.
With information on such interactions we
can refine predictions and
recommendations and reduce the risk
associated with decisions based on the
data. The covariates useful for this may
be social variables (gender, household
size, etc), biophysical variables (soil type,
slope, etc), or management variables
(weeding, planting time, etc).
Note that a common misunderstanding
in experimental design is that farm x
treatment interaction cannot be detected
if only a single replicate is placed on each
farm. The types of farm x treatment
interaction that are important are those
that are structured to show consistent
patterns across farms. These can be
explained and predicted in terms of
explanatory variables, and can be
estimated from designs with no more
than one replicate per farm, as shown
here, though this does not mean that
design is unimportant. Also, more
effective designs can be used if it is
known which covariates will be of
interest before the trial starts.
The analysis above identifies and
describes what has always been known
by breeders as GEI. The classical
approach to this has been a “complete”
trial in a number of locations, each
representing different environments.
Once a variety x location interaction is
detected, an attempt is made to find
which aspects of the environmental
variation are responsible for the
interaction. The approach used here
allows GEI to be detected and described
when only a subset of the genotypes is
tested in a large number of locations,
each genotype in an unreplicated trial.
The approach does require that the
locations be characterized by
measurement of appropriate covariates.
One reason for undertaking participatory
breeding trials is that critical GEI is due
to varying social or economic
environments. For example, it is often
hypothesized that men and women will
favor different varieties, or that farmers’
assessment of genotypes will depend on
level of market integration. These types
of interaction can be detected and
described as long as the design covers
sufficient variation, and suitable
indicators of the social or economic
variables are recorded.
Summary
The key points made in this paper are:

• Analysis of data from participatory trials can and should use a combination of exploratory/descriptive methods and formal statistical modeling.

• The analysis may be complicated by the irregular layout of the experiment and multiple layers of variation introduced by the hierarchical design.

• Approaching the analysis by calculation of treatment contrasts on each farm can simplify many complex problems and lead to new insights into the data; however, it can be inefficient or too repetitive if there are many treatments.

• Approaching the analysis by fitting regression models or their equivalent with multiple error terms allows many designs to be analyzed within a common framework; however, the analysis can be opaque and its estimates non-intuitive.

• The two approaches can often be made to equate.

• The most useful analysis is often one that concentrates on finding explanations for variation in treatment effects across farms.

• Variation (at any level in the design) can be interpreted as risk, not just as unexplained noise.
References

Coe, R., and S. Franzel. 2000. Keeping research in participatory research. Presented at the 3rd International PRGA Conference, 6-11 November 2000, Nairobi, Kenya.

Coe, R., R. Stern, and E. Allen. 2001. Analysis of data from experiments. Training materials. Nairobi: SSC/ICRAF. 250 pp.

Genstat. 2000. Genstat for Windows. 5th Ed. Oxford: VSN International.

Hildebrand, P.E., and J.T. Russell. 1998. Adaptability analysis. Iowa, U.S.A.: Iowa State University Press.

Kempton, R.A., and P.N. Fox. 1997. Statistical methods for plant variety evaluation. London: Chapman and Hall.
Richard Coe
Discussion Summary
The discussion following the presentation dealt with questions on data analysis, analysis of genotype x
environment interaction (GEI), farmers’ involvement in trials, and the statistical packages available to
analyze results. In terms of data analysis, a common problem is the variation in the number of times a
given entry is included in a trial (e.g., one to four). In other words, if the performance data for a variety
was recorded only once, should this information be eliminated? The answer is no: the alpha lattice
method (REML) makes an adjustment following assessment of the robustness of different data points,
and the resulting adjusted means are more robust. It is also important to include zero as a response if,
for example, the plot matured but there was no yield, but not if it did not yield due to external factors.
Sensitivity tests can be run to determine the course of action with respect to outlying data points. An
analysis can be run with and without these, but if the data point is very influential, the cause needs to
be considered, as it may be necessary to repeat the trial. Another question was if participatory varietal
selection (PVS) trials consist of two entries (one of which is the local check), could adjustments be
made with respect to this control? The answer is no, since this would build uncertainty into the results
because the performance of the check is variable. A related question was raised on how to use the
differences in performance (yield) between entries and a control? And can these differences be used as
a comparison across varieties? Are there guidelines to use the differences? The answer is that it is
necessary to ask, “What can I see in the set of differences?” For example, look at the average and the
size of the differences, and use graphics that allow the visualization of the results.
The issue of GEI is very important. A complete table of environments, farms, and sites (locating and
enabling interpretation of crossover effects) is more appropriate for studying GEI. There are many tools
that can be used to address this issue, but first it is important to know what constitutes the
environment. This can be done using covariates, which also allow better hypothesis testing and
interpretation. It was pointed out that most analyses overfit the data by making each trial a different
environment; however, a trial samples a population of environments; it is not itself an environment. Hence,
trials should be grouped according to similarities, and the resulting groups used as the environments
for the analysis.
Farmer involvement in the interpretation and analysis of trials helps in two ways: it puts the
information in context and provides useful explanations of the results. This can be achieved with a
farmer focus group, where the results are presented and discussed. An important question is how to
present the results to farmers, particularly when the trials are very extensive and located over a large
area. This may require the involvement of local extension workers and simple representation of results
for analysis. There was discussion on whether to use simple tables, charts, or even a physical
representation of yield, e.g., bags. Bags can be cumbersome, and it was noted that tables are usually
easier for farmers to interpret than charts. It is very important that farmers understand the purpose of
the trial and what is being assessed—some sort of training may be required. Lack of understanding
may lead to the generation of inaccurate or unimportant information. Worse still, it may lead to
inappropriate actions by farmers which may invalidate the experiment, for example, by spraying one
plant to protect it, when the purpose of the experiment was to assess the resistance of two varieties to
a pest or pathogen.
The final discussion point centered on the availability of statistical programs and tools for breeders from
national agricultural research programs to conduct analyses. Many of the available analysis programs
are expensive, although countrywide licensing may be possible. It is important to assist the national
programs in accessing affordable software. Further training in the software may also be required.
Sources of Variation in
Participatory Varietal Selection
Trials with Rainfed Rice:
Implications for the Design of
Mother-Baby Trial Networks
GARY ATLIN, THELMA PARIS, AND BRIGITTE COURTOIS
Abstract

Little information has been published on the repeatability of participatory varietal selection (PVS) trials. Repeatability estimates, which can be derived from the combined analysis of trials over locations and years, are useful for determining the number of replications and the optimal blocking structure of PVS trials. Variance components were estimated from a series of upland and lowland PVS trials conducted in the states of Jharkhand and Bihar in eastern India, and used to estimate the repeatability of means. In both sets of trials the cultivar x site x year variance component was larger than the cultivar x site component, indicating that there was little specific adaptation to sites within the trial series. Participatory varietal trials conducted on-farm under farmer management were quite repeatable; replication over 5 sites was predicted to result in a repeatability of more than 0.5 in both data sets. Simulation indicated that a modest benefit is likely from the use of alpha-lattice designs when among-farm variances are large in experiments conducted using the mother-baby design, which treats farms as incomplete blocks.

Introduction

Breeders of rainfed rice in eastern India recognize the need to introduce participatory methods into their variety testing systems to improve the effectiveness of breeding programs. Performance in farmers' fields and in the breeder's nursery can be thought of as correlated traits expressed by a single genotype in separate environments. Theory developed by Falconer (1989) and extended to the analysis of plant breeding programs by Pederson and Rathjen (1981) and Atlin and Frey (1989; 1990) permits breeding strategies to be evaluated on the basis of the predicted response in the target environment resulting from selection conducted in a breeding nursery. When selection is among pure lines, this response may be modeled using the formula:
CR_T = i_S r_G √H_S √H_T σ_P    (1)

where CR_T is the correlated response in
the target environment (farmers' fields)
to selection in a breeding nursery; i_S is
the standardized selection differential
applied in the selection nursery; r_G is the
genotypic correlation between cultivar
yields in the selection and target
environments; H_S and H_T are
repeatabilities or broad-sense
heritabilities in the selection and target
environments, respectively; and σ_P is the
phenotypic standard deviation in the
target environment. When response is
being predicted for a particular target
environment, H_T and σ_P may be
considered constants. Therefore:

CR_T ∝ i_S r_G √H_S    (2)
Inspection of this relationship indicates
three important considerations for
designing breeding programs for stress
environments:
1. i_S must be maximized by screening large
populations, permitting a high selection
intensity to be achieved.

2. r_G (or accuracy) must be maximized by
ensuring that performance in the
selection environment or screening
system is highly predictive of
performance in the target stress
environments.

3. A high level of H_S (or precision) must
be achieved, typically through
replicated screening.
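These tradeoffs can be made concrete by evaluating the correlated-response formula above for two illustrative programs; all numbers are hypothetical:

```python
import math

def correlated_response(i_s, r_g, h_s, h_t, sigma_p):
    """CR_T = i_S * r_G * sqrt(H_S) * sqrt(H_T) * sigma_P."""
    return i_s * r_g * math.sqrt(h_s) * math.sqrt(h_t) * sigma_p

# On-station program: precise screening (high H_S) but weakly predictive
# of farmers' fields (low r_G). All values are hypothetical.
on_station = correlated_response(i_s=1.75, r_g=0.4, h_s=0.6, h_t=0.4, sigma_p=1.0)

# PVS program: selection in farmers' fields, so r_G is near 1, but single
# farms are noisy, so H_S is lower.
pvs = correlated_response(i_s=1.75, r_g=1.0, h_s=0.3, h_t=0.4, sigma_p=1.0)

print(round(on_station, 2), round(pvs, 2))  # 0.34 0.61: high r_G outweighs low H_S
```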
One reason for the poor performance
characterizing many conventional
rainfed rice breeding programs is that
the research conditions are not reflective
of on-farm conditions; in other words, r_G
is low. In participatory varietal selection
(PVS) programs, the genetic correlation
between performance in the selection
and target environments is very high,
since selection is conducted in farmers'
fields (Atlin et al. 2001). Therefore, the
main factor affecting response to PVS in
programs of a particular size is H_S.
However, the scale and design of PVS
schemes needed to achieve acceptable
H_S levels are unknown, because no
information has been published on the
extent of farm to farm variation in
cultivar performance in PVS
experiments. Variance component
estimates from the analysis of PVS trials
over locations and years can be used to
estimate H of means for grain yield and
other agronomic characteristics resulting
from a given number of sites and years
of testing. These estimates can be used to
determine the scale of testing needed to
achieve adequate precision from PVS
trials and the best method of analysis for
PVS programs using the mother-baby
model, which treats individual farms as
incomplete blocks.
The International Rice Research Institute
(IRRI) has conducted PVS trials in
rainfed rice over three years in several
villages in eastern India. The original
objective of these experiments was to
compare varietal rankings within and
among groups of farmers and breeders
(Courtois et al. 2001), but the trials also
provide information on the sources of
variation for agronomic traits in PVS
trials conducted with rainfed rice. This
report presents variance components
estimated from the combined analysis of
on-farm PVS trials over farms and years
in two regions in eastern India and their
use in estimating the repeatability of
means from rainfed rice PVS trials. The
implications of these estimates for the
design of mother-baby trial networks are
considered.
Mother-baby PVS trial networks are now
being planned or implemented by
several research groups in India. The
mother-baby design has two
components: the mother trial, in which a
complete set of cultivars is evaluated in
replicated researcher-managed trials at
several locations; and the baby trials,
wherein farmers each evaluate a subset
of the cultivars tested in the mother trial.
Villages and farms within villages may
be considered separate blocking strata
within a mother-baby trial. Variation in
mean yield among farms within villages
is expected to be substantial. This
variation contributes to the variance of
cultivar means when farms are used as
incomplete blocks, and can be controlled
to some extent by designs that control
within-block variation, such as the
alpha-lattice design. In establishing these
trials, we have found that the lack of
easily accessible software for the analysis
of alpha-lattice designs is a serious
constraint. Sets of baby trials may be
analyzed as randomized-complete-block
(RCB) or completely randomized
designs, but if among-farm variance is
large, losses of precision resulting from
selecting on the basis of unadjusted
cultivar means are likely to be great. To
test this hypothesis, a simulation
exercise was also conducted to examine
the impact of yield variation among
villages and among farms within
villages on the relative effectiveness of
alpha-lattice and RCB design analyses.
Methods
Variance component estimation in
participatory varietal selection trials in
rainfed rice. Participatory variety
selection trials were conducted under
farmer management in three eastern
Indian districts in 1997-2000. Upland
cultivar trials were conducted in three
villages in southern Bihar (now
Jharkhand) in collaboration with the
Central Upland Rice Research Station
(CRURRS), Hazaribag. Lowland PVS
trials were conducted in collaboration
with Rajendra Agricultural University
(RAU), Pusa, Bihar. In each set, several
varieties were evaluated in unreplicated
trials on three or four farms over at least
two years. Details of the trials are
presented in Table 1; however, they are
more completely described by Courtois
et al. (2001). Grain yield data were
analyzed using the REML algorithm of
SAS PROC VARCOMP with a cross-
classified model, with cultivars, farms,
and years as random factors. Broad-
sense heritability or repeatability (H)
was estimated as:
H = σ²_G / {σ²_G + (σ²_GL / l) + (σ²_GY / y) + (σ²_GLY / ly)}    (1)

where σ²_G, σ²_GL, σ²_GY, and σ²_GLY are the
genotype, genotype x location, genotype
x year, and genotype x location x year
variance components, respectively, and l
and y are the number of locations and
Table 1. Description of participatory varietal selection trials in eastern India.

Location    Cooperating institution†   Ecosystem   No. of years   No. of locations   No. of genotypes   Mean yield (t/ha)
Hazaribag   CRURRS                     Upland      3              3                  12                 1.96
Pusa        RAU                        Lowland     2              3                  9                  4.21

† CRURRS = Central Upland Rice Research Station; RAU = Rajendra Agricultural University.
the number of years, respectively. It
should be noted that when estimated
from unreplicated trials, the σ²_GLY
component also contains the within-trial
plot error or residual variance.
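The repeatability formula can be checked against the estimates reported later in Tables 2 and 3; the following short calculation reproduces the Table 3 values from the Table 2 components (single-season trials, so y = 1):

```python
def repeatability(var_g, var_gl, var_gy, var_gly, l, y):
    """Repeatability H of cultivar means over l locations and y years."""
    return var_g / (var_g + var_gl / l + var_gy / y + var_gly / (l * y))

# Variance components from Table 2.
hazaribag = dict(var_g=0.13, var_gl=0.04, var_gy=0.00, var_gly=0.29)
pusa = dict(var_g=0.20, var_gl=0.01, var_gy=0.15, var_gly=0.20)

for sites in (1, 2, 5, 10):
    h_haz = repeatability(**hazaribag, l=sites, y=1)
    h_pusa = repeatability(**pusa, l=sites, y=1)
    print(sites, round(h_haz, 2), round(h_pusa, 2))  # matches Table 3
```

Note how the large σ²_GLY term is divided by the number of sites, so most of the gain in H comes from the first few additional sites.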
Simulating the predictive power
of mother-baby trials analyzed
as randomized-complete-block
versus alpha-lattice designs
A simulation was conducted using the
following model:
P_ijklm = M + Y_i + V_j + VY_ij + F(VY)_k(ij) + G_l + GY_li + GV_lj + GVY_lij + e_ijklm    (1)

where:

P_ijklm = the measurement on a plot containing genotype l on farm k in village j in year i
M = the overall mean of the trials
Y_i = the effect of year i
V_j = the effect of village j
VY_ij = the interaction between year i and village j
F(VY)_k(ij) = the effect of farm k within village j and year i
G_l = the effect of genotype l
GY_li = the interaction between genotype l and year i
GV_lj = the interaction between genotype l and village j
GVY_lij = the interaction between genotype l, year i, and village j
e_ijklm = the within-village residual
The SAS program was used to simulate
values for P, assuming all factors
random in the model. An overall mean
(M) of 2.2 t/ha was assumed. Effects
were generated with the SAS RANNOR
function, using the appropriate
variance components as function
arguments. Variance components used
in the simulation were taken from the
literature or from analyses of rice
variety trial data available at IRRI.
Three scenarios were identified
regarding the relative magnitudes of
the VY and F(VY) variances. In one
scenario, there was little variation in
mean yield among farms within
villages, but considerable variation
among villages. In another, there was
considerable variation among farms
within villages, but little variation
among villages. In the third, the
variances among villages and among
farms within villages were
approximately equal in magnitude. (It
should be noted that other estimates
might lead to different simulation
results.) The variance components used
in the simulation (listed below) are
based on estimates derived from the
combined analysis of the Philippine
Upland Rice National Cultivar Trials
for 1997-99:
σ²_Y = 2700
σ²_V = 5000
σ²_VY = 800000, 500000, or 200000
σ²_F(VY) = 200000, 500000, or 800000
σ²_G = 44600
σ²_GY = 39000
σ²_GV = 5000
σ²_GVY = 300000
σ²_e = 100000
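The data-generating step of this simulation can be sketched in a simplified form. This is not the authors' SAS program: it simulates a single year, lets each village contribute one farm growing all 16 cultivars (rather than incomplete blocks of 4), uses the 4:1 scenario components above (kg/ha scale, consistent with a 2.2 t/ha overall mean), and computes only raw means and their correlation with the true genotypic values:

```python
import random

random.seed(7)

# Variance components for the 4:1 scenario (sigma2_VY : sigma2_F(VY)); kg/ha scale.
VAR = dict(v=5000, vy=800000, f=200000, g=44600, gv=5000, gvy=300000, e=100000)
MEAN = 2200.0  # overall mean yield, kg/ha (2.2 t/ha)

def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5

def simulate(n_villages=5, n_genotypes=16):
    """One simulated year; one farm per village grows every genotype once."""
    g_true = [random.gauss(0, VAR["g"] ** 0.5) for _ in range(n_genotypes)]
    raw_means = [0.0] * n_genotypes
    for _ in range(n_villages):
        village = random.gauss(0, (VAR["v"] + VAR["vy"]) ** 0.5)  # village + village-year
        farm = random.gauss(0, VAR["f"] ** 0.5)                   # farm within village
        for g in range(n_genotypes):
            gxe = random.gauss(0, (VAR["gv"] + VAR["gvy"]) ** 0.5)
            plot = MEAN + village + farm + g_true[g] + gxe + random.gauss(0, VAR["e"] ** 0.5)
            raw_means[g] += plot / n_villages
    return g_true, raw_means

g_true, raw_means = simulate()
r = pearson(g_true, raw_means)  # analogue of the correlations reported in Table 4
print(len(raw_means), round(r, 2))
```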
Single-replicate PVS trials testing a set
of 16 cultivars in 3, 5, or 10 villages
were simulated, with 4 cultivars per
block. Alpha-lattice designs generated
by the Alphagen program were used.
Cultivar means over villages and farms
were calculated in three ways:
1. Raw means were calculated over all
villages and farms.
2. Data were standardized within farms
and then means were calculated over
farms and villages.
3. Means adjusted for lattice incomplete
block effects were calculated using the
REML option of SAS PROC MIXED,
with genotypes considered fixed and
all other effects random.
The simulation for each of the 9
conditions (3 experiment sizes x 3
estimators of variety means) was
replicated 10 times. For each run, the
correlation between simulated genotypic
values and simulated cultivar mean
yields was calculated. This correlation,
equivalent to the square root of the
heritability of cultivar means, is an easily
understood measure of the repeatability
of cultivar trials, and is more directly
related to their predictive power than is
the variance of cultivar means.
Results and Discussion
Variance component estimation
in participatory varietal selection
trials in rainfed rice
Variance components are presented in
Table 2. The relative magnitude of these
components varied greatly from trial to
trial. For the upland target environment
(TE), site variance was the largest
component, reflecting the large range in
soil quality among sites. For the Pusa
rainfed lowland TE, year to year
variances were large. Cultivar effects
were significant in two of the three TEs.
Cultivar x year interactions were small
for all three TEs. Cultivar x site
interactions were also relatively small
for all three TEs, indicating that cultivars
responded similarly across sites within
TEs. The residual error for the combined
analysis, which contains both the
cultivar x year x site and within-site
residuals, was large in all cases,
indicating that within-site soil
heterogeneity and/or random variation
in cultivar ranking among sites and
years were the most important sources
of noise in the trials.
Using the variance components in Table
2, repeatability estimates were calculated
for means estimated from 1, 2, 5, or 10
trials for the 2 trial sets in which
genotypic variation for grain yield was
significant (Table 3). In both cases,
means estimated from a single trial had
very low repeatability. Replication over 5
sites increased predicted repeatability to
more than 0.5 in both data sets.
In summary, these experiments indicate
that specific adaptation to sites within
the TEs served by the CRURRS and RAU
breeding programs appears to be
limited. Site to site and year to year
variability among PVS trials was large,
Table 2. Variance component estimates from participatory varietal selection trials in eastern India.

Location    σ²_Y    σ²_L    σ²_YL   σ²_G    σ²_GY   σ²_GL   σ²_GLY
Hazaribag   0.02    1.03    0.00    0.13    0.00    0.04    0.29
Pusa        1.36    0.13    0.08    0.20    0.15    0.01    0.20
but rank changes across sites were
limited. Replication of trials over 3-5
sites or farms may be sufficient to
achieve useful levels of repeatability in
PVS trials.
Simulating the predictive power
of mother-baby trials analyzed
as randomized-complete-block
versus alpha-lattice designs
The results of the simulation are
presented in Table 4. For trials
comprising 3 village replicates, the
correlation between genotype value and
cultivar means estimated from lattice-
adjusted data ranged from 0.45 to 0.51.
The correlation increased to
approximately 0.6 when the number of
village replicates increased from 3 to 5,
but no increase was observed from
increasing the number of villages from 5
to 10. If the variances used in this
simulation are representative of rainfed
rice trials in eastern India, mother-baby
networks consisting of as few as 3-5
village replicates may be adequate for
progress from PVS to be made. For all
three ratios of σ²_VY : σ²_F(VY) and all trial
sizes, the correlation between genotypic
value and the means estimated from
trials was greater for lattice-adjusted
means than for raw means.
Standardization within farms did not
consistently improve the relationship
between phenotypic and genotypic
value. The increase in selection response
resulting from the use of lattice designs
is expected to be approximately r_lattice / r_raw,
where r_lattice is the correlation
between genotypic value and cultivar
means estimated with lattice adjustment,
and r_raw is the correlation for raw means.
This ratio is roughly equal to the relative
selection response that can be expected
from lattice adjustment, compared with the
analysis of raw means. r_lattice / r_raw was
approximately 1.1-1.3 for all simulations,
indicating that lattice adjustment may be
advantageous even when the number of
village replicates is quite large if there is
considerable variation in the mean
yields of farms.
Conclusions
Participatory varietal selection trials
produce repeatable estimates of rainfed
rice cultivar means. In the experience of
the authors, the repeatability of grain
yield estimates from the farmer
managed trials was not markedly lower
Table 4. The effect of trial number, method of estimating means, and the ratio σ²_VY : σ²_F(VY) on the correlation (r) between genotypic and phenotypic values in simulated mother-baby trials, eastern India.

No. of trials   σ²_VY : σ²_F(VY)   Raw means   Standardized means   Lattice-adjusted means
3               4:1                0.43        0.39                 0.45
3               1:1                0.37        0.48                 0.48
3               1:4                0.38        0.46                 0.51
5               4:1                0.57        0.56                 0.64
5               1:1                0.63        0.69                 0.72
5               1:4                0.61        0.58                 0.63
10              4:1                0.54        0.60                 0.64
10              1:1                0.59        0.64                 0.67
10              1:4                0.63        0.62                 0.67
Table 3. Predicted repeatability (H) of cultivar means
estimated from 1, 2, 5, or 10 unreplicated on-farm
trials conducted in a single season in eastern India.
H
Location 1 site 2 sites 5 sites 10 sites
Hazaribag 0.28 0.44 0.66 0.80
Pusa 0.36 0.44 0.51 0.54
than for on-station trials. It was also
found that rainfed rice PVS trials
conducted using the mother-baby model
generate estimates of cultivar mean
yields with useful precision from testing
as few as five farms per cultivar.
Random cultivar x site x year interaction
was the most important source of
genotype x environment interaction
(GEI) in eastern Indian rainfed rice.
There was no evidence of village-specific
adaptation. This is consistent with on-
station research on GEI in rainfed rice,
which also indicates that cultivar x site x
year variances are the largest GEI
component. Cultivar x site interactions
appear to be rare across sites at similar
levels in the toposequence and within
geographic regions of the scale served by
the CRURRS and RAU breeding
programs. The effect of the large cultivar
x site x year component of the
phenotypic variance can be reduced, and
H concomitantly increased, by increasing
the number of sites and years of testing.
Because small rainfed rice breeding
programs often cannot easily increase the
number of sites they handle, they should
consider replication over years to increase
the precision of variety trials.
If variance among farms within villages
is large, simulation indicates that the
alpha-lattice designs can significantly
increase repeatability. Standardization
within farms was not effective in
increasing precision. Freely available,
easy to use software for the generation
and analysis of alpha-lattice designs is
needed by researchers from national
agricultural research programs if the
mother-baby design is to be widely and
effectively adopted.
References

Atlin, G.N., M. Cooper, and Å. Bjørnstad. 2001. A comparison of formal and participatory breeding approaches using selection theory. Euphytica 122:463-475.

Atlin, G.N., and K.J. Frey. 1989. Breeding crop varieties for low-input agriculture. American Journal of Alternative Agriculture 4:53-57.

Atlin, G.N., and K.J. Frey. 1990. Selecting oat lines for yield in low-productivity environments. Crop Science 30:556-561.

Courtois, B., B. Bartholome, D. Chaudhary, G. McLaren, C.H. Mishra, N.P. Mandal, S. Pandey, T. Paris, C. Piggin, K. Prasad, A.T. Roy, R.K. Sahu, V.N. Sahu, S. Sarkarung, S.K. Sharma, A. Singh, H.N. Singh, O.N. Singh, N.K. Singh, R.K. Singh, R.K. Singh, S. Singh, P.K. Sinha, B.V.S. Sisodia, and R. Thakur. 2001. Comparing farmers and breeders rankings in varietal selection for low-input environments: a case study of rainfed rice in eastern India. Euphytica 122(3):537-550.

Falconer, D.S. 1989. Introduction to quantitative genetics. 3rd Ed. London: Longman.

Pederson, D.G., and A.J. Rathjen. 1981. Choosing trial sites to maximize selection response for grain yield in spring wheat. Australian Journal of Agricultural Research