
Genetic Epidemiology 34:299–308 (2010)

Detecting Interacting Genetic Loci with Effects on Quantitative Traits

Where the Nature and Order of the Interaction Are Unknown

Joanna L. Davies,1* Jotun Hein,1 and Chris C. Holmes1,2

1Department of Statistics, University of Oxford, Oxford, United Kingdom

2MRC Harwell, Harwell Science and Innovation Campus, Oxfordshire, United Kingdom

Standard techniques for single marker quantitative trait mapping perform poorly in detecting complex interacting genetic

influences. When a genetic marker interacts with other genetic markers and/or environmental factors to influence a

quantitative trait, a sample of individuals will show different effects according to their exposure to other interacting factors.

This paper presents a Bayesian mixture model, which effectively models heterogeneous genetic effects apparent at a single

marker. We compute approximate Bayes factors which provide an efficient strategy for screening genetic markers (genome-

wide) for evidence of a heterogeneous effect on a quantitative trait. We present a simulation study which demonstrates that

the approximation is good and provide a real data example which identifies a population-specific genetic effect on gene

expression in the HapMap CEU and YRI populations. We advocate the use of the model as a strategy for identifying

candidate interacting markers without any knowledge of the nature or order of the interaction. The source of heterogeneity

can be modeled as an extension. Genet. Epidemiol. 34:299–308, 2010.

© 2009 Wiley-Liss, Inc.

Key words: Bayesian mixture model; gene-gene interaction; gene-environment interactions; Laplace approximation

Additional Supporting Information may be found in the online version of this article.

*Correspondence to: Joanna L. Davies, Department of Statistics, 1 South Parks Rd, University of Oxford, Oxford OX1 3TG, UK.

E-mail: davies@stats.ox.ac.uk

Received 11 May 2009; Revised 28 August 2009; Accepted 12 September 2009

Published online 18 December 2009 in Wiley InterScience (www.interscience.wiley.com).

DOI: 10.1002/gepi.20461

INTRODUCTION

Complex quantitative traits may be caused by multiple

interacting genetic, environmental and epigenetic factors.

A common strategy for identifying genetic factors affect-

ing a phenotype is to fit a model of association to the

phenotype separately at single genetic markers. This

approach is popular because it can be easily implemented

genome-wide and consequently it is now routinely done

with hundreds of thousands of single nucleotide poly-

morphisms (SNPs) or micro-satellite markers. However,

this approach performs poorly when the trait is also

influenced by other genetic markers and/or environmen-

tal factors. In such circumstances, at a single genetic locus

heterogeneous effects may be apparent according to the

exposure of individuals to other risk factors.

In this paper, we present a Bayesian mixture model which effectively models the genetic influence on a quantitative trait by considering the data sample as the union of two distinct groups of individuals: those for

whom there is a genetic effect on the quantitative trait and

those for whom there is no genetic effect on the trait. In

our modeling framework, there are three hypotheses to be

compared:

H0 The null hypothesis: There is no association between genotype at the genetic marker and the quantitative trait for any individuals.

Hm The mixture hypothesis: There is an association between genotype at the genetic marker and the quantitative trait for a proportion of individuals, due to interaction with an unspecified source; the remaining proportion shows no effect.

H1 The homogeneous effect hypothesis: There is an

association between genotype at the genetic marker

and the quantitative trait for all individuals.

A motivating example is demonstrated in Figure 1. The

figure shows the quantitative trait along the y-axis

(which is a gene expression intensity in this instance)

against genotype at a single marker along the x-axis

(coded as zero, one or two according to the number of risk alleles). The sample consists of individuals from the CEU and YRI HapMap populations; individuals are colored in

the figure according to their population background

(plotting characters are explained in the results but are

not important here). In this instance, it is clear that this

marker has an additive effect on the expression of the gene

for CEU individuals but not YRI individuals [Zhang et al.,

2008]. When the sample is considered as a whole, without

knowledge of the population background, heterogeneity is

apparent. In this example the source of the heterogeneity is

known and hence it can be used as a validating example.

In the model fitting and hypothesis testing framework we

describe, the labeling of individuals and the source of heterogeneity are unknown but can subsequently be inferred.



We present a way of efficiently fitting Bayesian models

for each of these hypotheses and compute approximate

Bayes factors and posterior model probabilities for

inference. Genetic markers for which there is strong

evidence in favor of the mixture model can be considered

candidates for being involved in an interaction (genetic

and/or environmental). Crucially, our model does not

depend on any prior knowledge of the nature or order of

the interaction. The probability of detecting an interacting

locus is dependent only on the total proportion of

individuals who exhibit the effect and the magnitude of

the effect. Consequently, the model can be used to efficiently and effectively identify genetic candidates involved in common high-order interactions.

Existing strategies for identifying loci involved in

interactions are largely based on direct modeling of the

interaction. This is typically restricted to low order gene-

tic interactions, for example Marchini et al. [2005]

advocate exhaustive interaction modeling for pairs of

genetic markers. The magnitude of the problem increases

exponentially as the order of the interaction increases,

motivating the use of screening strategies to restrict the

number of interaction models considered, for example

Murcray et al. [2009] demonstrate that two-stage screen-

ing strategies can be more powerful for detecting gene-

environment interactions. However, such strategies can

only be implemented when data is available for the

appropriate interacting genetic marker or environmental

factor.

METHODS

THE MIXTURE MODEL

We propose a mixture model to describe the behavior of

a quantitative trait with respect to a single genetic locus. At

each locus the sample of individuals is assumed to be

composed of two groups: those for whom the variability of

the quantitative trait is associated with genetic variation

and those for whom it is not. Within the group of affected

individuals it is assumed that the genetic effect is

homogeneous. The type of the genetic effect fitted for

affected individuals can be unconstrained in the most

general case (fitting a separate coefficient for each

genotype) or constrained for dominant, recessive or

additive modes of inheritance. In this paper we describe

the additive model, but dominant and recessive modes of

inheritance can be fitted in the same way by recoding

genotypes appropriately. Unconstrained models can be

fitted in a similar way with sufficient data.

The quantitative trait is treated as a response variable to

which a mixture of linear regressions is fitted. An

indicator variable Z is introduced to denote the group to

which individuals belong. It is a vector of binary variables

where the ith component takes the value 1 if the

phenotype of the ith individual is associated with the

genetic variant at the marker and zero otherwise. Hence,

the regression can be expressed according to Equation (1).

Throughout, subscript i corresponds to the ith individual and takes values in {1, ..., n}, where n is the total number of individuals. X, Y, Z are vectors, each of length n, denoting the genotype, the quantitative phenotype and the group labeling of all the individuals in the study respectively. e is a vector of independent and identically distributed random error terms; in particular, the error distribution is the same irrespective of the group label and genotype.

Y_i = (1 − Z_i)α + Z_i(β0 + β1 X_i) + e_i.   (1)

The parameters α, β0 and β1 are unknown: α = E[Y_i | Z_i = 0], β0 = E[Y_i | Z_i = 1, X_i = 0] and β1 is the size of the additive genetic effect attributable to the presence of a single risk allele for affected individuals. Implicit in this interpretation of the parameters is the assumption that E[e_i | X_i, Z_i] = E[e_i] = 0 for all X_i ∈ {0, 1, 2} and Z_i ∈ {0, 1}.
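As a concrete illustration, the generative process in Equation (1) can be sketched as a minimal simulation. This is our own sketch, assuming NumPy; the genotype frequencies and parameter values are illustrative only, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trait(x, z, alpha, beta0, beta1, tau):
    """Draw Y_i = (1 - Z_i)*alpha + Z_i*(beta0 + beta1*X_i) + e_i,
    with e_i ~ N(0, 1/tau) independent of genotype and group label."""
    mean = (1 - z) * alpha + z * (beta0 + beta1 * x)
    return mean + rng.normal(0.0, np.sqrt(1.0 / tau), size=len(x))

n = 1000
# Genotypes coded 0/1/2 = number of risk alleles (Hardy-Weinberg, MAF 0.2)
x = rng.binomial(2, 0.2, size=n)
# Group labels: Z_i = 1 for individuals showing the genetic effect
z = rng.binomial(1, 0.5, size=n)
y = simulate_trait(x, z, alpha=0.0, beta0=0.0, beta1=1.0, tau=4.0)
```

Individuals with Z_i = 0 scatter around α regardless of genotype, while those with Z_i = 1 follow the additive line β0 + β1 x, which is exactly the heterogeneity pattern of Figure 1.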

DISTRIBUTIONAL ASSUMPTIONS

The distribution of the errors defines the form of the distribution of the quantitative trait for each individual. It is assumed that the errors e_i (i ∈ {1, ..., n}) are independent, identically and normally distributed with expectation zero and constant variance τ^{-1} (independent of group labeling and genotype). Furthermore, a priori, it is assumed that Z_i is a Bernoulli variable with parameter η, i.e. Z_i = 1 with probability η and Z_i = 0 with probability 1 − η. Under these assumptions, and using f0(·; α, τ) to denote the normal density function with mean α and variance τ^{-1} and f1(·; β0, β1, x, τ) to denote the normal density function with mean β0 + β1 x and variance τ^{-1}, the likelihood of the

[Figure 1 near here: quantitative trait (SCP2 expression) on the y-axis against genotype (with jitter) on the x-axis.]

Fig. 1. An example which clearly demonstrates the mixture

hypothesis. The sample consists of 87 related CEU individuals (mother, father, child trios) and 89 related YRI individuals (also trios), plotted in red and blue respectively. The

significance of the plotting character is described in the results.

The genetic marker is located close to the gene SCP2 and is

associated with the expression of SCP2 in the CEU population

but not in the YRI population. Individuals in the sample can be

categorized into two groups according to whether or not they

show the genetic effect and in this example, the group labeling

is known a priori and can be labeled according to population

background. The dashed lines illustrate the estimated effects

after fitting the mixture model using the procedure described in

the following section.



observed data can be written as

L(y; x; η, α, β0, β1, τ) = ∏_{i=1}^{n} [η f1(y_i; β0, β1, x_i, τ) + (1 − η) f0(y_i; α, τ)].   (2)

Vectors x, y and z are used to denote observations of the random vectors X, Y and Z, respectively. To summarize, the quantitative trait is modeled using a normal mixture model with two groups and a total of five unknown parameters: the effect parameters α, β0 and β1, the mixture parameter η and the variance parameter τ^{-1}.

PRIOR ELICITATION

Specifying priors is a subjective task. In general, priors should reflect the user's prior beliefs and have good operating characteristics. In this section we describe priors for our model, which could be used by default in the absence of other information, but we recommend that users adjust priors according to context-specific information.

We need priors which can be used with computationally

efficient algorithms to enable the model to be fitted to

genetic markers genome-wide. As is standard practice and

documented in the Bayesian linear model literature [Lindley and Smith, 1972; O’Hagan, 1994], we assume conjugate priors (P1–P2) to aid this objective. The prior on (α, β0, β1) is conditional on τ^{-1} and, although this might not be considered a natural way to reflect prior beliefs, it is adopted for practical reasons and has the added benefit that posterior inference is independent of the units of measurement. If independent priors on (α, β0, β1) and τ^{-1} are preferred,

they can be separated although this is at the cost of the

efficiency of the optimization algorithms used to locate the

modes of the posterior parameter distribution, and care

must also be taken to ensure scaling is appropriate.

P1 Beta prior on the proportion of individuals for whom there is an association between the marker and the quantitative trait, i.e. η ~ Beta(λ1, λ2) with prior density

p(η) = η^{λ1−1} (1 − η)^{λ2−1} / B(λ1, λ2).

P2 Normal Inverse-Gamma prior on the effect and variance parameters, i.e.

p(α, β0, β1, τ^{-1}) = p(α, β0, β1 | τ^{-1}) p(τ^{-1}),

(α, β0, β1) | τ^{-1} ~ MVN(0, (K/τ) I3), and

τ ~ Gamma(d/2, a/2).

Conjugate priors (P1–P2) are fully parametrized by hyper-parameters λ1, λ2, K, a, d. Specification of hyper-parameters can be done with or without context-specific information. We propose a uniform distribution on (0,1) for the proportion parameter η, i.e. λ1 = λ2 = 1, and we conducted an extensive sensitivity study (Appendix A) with data sets simulated with different values of η to establish an appropriate range of values for the remaining hyper-parameters which do not adversely affect inference. For each value of η we simulated 1,000 data sets and fitted the models using priors with K = 20, K = 100 and K = 1,000; for each of these values of K we investigated the hyper-parameters a = d = 0.2, a = d = 0.02 and a = d = 0.002.

Since posterior inference is made on the basis of Bayes

factors, we compare the distributions of log Bayes factors.

A selection of these results is presented in Appendix A

(Supplementary Figs. 1–3). The study shows that for all values of η, Bayes factors are insensitive to the prior specification on a and d in the range (0.002, 0.2). When η is very large or very small, there is sensitivity to the value of K and, as expected, larger values of K provide stronger evidence for the null when η is small (Supplementary Fig. 1). Consequently, a value of K = 1,000 might result in subtle effects being missed. For values of η in the mid-range, the distributions are stable with K (Supplementary Fig. 2). As η gets large, the distributions with K = 100 and K = 1,000 concur (Supplementary Fig. 3), although there is variation evident when K = 20. For this reason we would suggest that, as a default, K is set to 100 and a and d are both set to 0.02. The results presented in this paper were obtained with these priors.

If there is knowledge about the degree of heterogeneity which is likely to be present in the sample, the uniform prior on η can be adjusted, with hyper-parameters set to yield the desired prior expected value and to reflect the uncertainty associated with these beliefs. However, this might introduce bias where heterogeneity is present but for reasons different from those suspected a priori. In particular, proportions are likely to vary at different markers according to their involvement in environmental or genetic interactions; for this reason we would always recommend setting λ1 = λ2 = 1.

Heritability estimates for a trait could be used to set the hyper-parameters a and d of the Gamma distribution on τ^{-1} by making some basic assumptions about the proportion of the variance attributable to genetics which can be explained by a single marker. For example, if the heritability of a trait is 0.2 and it is assumed that at most a single combination of interactions can explain 10% of heritability, then the gamma prior for τ might be centered such that d/a = 1/(Total variance − (0.02 × Total variance)), with the magnitude of the parameters selected to reflect the uncertainty associated with heritability estimates and the assumptions made about how heritability is explained by individual sets of interactions.
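The arithmetic of this centering rule can be made explicit. This is our own worked example assuming a trait scaled to unit total variance; the choice of overall scale for a and d is illustrative:

```python
# Heritability 0.2; a single combination of interactions explains at most
# 10% of that heritability -> 0.02 of the total trait variance.
total_variance = 1.0            # assume the trait is scaled to unit variance
h2 = 0.2
fraction_of_h2 = 0.10
explained = fraction_of_h2 * h2 * total_variance       # = 0.02

# Gamma(d/2, a/2) has mean d/a; tau is a precision, so center it at
# 1 / (residual variance after removing what the marker set explains).
prior_mean_tau = 1.0 / (total_variance - explained)    # d/a, about 1.0204

# Pick the magnitude of d and a to reflect uncertainty; keeping a = 0.02
# (the default scale from the sensitivity study) while centering the mean:
a = 0.02
d = a * prior_mean_tau
assert abs(d / a - prior_mean_tau) < 1e-12
```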

MODEL FITTING AND SELECTION

The classical approach to model fitting and model

selection is via maximum likelihood parameter estimation

and maximum likelihood ratio testing respectively. Al-

though maximum likelihood parameter estimation can be

efficiently implemented using the expectation maximiza-

tion (EM) algorithm [Dempster et al., 1977] by treating the

group allocations Z as missing data, maximum likelihood

ratio testing is not an effective strategy for model selection

in this setting. We need to compare and test the three hypotheses H0, Hm and H1; each pair of models is nested, which allows pairwise model comparisons to be made, but the hypotheses H0, Hm and H1 are non-nested when considered jointly; consequently there is no coherent way of testing the three hypotheses jointly in this framework. Furthermore, classical likelihood ratio test statistics do not incorporate uncertainty associated with parameter estimates. Consequently, tests of association for genetic markers with different minor allele frequencies and/or sample sizes cannot be compared directly.

The Bayesian approach to model fitting and selection

directly models parameter and model uncertainty using



probability distributions [Bernardo and Smith, 1994]. It provides an alternative and coherent framework for testing the hypotheses H0, Hm, H1 which can be used to directly compare tests at different genetic markers irrespective of the sample size and minor allele frequency [Wakefield, 2008]. The fully Bayesian approach to parameter estimation involves inferring the posterior parameter distribution and examining its properties. For the mixture model, the exact posterior parameter distribution is not tractable, thereby necessitating the use of sampling techniques such as Gibbs sampling or Metropolis-Hastings (Appendix B). These techniques can be implemented easily but they are computationally costly and not practical to implement genome-wide. Instead we suggest using the EM algorithm to find the mode of the posterior parameter distribution (Appendix C).
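To make the E- and M-steps concrete, here is a maximum-likelihood EM sketch for the mixture of regressions in Equation (1). It is a simplification of the paper's procedure: Appendix C targets the posterior mode under the conjugate priors, whereas this sketch omits the priors; the helper and all names are ours:

```python
import numpy as np

def _npdf(y, mean, sd):
    return np.exp(-0.5 * ((y - mean) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def em_mixture(y, x, n_iter=200):
    """EM for the two-group mixture of regressions (priors omitted).
    E-step: responsibilities w_i = P(Z_i = 1 | y_i).
    M-step: update eta and alpha, refit a weighted least-squares line
    (beta0, beta1) for the affected group, and pool residuals for tau."""
    n = len(y)
    eta, alpha, tau = 0.5, float(np.mean(y)), 1.0 / float(np.var(y))
    beta1 = float(np.cov(x, y)[0, 1] / np.var(x))   # crude initialization
    beta0 = float(np.mean(y)) - beta1 * float(np.mean(x))
    for _ in range(n_iter):
        sd = np.sqrt(1.0 / tau)
        f1 = _npdf(y, beta0 + beta1 * x, sd)
        f0 = _npdf(y, alpha, sd)
        w = eta * f1 / (eta * f1 + (1.0 - eta) * f0)          # E-step
        eta = float(np.mean(w))                                # M-step
        alpha = float(np.sum((1.0 - w) * y) / np.sum(1.0 - w))
        xb = np.sum(w * x) / np.sum(w)
        yb = np.sum(w * y) / np.sum(w)
        beta1 = float(np.sum(w * (x - xb) * (y - yb))
                      / np.sum(w * (x - xb) ** 2))
        beta0 = float(yb - beta1 * xb)
        sq = w * (y - beta0 - beta1 * x) ** 2 + (1.0 - w) * (y - alpha) ** 2
        tau = n / float(np.sum(sq))
    return eta, alpha, beta0, beta1, tau
```

With well-separated effects this converges quickly; as the text notes, several initializations should be used in practice to guard against poor local modes.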

Uncertainty surrounding which hypothesis H0, Hm, H1 is supported best by the data is expressed using probabilities. Prior probabilities for each hypothesis being correct can be assigned, i.e. P(H0), P(Hm), P(H1), where P(H0) + P(Hm) + P(H1) = 1. Alternatively, this information can be represented using the odds of one hypothesis relative to another. Using π_jk to denote the odds of model j relative to model k,

π_jk := P(Hj) / P(Hk).

Posterior model probabilities and odds of models given observed data can be used to assess model fit and are computed from the prior odds and model probabilities using Bayes' Theorem, as demonstrated by Equations (3) and (4). O_jk denotes the posterior odds of model j relative to model k given the data.

P(Hk | y) = f(y | Hk) P(Hk) / Σ_{j ∈ {0,1,m}} f(y | Hj) P(Hj),   (3)

O_jk = P(Hj | y) / P(Hk | y) = [f(y | Hj) P(Hj)] / [f(y | Hk) P(Hk)].   (4)

The quantity f(y | Hj) / f(y | Hk) defines the Bayes factor for model j relative to model k; it is the ratio of the marginal likelihood of the data under model j relative to that under model k. In the case that the prior odds are set to 1, the Bayes factor is exactly the posterior odds, but more generally the posterior odds is the Bayes factor multiplied by the prior odds.
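Equations (3) and (4) amount to normalizing prior-weighted marginal likelihoods. Since marginal likelihoods are computed on the log scale in practice, a numerically stable sketch (ours, assuming NumPy) is:

```python
import numpy as np

def posterior_model_probs(log_ml, log_prior=None):
    """Posterior probabilities for (H0, Hm, H1) from log marginal
    likelihoods, per Equation (3), computed stably via log-sum-exp."""
    log_ml = np.asarray(log_ml, dtype=float)
    if log_prior is None:                 # equal prior model probabilities
        log_prior = np.zeros_like(log_ml)
    lp = log_ml + log_prior
    lp -= lp.max()                        # guard against underflow
    p = np.exp(lp)
    return p / p.sum()

# Posterior odds O_jk of Equation (4) is then probs[j] / probs[k]:
probs = posterior_model_probs([-100.0, -95.0, -98.0])
```

Here the mixture model (second entry) dominates; its posterior odds against the homogeneous model is exp(−95 − (−98)) = exp(3) ≈ 20.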

Bayes Factors [Kass and Raftery, 1995] are frequently

used to assess model fit and they automatically account for

the complexity of each model because the computation of

the marginal likelihood for each model involves integra-

tion over the model parameter space which is larger for

complex models. Posterior model probabilities (easily

computed from Bayes factors) can also be used to assess

which hypothesis is best supported by the data. They have

a natural interpretation and can be computed to compare the fit of multiple different models, in contrast to likelihood ratios or odds, which make direct comparisons between two models. Consequently, in this setting they provide a coherent way of assessing the three hypotheses H0, Hm and H1. We recommend ranking heterogeneous candidates based on the posterior probabilities for Hm, the mixture hypothesis.

COMPUTING APPROXIMATE BAYES FACTORS

Computation of Bayes factors and posterior model probabilities requires calculation of the marginal likelihood f(y | Hk) for each model k ∈ {0, m, 1}. Using θk to denote the set of parameters for model k, the marginal likelihood is defined as

f(y | Hk) = ∫ f(y | θk) p(θk) dθk.   (5)

This quantity is analytically tractable for both the null and

the homogeneous effect models (Appendix D), but not for

the mixture model. There are several approaches which

can be used to approximate the marginal likelihood of the

data under the mixture including naive sampling from the

prior, importance sampling (on either parameters or class

labels in this instance) and Laplace’s method.

Sampling-based approaches are computationally intensive (Appendix E1–E3) and cannot be implemented genome-wide. The Laplace approximation [Bernardo and Smith, 1994] is based on the computation of the Hessian

matrix corresponding to the posterior parameter density

evaluated at the posterior mode. Provided the posterior

mode is not a poor local mode and the posterior density is

sufficiently peaked at this mode, it provides an accurate

way of efficiently approximating the marginal likelihood

which can be implemented genome-wide. Finding the

mode of the posterior parameter density is crucial for an

accurate approximation. For efficiency we use the EM

algorithm and recommend that several initializations are

used to guard against using poor local modes in the

evaluation of the Hessian. Alternatively, a more computationally intensive but reliable technique is to sample parameters from the posterior distribution and to estimate the posterior mode empirically from these samples.

Laplace’s method (Appendix E4) can be effectively used

to provide accurate estimates of the marginal likelihood

under the mixture model provided the sample size is

sufficiently large. Using this approximation of the margin-

al likelihood under the mixture model, we compute

approximate Bayes factors and use them to assess which

hypothesis is best supported by the data by reporting

posterior probabilities for each hypothesis being true.
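The identity underlying Laplace's method can be checked on a toy conjugate-normal model, where the log posterior is exactly quadratic and the approximation is exact. This sketch is ours and only illustrates the mechanics (posterior mode, Hessian, determinant term), not the paper's Appendix E4:

```python
import numpy as np

# y_i ~ N(mu, s2) with s2 known; prior mu ~ N(0, v). With d = 1 parameter,
# log f(y) = log f(y|mu_hat) + log p(mu_hat) + (d/2)log(2*pi) - (1/2)log|H|,
# where H is the negative Hessian of the log posterior at the mode mu_hat.
rng = np.random.default_rng(1)
s2, v = 0.25, 4.0
y = rng.normal(1.0, np.sqrt(s2), size=50)
n = len(y)

prec = n / s2 + 1.0 / v                  # negative Hessian (posterior precision)
mu_hat = (y.sum() / s2) / prec           # posterior mode

def log_norm(z, m, var):
    return -0.5 * np.log(2.0 * np.pi * var) - 0.5 * (z - m) ** 2 / var

loglik = log_norm(y, mu_hat, s2).sum()
logprior = log_norm(mu_hat, 0.0, v)
laplace = loglik + logprior + 0.5 * np.log(2.0 * np.pi) - 0.5 * np.log(prec)

# Exact log marginal: y ~ N(0, s2*I + v*J), with J the all-ones matrix
Sigma = s2 * np.eye(n) + v * np.ones((n, n))
sign, logdet = np.linalg.slogdet(Sigma)
exact = (-0.5 * n * np.log(2.0 * np.pi) - 0.5 * logdet
         - 0.5 * y @ np.linalg.solve(Sigma, y))
assert np.isclose(laplace, exact)
```

For the mixture model the log posterior is only approximately quadratic near the mode, which is why the approximation improves with sample size.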

GENOME-WIDE IMPLEMENTATION

Computational time. The amount of time taken to fit the models per SNP varies according to the convergence time of the EM algorithm, which itself depends on the degree and strength of signal present in the data. Supplementary Figure 4 illustrates this with the distribution of time taken to fit the models to a single marker across the range of simulations we performed (see Results). Genome-wide we would expect most markers to show no association with any individuals (i.e. H0 fits the data well). The median time taken to fit the model to non-associated SNPs is 0.026 s. Allowing for some markers to take longer than this time, we would expect a genome-wide screen with 500,000 markers to take approximately 4 hours on a single processor (Intel Xeon 1.6 GHz). Time taken scales linearly with the number of initializations of the EM algorithm. The implementation can be easily parallelized if additional speed is required. The implementation we have developed is coded in R


and is available upon request from the corresponding

author.

Validating the Laplace approximation. Our results showed that even with a sample size of less than 200 individuals, the Laplace approximation to the marginal log likelihood of the mixture model is good. However, the validity of the Laplace approximation can be investigated on a subset of markers using importance sampling (Appendix F). The subset of markers can be selected after fitting the approximation to all markers to ensure that a full range of Bayes factors (for the mixture model with respect to the null model) is validated.

Initializing the EM-algorithm. A major caveat of the EM-algorithm is that it can converge to local rather than global modes. In practice our simulations showed that the posterior density was not highly multi-modal, although this might not be the case with different data sets. The recommended initialization outlined in Appendix C nearly always provided the parameter estimates corresponding to the maximal mode for our simulations, but we would recommend requiring that at least two initializations converge to the same parameter estimates before accepting the resulting maximum marginal likelihood.

RESULTS

SIMULATION STUDY

We validate the modeling approach and investigate the magnitude of effects necessary for the mixture model to accurately identify heterogeneity by simulating data under a range of conditions. In each of the simulations we generate (independently for each marker) genotypes for 1,000 unrelated individuals under the assumption of Hardy-Weinberg equilibrium with a minor allele frequency of 0.2. Heterogeneity is introduced by simulating group allocations of individuals independently of genotype. This is done by simulating Bernoulli random variables with parameter η, the true proportion of affected individuals in the sample. The value of the quantitative trait is simulated from a normal distribution with standard deviation 0.5 and location parameter conditional on the group allocation and genotype. We investigate a range of values of η and a range of different genetic effects. The error standard deviation is constant across all simulations and genetic effect sizes are parametrized in terms of this quantity. We investigate effects where there is a shift of one standard deviation in the baseline (i.e. β0 = σ) for affected individuals and where there is not (i.e. β0 = 0). We investigate three different additive effects, namely β1 = 0.5σ, σ and 2σ, and refer to them as small, medium and large effects respectively. Examples of data sets simulated with these parameters are illustrated in Figure 2.

On each of the simulated data sets, we use the EM

algorithm (Appendix C) to locate posterior modes and

compute Bayes Factors with respect to the null hypothesis

using both the Laplace approximation (Appendix E4) and

importance sampling (Appendix E3) to estimate the

marginal likelihood under Hm. Under the assumption that

a priori all three hypotheses are equally likely, we use the

Bayes Factors to compute posterior probabilities for each

hypothesis being true.

Although we recommend using posterior probabilities

to rank markers as candidates for showing heterogeneity,

to illustrate the performance of the model with respect to

simulated data sets, we use a threshold approach to accept

a hypothesis. On this basis we generate probability curves

(Fig. 3) for accepting each model as a function of the

proportion of truly affected individuals in the sample.

These curves are analogous to power curves, which are widely reported in the literature. Probabilities are estimated

for each parameter set using 1,000 simulations and by

defining the probability that a model is accepted to be the

proportion of simulated data sets for which the posterior

model probability exceeds the acceptance threshold. The

thresholds for accepting Hm and H1 are set on the basis of

the empirical distribution of posterior probabilities for Hm

and H1 when data is simulated under H0 (i.e. the 95th

percentile of these distributions). A threshold for accepting

the null model can be specified in a similar way by

defining it such that 5% of null data sets would be

incorrectly rejected (i.e. the 5th percentile of the distribu-

tion of posterior probabilities for the null model when data

is simulated under the null model). This corresponds to a

threshold of 0.9. Note that the probability curves need not

be strictly monotonic increasing (or decreasing for the null

model) as functions of h because there are three possible

model choices.
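The calibration just described can be sketched as a small helper. This is our own code, assuming posterior model probabilities have already been computed on data sets simulated under H0:

```python
import numpy as np

def acceptance_thresholds(null_probs):
    """Calibrate acceptance thresholds from posterior model probabilities
    computed on null simulations. `null_probs` has one row per simulated
    H0 data set with columns (P(H0|y), P(Hm|y), P(H1|y))."""
    null_probs = np.asarray(null_probs, dtype=float)
    t_m = np.percentile(null_probs[:, 1], 95)  # accept Hm if P(Hm|y) > t_m
    t_1 = np.percentile(null_probs[:, 2], 95)  # accept H1 if P(H1|y) > t_1
    t_0 = np.percentile(null_probs[:, 0], 5)   # reject H0 if P(H0|y) < t_0
    return t_0, t_m, t_1
```

By construction, roughly 5% of null data sets exceed each acceptance threshold for Hm and H1, and 5% fall below the rejection threshold for H0.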

Figure 3 columns A, B and C illustrate the probability of

accepting H0, Hm and H1, respectively, with genetic effects

excluding and including a shift in baseline (row one and

row two respectively). The figure can be interpreted to

answer the following questions:

1. Is there strong evidence to support H0 when H0 is true?

2. Is there strong evidence to support H1 when H1 is true?

3. When is there strong evidence to support Hm? In

particular how large do genetic effects need to be, and

in what proportion do they need to be present in the

sample before heterogeneity can be detected with

moderate probability?

Figure 3 column (A) shows that when the modeling

procedure is applied to data sets simulated under H0, there

is strong evidence in favor of H0 with the posterior null

model probabilities exceeding 0.9 in 95% of cases. The

probability of accepting H0decreases as the proportion of

affected individuals increases and this is to be expected as

the data sets are simulated from a true model which is

increasingly different from the null in both cases (with and

without a shift in the baseline). The rate of decrease

depends on the size of the effect. For large effects the

decrease is rapid, as the data departs further from the null.

For small effects the decrease is gradual such that

approximately 60% of individuals need to be affected by

the marker for the posterior probability for H0 to fall below 0.9 and hence for H0 to be rejected.

The probability curves for accepting the mixture model

(Fig. 3; column B) rise from 0.05 for η = 0 (by definition of the threshold), increase for intermediate values of η and decrease as η approaches 1. The rate at which the

probability increases and the value to which it rises

depends on the additive effect size and whether or not

there is a shift in the baseline. For large effects probabil-

ities quickly approach 1, even for small and large

proportions, whereas for a small effect size with no shift

in the baseline, the rate at which probability of accepting

the mixture increases/decreases as a function of η is

smaller and even when there is an equal proportion of
