Much of the literature on meta-analysis deals with analyzing effect sizes obtained from k independent studies, in each of which a single treatment is compared with a control (or with a standard treatment). Because the studies are statistically independent, so are the effect sizes. Studies, however, are not always so simple. For example, some may compare multiple variants of a type of treatment against a common control. Thus, in a study of the beneficial effects of exercise on blood pressure, independent groups of subjects may each be assigned one of several types of exercise: running for twenty minutes daily, running for forty minutes daily, running every other day, brisk walking, and so on. Each of these exercise groups is to be compared with a common sedentary control group. In consequence, such a study will yield more than one exercise versus control effect size. Because these effect sizes share a common control group, their estimates will be correlated. Studies of this kind are called multiple-treatment studies.

In other studies, the single-treatment, single-control paradigm may be followed, but multiple measures will be used as endpoints for each subject. Thus, in comparing exercise and lack of exercise on subjects' health, measurements of systolic blood pressure, diastolic blood pressure, pulse rate, cholesterol concentration, and so on, may be taken for each subject. Similarly, studies of the use of carbon dioxide for storage of apples can include measures of flavor, appearance, firmness, and resistance to disease. A treatment versus control effect-size estimate may be calculated for each endpoint measure. Because measures on a common subject are likely to be correlated, the corresponding estimated effect sizes for these measures will be correlated within studies. Studies of this type are called multiple-endpoint studies (for further discussion of multiple-endpoint studies, see Gleser and Olkin 1994; Raudenbush, Becker, and Kalaian 1988; Timm 1999).

A special, but common, kind of multiple-endpoint study is one in which the measures (endpoints) are subscales of a psychological test. For study-to-study comparisons, or to have a single effect size for treatment versus control, we may want to combine the effect sizes obtained from the subscales into an overall effect size. Because subscales have differing accuracies, it is well known that weighted averages of such effect sizes are required. Weighting by the inverses of the variances of the estimated subscale effect sizes is appropriate when these effect sizes are independent, but may not produce the most precise estimates when the effect sizes are correlated.

In each of the situations above, possible dependency among the estimated effect sizes needs to be accounted for in the analysis. To do so, additional information has to be obtained from the various studies. For example, in multiple-endpoint studies, dependence among the endpoint measures leads to dependence between the corresponding estimated effect sizes, and values for the between-measures correlations will thus be needed for any analysis. Fortunately, as will be seen, in most cases this is all the extra information that will be needed. When the studies themselves fail to provide this information, the correlations can often be imputed from test manuals (when the measures are subscales of a test, for example) or from the published literature on the measures used.
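To see concretely why the correlations matter when combining effect sizes, suppose the estimated effect sizes from a study are collected in a vector $\mathbf{d}$ with covariance matrix $\Sigma$. A minimal sketch of a generalized least squares combination (the notation here is illustrative and is not taken from this section) gives the overall estimate and its variance as

$$\hat{\theta} = \frac{\mathbf{1}'\Sigma^{-1}\mathbf{d}}{\mathbf{1}'\Sigma^{-1}\mathbf{1}}, \qquad \operatorname{Var}(\hat{\theta}) = \left(\mathbf{1}'\Sigma^{-1}\mathbf{1}\right)^{-1},$$

where $\mathbf{1}$ denotes a vector of ones. For two estimates with a common variance $v$ and correlation $\rho$, the optimal weights are equal and $\operatorname{Var}(\hat{\theta}) = v(1+\rho)/2$, whereas treating the estimates as independent would report $v/2$; ignoring a positive correlation thus understates the uncertainty of the combined effect size. Carrying out such a combination requires the off-diagonal elements of $\Sigma$, which is precisely why the between-measures correlations must be obtained or imputed.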
When dealing with dependent estimated effect sizes, we need formulas for the covariances or correlations. Note that the dependency between estimated effect sizes in multiple-endpoint studies is intrinsic to such studies, arising from the relationships between the measures used, whereas the dependency between estimated effect sizes in multiple-treatment studies is an artifact of the design (the use of a common control). Consequently, formulas for the covariances between estimated effect sizes differ between the two types of studies, necessitating separate treatment of each type. On the other hand, the variances of the estimated effect sizes have the same form in both types of study, namely that obtained from considering each effect size in isolation (see chapters 15 and 16, this volume). Recall that such variances depend on the true effect size, the sample sizes for treatment and control, and (possibly) the treatment-to-control variance ratio (when the variance of a given measurement is assumed to be affected by the treatment). As is often the case elsewhere in this volume, the results obtained are large-sample (within studies) normality approximations based on the central limit theorem.
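As an illustration of this common form, for a standardized mean difference with true value $\delta$, estimated by $d$ from treatment and control groups of sizes $n^T$ and $n^C$ with a common within-group variance, the familiar large-sample approximation is

$$\operatorname{Var}(d) \approx \frac{1}{n^T} + \frac{1}{n^C} + \frac{\delta^2}{2(n^T + n^C)}.$$

For the multiple-endpoint case, a commonly cited large-sample approximation for the covariance of two standardized mean differences $d_1$ and $d_2$ computed on the same subjects, when the underlying measures have correlation $\rho$ in each group, is

$$\operatorname{Cov}(d_1, d_2) \approx \rho\left(\frac{1}{n^T} + \frac{1}{n^C}\right) + \frac{\rho^2\,\delta_1\,\delta_2}{2(n^T + n^C)},$$

which reduces to the variance expression when $\rho = 1$. These displays are offered as illustrative approximations under the equal-variances assumption, not as the exact expressions developed separately for each type of study.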