
Using Mark-Recapture Models to Estimate Survival from Telemetry Data

Russell W. Perry, Theodore Castro-Santos, Christopher M. Holbrook, and Benjamin P. Sandford

Section 9.2

INTRODUCTION

Analyzing telemetry data within a mark–recapture framework is a powerful approach for estimating demographic parameters (e.g., survival and movement probabilities) that might otherwise be difficult to measure. Yet many studies using telemetry techniques focus on fish behavior and fail to recognize the potential of telemetry data to provide information about fish survival. The sophistication of both mark–recapture modeling and telemetry has dramatically improved since the 1980s, largely due to technological advancements in computing power (for mark–recapture models) and electronic components (for telemetry). Such advances now allow mark–recapture models to take advantage of the detailed information that telemetry techniques can provide.

The key feature of mark–recapture models is simultaneous estimation of detection and survival probabilities. With telemetry, a "capture" event consists of detecting a given tag code one or more times at a specific location or time. By contrast, in some studies interest may focus on the probability of detecting a single tag transmission (see Sections 7.2 and 9.1). Compared to conventional mark and recapture methods, telemetry methods often have greater detection probabilities due to large detection ranges, increased "effort" (i.e., continuous monitoring with autonomous receivers), and the ability to simultaneously monitor multiple locations. Nonetheless, perfect detectability is rare in telemetry studies because both random processes (e.g., electronic noise) and nonrandom processes (e.g., a receiver temporarily losing power) can allow a fish to pass a receiver undetected. Failure to account for imperfect detection can lead to serious bias in survival estimates. When using telemetry to estimate survival, it is therefore critical to explicitly estimate detection probabilities to ensure unbiased estimates of survival (see Section 7.2). Fortunately, using telemetry techniques and mark–recapture models together yields the best of both worlds: well-designed telemetry systems deliver high detection probabilities that result in precise estimates from small sample sizes, and mark–recapture models ensure that estimates of the demographic parameters are unbiased with respect to the detection process.


In this Section, we illustrate the flexibility with which mark–recapture models and telemetry systems can be tailored to unique situations in aquatic environments. We describe simple to complex mark–recapture models and the associated telemetry system design needed to support each model. We simulate a tagged population of fish migrating downstream past a dam and apply different models to this simulated data set. Although we present a specific application, the general framework and flexibility of using mark–recapture models with telemetry data applies to many situations. We then discuss the assumptions associated with each model and give examples showing how assumptions can be violated. Since our goal is to introduce the application of mark–recapture models to telemetry data, we do not present a complete treatment of the statistical theory or software used to implement mark–recapture models. This Section will help readers familiar with either telemetry or mark–recapture models to design telemetry studies within a mark–recapture framework.

Telemetry system design (e.g., the spatial arrangement of antennas or hydrophones) dictates the parameters that can be estimated using a mark–recapture model. More complex mark–recapture models require more detailed information from a telemetry system. Therefore, study design is crucial to successfully applying mark–recapture models to telemetry data and involves 1) identifying the demographic parameters of interest, 2) designing a mark–recapture model to estimate these parameters, and 3) implementing a telemetry system that provides the detection data required by the mark–recapture model. In our experience, many telemetry studies fail to follow these important steps. Therefore, the organization of this Section emphasizes the critical linkage between telemetry system design and mark–recapture model design.

A SIMULATED FISH PASSAGE STUDY

As described in Section 9.1, we simulated a fish passage study and then applied both time-to-event and mark–recapture analyses to the same simulated data set. Section 9.1 describes the simulation from the perspective of time-to-event analysis; here we describe details of the simulation relating to mark–recapture modeling. In summary, simulated tagged fish were released upstream of a dam, arrived at the dam over time during experimental treatments consisting of either a low-spill or high-spill dam operation, and then migrated downstream to the ocean. The proportion of tagged fish passing through the dam's turbines and spillway arises as a function of 1) treatment-specific rates of passage, and 2) arrival timing in the forebay (see Section 9.1).

Survival was simulated as a function of three different mortality processes occurring at different locations in the study area: 1) mortality upstream of the dam, which was a function of residence time in the forebay, 2) mortality as a result of passing through either the turbines or spillway, and 3) mortality between the dam and the river mouth at ocean entry. Specifically, mortality upstream of the dam is determined by time-dependent mortality in the forebay relative to the amount of time required to pass the dam (see Section 9.1). The probability of surviving dam passage was set at 0.70 for the turbines and 0.90 for the spillway. Survival from the dam to the river mouth declined exponentially over time as S = e^(–rt), where r = 0.005 and t is the date of dam passage. This pattern was selected to simulate the effects of migratory delay on physiological and ecological preparedness for the marine environment (McCormick et al. 1998). Fates of individual fish were simulated by using random number generators to draw Bernoulli outcomes (i.e., we "flipped a coin") with associated survival probabilities (i.e., a probability of "heads").
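The Bernoulli fate-drawing described above can be sketched in a few lines of code. This is an illustrative sketch only, not the simulation code used for the study: the route-specific dam-passage survival probabilities (0.70 and 0.90) and the decay rate r = 0.005 come from the text, while the route assignments, passage dates, and all function names are simplified placeholders of ours.

```python
import math
import random

random.seed(42)  # reproducible Bernoulli draws

S_ROUTE = {"turbine": 0.70, "spillway": 0.90}  # dam-passage survival (from text)
R_DECAY = 0.005                                # in-river mortality rate (from text)

def survives_to_ocean(route, day_of_passage):
    """Draw Bernoulli fates for dam passage, then downstream migration."""
    # 1) survive passage through the assigned route
    if random.random() >= S_ROUTE[route]:
        return False
    # 2) survive from dam to river mouth: S = exp(-r * t)
    return random.random() < math.exp(-R_DECAY * day_of_passage)

# placeholder population: random routes and passage dates
fish = [(random.choice(["turbine", "spillway"]), random.randint(100, 160))
        for _ in range(300)]
fates = [survives_to_ocean(route, day) for route, day in fish]
print(sum(fates), "of", len(fates), "simulated fish survived to the ocean")
```

Because survival below the dam is exp(–0.005 t), later passage dates yield lower in-river survival, mimicking the declining marine readiness of delayed migrants.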


Estimating Survival Using Telemetry

As described in Section 9.1, 300 fish were released upstream of the dam, but we also simulated a release of 300 tagged fish in the tailrace corresponding to release times upstream of the dam. Fish released in the tailrace were considered a "control" group because their survival between the dam and river mouth was influenced only by in-river mortality processes (i.e., day of year). In contrast, for fish passing the dam, survival between the dam and river mouth was influenced by both mortality due to dam passage and mortality due to in-river processes.

The telemetry system was simulated to detect fish in the forebay, within each passage route (turbines and spillway), in the tailrace just downstream of the dam, and at the river mouth to detect fish that survived to the ocean (see Section 9.1 for details). Detection probabilities often vary among monitoring sites due to variable effort (e.g., number and detection range of antennas) and site configuration (e.g., channel width). Therefore, we assigned different detection probabilities to each monitoring site. The probability of detecting a tagged fish was set at 0.95 for the turbines, 0.75 for the spillway, 0.80 for the tailrace, and 0.70 at the river mouth. As with survival, detections of individual fish were simulated as random draws from a Bernoulli process with associated detection probabilities. Detection probabilities in actual field studies may be higher or lower than those simulated here.

By simulating the data, we know exactly the survival, passage, and detection fates of each individual. Therefore, by comparing estimates from the mark–recapture model to calculations based on the known fates, we can verify that the estimated parameters are unbiased.

APPLYING MARK-RECAPTURE MODELS TO THE SIMULATED DATA SET

Given the telemetry system deployed to detect fish in our simulated study, a number of different mark–recapture models could be fit to these data. It is important to recognize that the complexity of any mark–recapture model is limited by the complexity of the telemetry system used to monitor the tagged population. Thus, models may be only as complex as the telemetry system allows, but may be simplified as desired by appropriately pooling or ignoring detection data. We illustrate this concept by first fitting simple mark–recapture models to telemetry data and showing which parameters can be estimated. We then show how adding information from the telemetry system allows for more complex models with additional parameters.

THE CORMACK–JOLLY–SEBER MODEL

The Cormack–Jolly–Seber (CJS) model has been the foundation of mark–recapture modeling for over forty years (Cormack 1964; Jolly 1965; Seber 1965). Two types of parameters are estimated by the CJS model: S_i, the probability of surviving from telemetry station i to i + 1, and p_i, the probability of being detected at station i conditional on surviving to station i. For our simulated telemetry study, a CJS model can be constructed using detections at two sites: one at the dam (combining detections within the spillway and turbines) and one at the river mouth (Figure 1). This is the minimum telemetry system design that allows us to obtain any estimate of survival, and for the moment, we ignore information from other telemetry stations in our study. As will be seen, however, the CJS model for this telemetry design reveals little about our simulated population.

Relating the CJS parameters to our study, S_1 estimates the probability of fish surviving from the release point to a passage route entrance (spillway or turbines), and S_2 represents survival of fish from a passage route entrance to the river mouth. For detection, p_2 represents the average detection probability of both passage routes (spillway and turbines), and p_3 represents detection probability at the river mouth (Figure 1). By convention, the release site is considered the first detection location and p_1 is set to 1.

With the telemetry design for this model, we cannot estimate all of the parameters just described. Detection probability at telemetry station i is estimated from detections at locations downstream of station i. Therefore, the detection probability at the last station cannot be estimated. Since detection probability is needed to estimate survival, survival alone cannot be estimated through the final reach. The best we can do is to estimate λ = S_2 * p_3, the joint probability of surviving and being detected at the river mouth. This confounding of the parameters limits inferences about survival in the final reach.

Maximum Likelihood Estimation

Here, we briefly present the statistical theory used to estimate the parameters of all mark–recapture models, using our simple CJS model as an example. Most mark–recapture models are based on the multinomial distribution, a probability distribution describing the probability of occurrence of events falling into discrete categories. In our case, the categories are detection histories of each fish's migration through the study area. Detection histories are composed of an alphanumeric string describing whether a fish was detected at each location in the study area. For the CJS model, a "1" means a fish was detected at a given location and a "0" means it was not detected. For the simple CJS model above, a "101" history means the fish was released upstream of the dam ("1"), not detected in either the spillway or turbines ("0"), but subsequently detected at the river mouth ("1").

The set of all possible detection histories forms the categories of a multinomial distribution. In turn, each detection history has a probability of occurrence that depends on the underlying survival and detection probabilities (Table 1). For example, for the detection history "101" to have occurred, a fish must have survived to a passage route (S_1), not been detected in either passage route (1 – p_2), survived to the river mouth (S_2), and been detected at the river mouth (p_3). Thus the probability of observing detection history "101" is the product of these event probabilities: S_1 * (1 – p_2) * S_2 * p_3 = S_1 * (1 – p_2) * λ.

Figure 1. Schematic of the CJS model (Release → S_1 → Dam → S_2 → River mouth) showing the survival (S_1 and S_2) and detection probabilities (p_2 and p_3) that can be estimated from the telemetry arrays installed at the dam and river mouth. In a CJS model, only the joint probability λ = S_2 * p_3 can be estimated in the last reach. For this example, p_3 is estimated from a separate model for replicate arrays (described in detail below). Given an estimate for p_3, S_2 can be obtained as λ/p_3.

For any given data set of detection histories, there is a unique set of parameter values that is most likely to have produced the observed detection history frequencies. To find the most likely parameter values, the next step in mark–recapture analysis is to define the likelihood function. For multinomial models, the likelihood takes the form

L(θ | n_i, R) ∝ ∏_i π_i^(n_i),

where L(θ | n_i, R) is the likelihood of the parameters (θ) given the data (n_i, R): π_i is the probability of observing each detection history, n_i is the number of fish with each detection history, and R is the total number of fish released. Given the probabilities of each detection history and the frequencies of occurrence in the simulated data set (Table 1), the full likelihood function for this example is:

L(θ | n_i, R) ∝ (S_1 p_2 λ)^54 × [S_1 (1 – p_2) λ]^11 × [S_1 p_2 (1 – λ)]^183 × [(1 – S_1) + S_1 (1 – p_2)(1 – λ)]^52

The likelihood function yields the relative likelihood of one set of parameter values over another for a given data set of detection histories. By calculating and comparing likelihoods among different values for S_1, p_2, and λ, we can identify the most likely parameter set for the data. These "maximum likelihood estimates" will be closest to the true values of the parameters.
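To make this concrete, the following sketch (ours, not code from the text) evaluates the multinomial log-likelihood for the detection-history counts in Table 1 and maximizes it numerically by coordinate-wise ternary search; any standard optimizer (e.g., Newton–Raphson) would serve the same purpose, and the function names here are our own.

```python
import math

# detection-history counts from Table 1
COUNTS = {"111": 54, "101": 11, "110": 183, "100": 52}

def log_likelihood(S1, p2, lam):
    """Multinomial log-likelihood of the four detection histories."""
    pi = {
        "111": S1 * p2 * lam,
        "101": S1 * (1 - p2) * lam,
        "110": S1 * p2 * (1 - lam),
        "100": (1 - S1) + S1 * (1 - p2) * (1 - lam),
    }
    return sum(n * math.log(pi[h]) for h, n in COUNTS.items())

def maximize(sweeps=25, iters=60):
    """Coordinate ascent: ternary-search each parameter over (0, 1) in turn."""
    theta = [0.5, 0.5, 0.5]  # starting values for S1, p2, lambda
    for _ in range(sweeps):
        for j in range(3):
            lo, hi = 1e-6, 1 - 1e-6
            for _ in range(iters):
                a, b = lo + (hi - lo) / 3, hi - (hi - lo) / 3
                ta, tb = list(theta), list(theta)
                ta[j], tb[j] = a, b
                if log_likelihood(*ta) < log_likelihood(*tb):
                    lo = a  # maximum lies to the right of a
                else:
                    hi = b  # maximum lies to the left of b
            theta[j] = (lo + hi) / 2
    return theta

S1_hat, p2_hat, lam_hat = maximize()
print(f"S1 = {S1_hat:.3f}, p2 = {p2_hat:.3f}, lambda = {lam_hat:.3f}")
```

Run on these counts, the numerical search settles on the same values reported later from the closed-form estimators (S_1 ≈ 0.951, p_2 ≈ 0.831, λ ≈ 0.228).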

In practice, two approaches are used to find the maximum likelihood estimates: 1) deriving equations that provide the maximum likelihood estimates (the "analytical" approach), and 2) using an optimization routine (e.g., the Newton–Raphson method) that systematically calculates the likelihood of different parameter values to find the set of parameter values that maximizes the likelihood (the "numerical" approach). The two methods are akin to estimating the mean using Σx_i/n (the "analytical" approach) or by finding the value of the mean that minimizes the total sum of squares (the "numerical" approach). The numerical approach is by far the most common means of obtaining maximum likelihood estimates because analytical expressions for maximum likelihood estimates do not exist for many mark–recapture models. However, both approaches can be used for the CJS model. We adapted the maximum likelihood estimators of the CJS model from Seber (1982) and Burnham et al. (1987) and present those equations here to illustrate both approaches.

Table 1. All possible detection histories, their probability of occurrence (π_i), and the number of fish with each capture history in the simulated data set (n_i) for our simple CJS model.

Capture history    Probability of occurrence (π_i)              n_i
111                S_1 * p_2 * λ                                 54
101                S_1 * (1 – p_2) * λ                           11
110                S_1 * p_2 * (1 – λ)                          183
100                (1 – S_1) + S_1 * (1 – p_2) * (1 – λ)         52


The maximum likelihood estimators for the detection probability (p_i), survival probability (S_i), and joint survival-detection probability (λ) of the CJS model are:

p̂_i = r_i / (r_i + z_i)

Ŝ_(i–1) = M̂_i / M̂_(i–1)

M̂_i = m_i (r_i + z_i) / r_i = m_i / p̂_i

λ̂ = r_(k–1) / m_(k–1)

where the variables are defined as follows:

i = detection site, i = 1, 2, …, k, numbered from upstream to downstream (i = 1 is the release site, and k = 3 in our simple CJS model).
m_i = number of fish detected at site i.
r_i = number of fish detected downstream of site i (i.e., at sites i + 1, …, k), of those detected at site i.
z_i = number of fish not detected at site i, but detected downstream of site i.

Note that M_1 = R (the number of fish released), p_1 = 1, and r_i + z_i = the total number of fish detected downstream of site i and, therefore, known to be alive at site i.

The maximum likelihood estimators for the CJS model make intuitive sense. The detection probability is the fraction of fish detected at site i of those known to have migrated past site i. The parameter M̂_i estimates the number of fish alive at site i, which is the number of fish detected at site i divided by the detection probability at site i. The survival probability from site i – 1 to site i is the estimated number of fish alive at site i divided by the estimated number of fish alive at the previous site, i – 1. These maximum likelihood estimators show the direct connection between the survival probabilities and the numbers of fish detected at each location. Standard errors for these estimators can be found in Seber (1982) and Burnham et al. (1987).

Applying the Model to the Simulated Data Set

The frequencies of the capture histories in Table 1 provide all the information needed to

apply the CJS maximum likelihood estimators:


m_2 = n_111 + n_110 = 237,
r_2 = n_111 = 54,
z_2 = n_101 = 11,
M̂_2 = 237/0.8308 = 285.3,
M_1 = R = 300
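These summary statistics can be plugged straight into the closed-form estimators. A minimal sketch (ours) reproducing the arithmetic above:

```python
R = 300            # fish released (M_1 = R)
m2 = 54 + 183      # fish detected at the dam (histories 111 + 110)
r2 = 54            # of those, also detected downstream (history 111)
z2 = 11            # missed at the dam but detected downstream (history 101)

p2_hat = r2 / (r2 + z2)   # detection probability at the dam
M2_hat = m2 / p2_hat      # estimated number of fish alive at the dam
S1_hat = M2_hat / R       # survival from release to the dam
lam_hat = r2 / m2         # joint survival-detection probability below the dam

print(f"p2 = {p2_hat:.4f}, M2 = {M2_hat:.1f}, "
      f"S1 = {S1_hat:.3f}, lambda = {lam_hat:.3f}")
# → p2 = 0.8308, M2 = 285.3, S1 = 0.951, lambda = 0.228
```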

In Table 2, we compare the analytical estimates from the simulated data set to the known fates of fish surviving to each detection site over the entire study period. We also fit the CJS model with the software program USER (User-Specified Estimation Routine; Lady and Skalski 2009). This program maximizes the likelihood via an optimization routine and returns maximum likelihood estimates and their standard errors.

The analytical and numerical estimators produced the same parameter estimates (Table 2), and both are within 1.1 percentage points of the known-fate estimates. Note that λ accurately estimates S_2 * p_3 from the known-fate data. An important insight here is that our interpretation of S_2 from λ depends completely on assumptions about the value of p_3. Given that p_3 cannot be estimated with the current design, if we had assumed perfect detectability (p_3 = 1), then our estimate of S_2 would be 0.230 when the true value is really 0.365. Similarly, our estimate of S_1 would have been negatively biased if we had not used the mark–recapture model but instead assumed that p_2 = 1. In this case, substantial negative bias is the consequence of assuming perfect detectability. This example highlights the importance of explicitly estimating detection probabilities using a mark–recapture model in order to obtain unbiased estimates of survival.
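The size of that bias is easy to see numerically. In this sketch (ours), the naive estimates treat every undetected fish as dead, using the values from Table 2:

```python
R, m2 = 300, 237                 # released; detected at the dam (111 + 110)
p2_hat, lam_hat = 0.831, 0.228   # mark-recapture estimates (Table 2)

# Naive estimates that assume perfect detection (p = 1)
S1_naive = m2 / R    # 237/300 = 0.79: biased low versus the true 0.94
S2_naive = lam_hat   # reads lambda as survival: 0.228 versus the true 0.365

# Mark-recapture estimate that corrects for imperfect detection
S1_mr = (m2 / p2_hat) / R   # ~0.951

print(f"naive S1 = {S1_naive:.3f}, corrected S1 = {S1_mr:.3f}")
```

The roughly 15-percentage-point gap in S_1 is mortality wrongly imputed to fish that simply slipped past the dam arrays undetected.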

Extensions to the Simple CJS Example

In the simple example above, we did not use telemetry arrays located in the forebay or in the immediate tailrace as shown in Figure 1 of Section 9.1. However, adding data from those receivers allows us to estimate new parameters and improve the precision of existing parameters. For example, by including the forebay receiver detections as an additional occasion, we can separately estimate survival 1) between release and the forebay, and 2) between the forebay and the entrances to the spillway or turbines. Spatial partitioning of survival is the most common reason to add new receiver locations to a telemetry system.

Table 2. Known-fate survival of simulated fish, analytical estimates of parameters, and numerical estimates obtained from maximizing the likelihood via an optimization routine.

Parameter   Known-fate estimates     Analytical estimates     Numerical estimates (SE)
S_1         282 of 300 = 0.940       285.3/300 = 0.951        0.951 (0.048)
p_2         238 of 282 = 0.844       54/(54 + 11) = 0.831     0.831 (0.047)
S_2         103 of 282 = 0.365       NA                       NA
p_3         65 of 103 = 0.631        NA                       NA
λ           65 of 282 = 0.230        54/237 = 0.228           0.228 (0.027)


Extending the model with an additional sampling occasion can also improve the precision of survival estimates. For example, antennas in the tailrace would likely detect transmitters in fish that had died during dam passage, making it impossible to estimate survival over such a short distance. Nonetheless, including this detection location in the model could substantially improve the precision of p_2 and S_1 because the precision of these parameters is influenced by the number of surviving fish detected downstream of the dam. In the current model, only the 65 fish detected at the river mouth were used to estimate p_2 (Table 1). In contrast, in the simulated data set, 217 fish were detected in the tailrace. It makes sense to use these tailrace detections to improve the precision of S_1.

To use the tailrace array, a detection location can be added to the CJS model as the next sampling location after the dam (see Figure 1). Then the survival probability for the reach between the dam and the tailrace array is set to 1, assuming that all fish passing the dam will also pass the tailrace array, whether alive or dead. With this approach, detections at both the tailrace and river-mouth telemetry stations are used to estimate p_2 and S_1, and all other parameters are interpreted as above. When this model is applied to the data set, Ŝ_1 = 0.940 with a standard error of 0.018. This estimate is exactly equal to the known-fate estimate for S_1, but the standard error is less than half that obtained from our original model (Table 2), which represents a substantial improvement in precision. Another alternative approach is to lump the dam and tailrace detections together into one occasion. However, maintaining separate occasions provides spatially explicit detection probabilities that may be insightful for evaluating the performance of the telemetry system. Further, the approach used here can be applied to more complex models that estimate route-specific survival (see Multistate models). These extensions to our simple CJS model illustrate the flexibility of using telemetry in conjunction with mark–recapture models.

Assumptions

CJS models are subject to seven assumptions. These assumptions relate to inference about the population of interest, error in interpreting transmitter signals, and the statistical fit of the data to the model's structure:

1) Tagged individuals are representative of the population of interest. For example, if the target population is wild-origin Chinook salmon Oncorhynchus tshawytscha, then the sample of tagged fish should be drawn from that population. If hatchery-origin Chinook salmon must be tagged, then inferences about survival of wild fish from hatchery fish must be based on subject-matter arguments. Other areas that require representation include size distribution, health profiles, and migration timing. In many telemetry studies this may be difficult to achieve perfectly, requiring caveats or discussion for proper inference. For example, the size of an acoustic or radio tag may preclude tagging the smallest individuals of a particular population due to tag-burden concerns, or a study may need to be implemented during a three-week window of a five-week migration distribution. As we've noted, careful study design and candid discussion of limitations maximize the value of inference.

2) Survival of tagged fish is the same as that of untagged fish. For example, the tagging procedures or detection of fish at telemetry arrays should not influence survival or detection probabilities. If the transmitter negatively affects survival, then estimates of survival rates will be negatively biased relative to untagged fish. Also, using our simulated example, if the tagging process or transmitter presence affects swimming performance, tagged fish may not pass through spillway or turbine routes in the same proportions as untagged fish.

3) All sampling events are instantaneous. When the spatial or temporal scale of sampling occasions approaches the scale of time or distance between occasions, it becomes difficult to correctly attribute mortality to a specific sampling period. For example, for spatial mark–recapture models, if the detection range of a telemetry station is 200 m but the distance between telemetry stations is only 100 m, it becomes difficult to determine where mortality occurs. Thus, when estimating survival through space, sampling should take place over a short distance (e.g., hundreds of meters) relative to the distance between telemetry stations (e.g., kilometers). Likewise, when estimating survival over time, sampling occasions should be short (e.g., one day) relative to the time between sampling occasions (e.g., weeks). One way to address this assumption is to place telemetry stations, if possible, in locations where tagged fish move relatively quickly past an array and do not reside within the detection range for an extended period.

4) The fate of each tagged fish is independent of the fates of other tagged fish. If fish exhibit some sort of group behavior (i.e., schooling), sample size is effectively reduced and precision estimates become biased downward.

5) The prior detection history of a tagged fish has no effect on its subsequent survival. For example, in mark–recapture studies requiring physical recapture of animals, the capture process may influence subsequent survival. For telemetry, this assumption is usually satisfied by the passive nature of detecting tags.

6) All tagged fish alive at a sampling location have the same detection probability.

7) All tags are correctly identified, and the status of tagged fish (i.e., alive or dead) is known without error. This assumes fish do not lose their tags and that the tag is functioning while the fish is in the study area. Additionally, this assumption implies that all detections are of live fish, that dead fish are not detected and interpreted as live, and that spurious detections (e.g., false-positive detections) are not interpreted as live fish. This assumption is violated in cases where a detection location occurs close enough below a point source of mortality (e.g., dam passage) that dead fish are detected at the downstream telemetry array. The consequence of violating this assumption is that mortality in one reach is attributed to a downstream reach.

Assumptions 5 and 6 can be formally tested using χ² goodness-of-fit tests known as Test 2 and Test 3 (Burnham et al. 1987). Both tests are implemented as a series of contingency tables. Test 2 is informally known as the "recapture test" because it assesses whether detection at an upstream array affects detections at subsequent downstream arrays (assumption 6). Test 3 is known as the "survival test" because it assesses assumption 5, that fish alive at array i have the same probability of surviving to array i + 1. With telemetry data, these tests are sometimes uninformative because detection probabilities are so high that cell frequencies are too small for valid contingency table statistics (i.e., many zero frequencies in the "nondetected" category).

Tag failure can be evaluated to formally test assumption 7, that tags do not fail prior to fish exiting the study area. A controlled tag-life study can be conducted to estimate the probability of tag failure at any point in time after tags are turned on. The methods of Townsend et al. (2006) can then be used to estimate the average probability that a tag was alive while fish were in the study area. If tags fail prior to fish exiting the study area, then information from the tag-life study can be used to adjust survival estimates for the probability of tag failure (Townsend et al. 2006).

It should be noted that nearly all telemetry studies violate most or all of these assumptions to some degree. It is logical to assume that the handling, tagging, and release protocols of a study, though carefully implemented, still have biological and behavioral effects on tagged individuals. This does not mean that inference cannot be made from the results of the study, only that researchers must report, and take care in how they present, the challenges of their data and results and place proper caveats on the potential limitations of their inference.

USING REPLICATED TELEMETRY ARRAYS TO ESTIMATE DETECTION PROBABILITY

In our simple CJS model, we were unable to estimate survival from the dam to the river mouth (S_2) without making assumptions about detection probability at the last telemetry station (p_3). To estimate p_3 and S_2, we need an additional telemetry array downstream of our final array. In some cases, another receiver could simply be installed farther downstream and the model extended accordingly to estimate S_2, p_3, and a new λ = S_3 * p_4. However, since the final station is already at the mouth of the river, our best option is to implement two independent telemetry stations at the same location. Let us assume that this detection station always consisted of two sets of independent telemetry stations, but that we did not explicitly use this information in our CJS model. The receivers at this location are so close together that we can safely assume no mortality occurs between them. As with the tailrace example above, one approach is to modify the CJS model by adding an additional reach and setting the survival probability to one. Another option is to use a separate model based on the Lincoln–Petersen method (Seber 1982) that estimates detection probability from the two monitoring stations.

The detections at each array yield a two-digit capture history defining whether fish were detected on both arrays ("ab") or on only one or the other array ("a0" or "0b"). The detection history probabilities are given in Table 3. Because we have an independent likelihood for the final monitoring station, the total likelihood for the combined CJS and dual-array model is the product of the two likelihoods (i.e., L_1 * L_2). The idea is to use the dual-array model to estimate p_3 and then feed p_3 into the likelihood for the CJS model, allowing us to estimate S_2. The advantage of combining the models is that the resulting estimates incorporate uncertainty arising from both models. This technique is analogous to the robust design of Pollock (1982).

Applying the combined CJS and dual-array model to the simulated data, we find that p̂_3a = 0.40 (SE = 0.077), p̂_3b = 0.39 (SE = 0.076), p̂_3 = 0.634 (SE = 0.083), and Ŝ_2 = 0.359 (SE = 0.064). Our estimates for p_3 and S_2 agree well with the known fates (Table 2). By adding the dual array to the telemetry system and extending the model with the second likelihood, we are now able to estimate survival of tagged fish through the entire system. We know that 34.3% of simulated fish (103 of 300) survived to the ocean, whereas our estimate using a mark–recapture model is Ŝ_1 * Ŝ_2 = 0.951 * 0.359 = 0.341, or 34.1%. This example shows how the design of the telemetry system is directly linked to the structure of the mark–recapture model.
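The dual-array bookkeeping can be sketched as follows (our code, using the array-specific detection probabilities and the CJS estimate of λ reported above):

```python
# Array-specific detection probabilities estimated from the two-digit
# capture histories ("ab", "a0", "0b") at the river mouth
p3a, p3b = 0.40, 0.39

# Probability of detection on at least one of the two independent arrays
p3 = 1 - (1 - p3a) * (1 - p3b)   # 1 - 0.60 * 0.61 = 0.634

# With p3 in hand, survival in the final reach follows from lambda = S2 * p3
lam = 0.228          # CJS estimate of the joint probability
S2 = lam / p3        # ~0.36, close to the known-fate value of 0.365

print(f"p3 = {p3:.3f}, S2 = {S2:.3f}")
```

The complement rule 1 – (1 – p_3a)(1 – p_3b) is what makes replicate arrays so effective: two individually mediocre arrays (0.40 and 0.39) combine into a respectable overall detection probability of 0.634.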

Assumptions

The primary additional assumption of the replicate-array model is that the two telemetry arrays are statistically independent. That is, fish detected by one array must constitute a random sample of the fish passing through the detection field of the second array. If the replicate arrays are not independent, then estimates of detection probability may be biased, which may in turn induce bias in estimates of survival. In the example above, this assumption can be violated if the detection fields of the dual arrays do not encompass the entire channel cross section, or if both arrays use the same power source and service is interrupted simultaneously. Perry and Skalski (2008) assess the consequences of violating the assumption of independent arrays and provide guidelines for the configuration of telemetry arrays.

PARTITIONING COMPONENTS OF SURVIVAL

Survival between the dam and the ocean arises as a function of multiple mortality processes. In our simulation, fish experienced mortality due to passage through the dam and additional mortality during migration in the river below the dam. The estimate of S2 above includes both of these processes, but what if we were interested in separating mortality due to dam passage from other sources of mortality? As previously mentioned, estimating survival to the tailrace using telemetry arrays is difficult because all tagged fish, alive and dead, may be detected at this location. The solution is to add a second release site immediately below the dam. The idea is that the tailrace release group experiences only mortality unrelated to dam passage in the river segment between the dam and the river mouth. Survival estimates from both release sites can then be used to partition S2 for the upstream release into dam and in-river components (Figure 2). This is the "paired-release" model first defined by Burnham et al. (1987) and tailored to radio telemetry by Skalski et al. (2001). Although we present a specific example of the paired-release model, the model has general application to any two groups of fish that experience some common source of mortality, with one of the groups experiencing some additional source of mortality that we wish to estimate.

Table 3. Probability of occurrence for each detection history of the dual-array model, where p3a and p3b are the detection probabilities on the first and second arrays. The probability of being detected by at least one of the arrays is p3 = 1 – (1 – p3a)(1 – p3b). Also given is the number of fish with each capture history in the simulated data set.

Detection history    Probability of occurrence (πi)    ni
ab                   p3a p3b / p3                      16
a0                   p3a (1 – p3b) / p3                25
0b                   (1 – p3a) p3b / p3                24

To implement the paired-release model for this study, we rewrite S2 as a function of its underlying components and add subscripts for clarity (Figure 2):

S2,1(dam & river) = S2,1(dam) S2,1(river)        (1)

where S2,1(dam & river) is the survival probability through reach 2 for release site 1 (see Figure 2), and "dam & river" indicates that this survival probability is comprised of mortality due to passing the dam and also due to factors unrelated to dam passage in the stretch of river below the dam. The right-hand side of Eqn. 1 separates the mortality sources into survival probabilities due to each source. Survival from the tailrace to the ocean is estimated from the second release group (S2,2(river)). If we assume S2,1(river) = S2,2(river), then Eqn. 1 can be rearranged to yield:

S2,1(dam) = S2,1(dam & river) / S2,1(river) = S2,1(dam & river) / S2,2(river)        (2)

This model now has four likelihoods and eight unique parameters: a two-reach CJS model for the upstream release (3 parameters), a one-reach CJS model for the tailrace release (1 parameter), and two dual-array models (2 parameters each) to estimate p3,i for each release location. Note that the parameter S2,1(dam) is not part of the likelihood, but rather is derived as a function of model parameters. Derived parameters are often of main interest, as illustrated in this example. However, since these parameters are not explicitly estimated, the standard error of derived parameters is usually estimated using the delta method (Seber 1982).
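Because the derived parameter is a ratio of estimated quantities, its standard error comes from a first-order (delta-method) approximation. A hedged sketch of the calculation for a generic ratio, with hypothetical inputs (not the study's joint-likelihood output) and the covariance between numerator and denominator assumed zero purely for illustration:

```python
# Delta-method SE for a derived ratio S_dam = S_dr / S_r.
# Inputs are hypothetical placeholders; a full analysis would also supply
# cov(S_dr, S_r) from the joint likelihood.
s_dr, se_dr = 0.36, 0.06   # survival over dam & river combined (assumed)
s_r, se_r = 0.46, 0.07     # in-river survival from the tailrace release (assumed)
cov = 0.0                  # cov(S_dr, S_r), assumed zero here

s_dam = s_dr / s_r

# First-order variance of a ratio a/b:
# var(a/b) ~ (a/b)^2 * [var(a)/a^2 + var(b)/b^2 - 2*cov(a,b)/(a*b)]
var_dam = s_dam**2 * ((se_dr / s_dr)**2 + (se_r / s_r)**2 - 2 * cov / (s_dr * s_r))
se_dam = var_dam**0.5

print(f"S_dam = {s_dam:.3f} (SE {se_dam:.3f})")
```

The same formula generalizes to any smooth function of the fitted parameters; only the partial derivatives change.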

The paired-release model estimates the known-fate proportions closely (Table 4) and yields insights about mortality processes affecting our simulated population. The estimates suggest that mortality in the river below the dam is greater than mortality due to dam passage.

Figure 2. Schematic of the paired-release model showing Si,j for reach i and release location j. The dashed line indicates the release location in the tailrace where survival is partitioned into dam-related and in-river components. Note that S2,1(dam) and S2,1(river) can be estimated only by assuming that S2,1(river) = S2,2(river).


Assumptions

The paired-release model is subject to the same assumptions as the CJS model plus the following additional assumption:

1) Survival is equal for the upstream and tailrace release groups between the tailrace release point and the first downstream telemetry array. Inequality between these groups can result in either positive or negative bias, depending on the direction of the inequality (see Equation 2). This assumption implies that any handling and tagging mortality has been expressed prior to release.

MULTISTATE MODELS

In our example of fish passing through a dam, we have yet to address the fact that fish passed through different routes and under different treatments of dam operations. What is the survival of fish passing through each route? What fraction of the population passes through each route? Do the treatments influence passage and survival? Multistate models offer a flexible framework in which to answer these questions (Nichols and Kendall 1995; Lebreton and Pradel 2002). We first present the general framework of multistate models, and then show how our example can be cast as a multistate model. Last, we extend the model to include different kinds of state variables and illustrate how the multistate model can be used as a basis for evaluating hypotheses.

Like CJS models, multistate models estimate survival and detection probabilities. However, multistate models also allow individuals to be stratified into groups (states) to estimate survival and detection parameters separately for each group. Individuals are allowed to move (transition) from one state to another between sampling occasions. Multistate models estimate the probability that an individual transitions from one state to another, and also the state-specific survival probabilities. For example, these models were first designed to examine movement probabilities of animals among different geographic locations and survival probabilities for each geographic location (Brownie et al. 1993; Schwarz et al. 1993).

To describe the model, we use the notation of Brownie et al. (1993), but this notation is later adapted for models specific to our simulated study. The fundamental parameters estimated by the multistate model are:

φ_i^rs = joint probability of surviving from sampling occasion i to i + 1 and moving from state r at sampling occasion i to state s at occasion i + 1.

p_i^s = probability of detection in state s at occasion i.

Table 4. Estimates of selected parameters from the simulated data set compared against the known fates of fish for the paired-release model shown in Figure 2.

Parameter      Known-fate estimate     Maximum likelihood estimate (SE)
S2,1(dam)      224 of 282 = 0.794      0.777 (0.179)
S2,1(river)    103 of 224 = 0.460      0.463 (0.068)


Given multiple states, it is convenient to express these parameters in matrix form, here using three states for simplicity:

      | φ_i^11  φ_i^12  φ_i^13 |              | p_i^1    0       0    |
φ_i = | φ_i^21  φ_i^22  φ_i^23 |   and  p_i = |  0      p_i^2    0    |
      | φ_i^31  φ_i^32  φ_i^33 |              |  0       0      p_i^3 |

When all states are sampled at every occasion and animals move among all states, all parameters are estimable except for the final interval, similar to the CJS model. In the parameterization above, φ_i^rs includes the underlying probabilities of both surviving and moving between states. To estimate the underlying parameters, the model can be reparameterized as a function of S_i^r, the probability of surviving from occasion i to i + 1 conditional on being in state r at occasion i, and Ψ_i^rs, the probability of transitioning from state r at occasion i to state s at i + 1, conditional on surviving to i + 1. Using the three-state example, the reparameterization is

S_i^r = φ_i^r1 + φ_i^r2 + φ_i^r3   and   Ψ_i^rs = φ_i^rs / S_i^r.

Note that Ψ_i^rs for a particular state s must be specified as one minus the sum of the others because the transition probabilities are constrained to sum to one.
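The reparameterization can be checked numerically: state-specific survival S_i^r is the row sum of the φ matrix, and each transition probability Ψ_i^rs is the corresponding row-normalized entry. A small sketch with illustrative values (not taken from the simulated study):

```python
# Recover S_i^r and Psi_i^rs from joint survival-movement probabilities
# phi[r][s] (row = state at occasion i, column = state at occasion i+1).
# Values are illustrative only.
phi = [
    [0.50, 0.20, 0.10],   # from state 1 (overall survival 0.80)
    [0.05, 0.60, 0.15],   # from state 2 (overall survival 0.80)
    [0.10, 0.10, 0.70],   # from state 3 (overall survival 0.90)
]

S = [sum(row) for row in phi]                           # S_i^r = sum over s of phi_i^rs
Psi = [[p / s for p in row] for row, s in zip(phi, S)]  # Psi_i^rs = phi_i^rs / S_i^r

# Transitions are conditional on survival, so each row of Psi sums to one.
for row in Psi:
    assert abs(sum(row) - 1.0) < 1e-9

print("S    =", [round(s, 3) for s in S])        # [0.8, 0.8, 0.9]
print("Psi1 =", [round(p, 3) for p in Psi[0]])   # [0.625, 0.25, 0.125]
```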

The Route-Specific Survival Model

The route-specific survival model introduced by Skalski et al. (2002) is a multistate model designed to estimate the proportion of fish passing through each route at a dam, and then survival conditional on the route of passage. In our study, we have two passage routes, yielding the model shown in Figure 3.

The route-specific survival model requires replicate telemetry arrays within each passage route in order to estimate route-specific detection probabilities. With only a single detection array in each route, some fish will pass through a route undetected; although detected downstream, the route of passage of these fish is unknown. Without replicate telemetry arrays, we can estimate the average detection probability over both routes, but if detection probabilities differ between routes, then estimates of the proportion of fish passing through each route will be biased. By installing two sets of antennas in each passage route, the detection probability of each route can be estimated, allowing the remaining parameters to be estimated without bias.

Assumptions

Multistate models are subject to the same assumptions as the CJS model plus two additional assumptions:

1) In order to separate state-specific survival from state-transition probabilities, we must assume that all mortality occurs first and then fish transition "instantaneously" at the end of the sampling period. For the route-specific model described above, this assumption is fulfilled because mortality occurs upstream of the dam, before fish pass (transition) through either the spillway or turbines. An example where this assumption may be violated is when a river splits into multiple channels and the goal is to estimate the probability of entering a given river channel. In this case, if monitoring stations are located considerably downstream of the river junction, transition probabilities may be biased if survival probabilities differ between the two channels downstream of the junction (see Perry 2010).

2) There is no error in assignment of states to individuals. For example, for the route-specific model described above, we assume that no fish are wrongly assigned to a passage route.

TAILORING MARK–RECAPTURE MODELS TO STUDY GOALS

Armed with the suite of models discussed thus far, we have the ability to tailor the telemetry system and mark–recapture model to answer specific questions in a study. In our simulated study, the dam was operated at high and low spill discharge implemented over six 48-h treatment periods. The primary objective is to determine dam operations that maximize rates of passage through the safest possible route. A priori, it is assumed that survival through turbines is lower than through the spillway. Therefore, dam operations may affect survival at the population level by influencing the proportions of fish passing through each route. We also suspect a temporal component to survival, with fish passing the dam later in the study having lower survival to the ocean. We would like to quantify this relationship to link migration delay above the dam to survival below the dam.

Figure 3. Schematic of the route-specific survival model (Skalski et al. 2002) showing route passage probabilities (Ψ) for the turbines (Tu) and spillway (Sp), route-specific detection probabilities (pTu, pSp), and survival from the dam to the ocean conditional on having passed through the turbines (STu) or spillway (SSp). Route-specific detection probabilities are estimated by the dual-array model presented earlier.

How should we structure the model to assess these hypotheses? One can envision a number of ad hoc approaches to address the questions at hand. The first that comes to mind is constructing separate models for each of the six release groups above the dam. Most of the fish pass within a given treatment period, but some do not. This approach would likely capture major differences among treatments, but the estimates are not unique to a given treatment and represent a mixture of fish passing during multiple treatments. Another approach is to construct a separate model for each treatment, assigning fish to a treatment based on when they pass the dam. This approach is undesirable because we must exclude fish that are not detected in passage routes, and estimating S1 is impossible (by definition, only fish surviving to a passage route are included).

The answer is to generalize the multistate model to represent different kinds of state variables. State variables need not represent geographical locations. States can be defined as any classification variable that can be identified at the time of recapture. For example, multistate models have been used to estimate transitions among weight classes (Letcher and Horton 2009) and between breeding and nonbreeding status (Nichols et al. 1994). In our case, we define an additional state variable representing each 48-h time period. Thus, fish detected in a passage route were assigned to one of the six 48-h time periods; the low-spill treatment occurred during time periods 1, 4, and 5 and the high-spill treatment during the other time periods (see Figure 2 in Section 9.1). Although fish not detected in passage routes cannot be assigned to one of the six time periods, the probability function for this detection history incorporates the possibility that fish could have passed during any of the six time periods.

This design yields 12 unique combinations of the state variables (2 routes × 6 time periods), and thus 12 unique transition and survival probabilities. We must also construct 12 dual-array likelihoods to estimate the detection probabilities for each route in each time period. Next, we want to estimate the fraction of the population that passes through each route during each time period. The branching process in Figure 4 shows how this model can be structured. The parameter τt estimates the probability of fish passing the dam during time period t, ΨTu,t estimates the probability of passing through the turbines given that the fish passed the dam during time period t, and STu,t is the probability of surviving from the dam to the ocean given passage through the turbines during time period t.
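The branching structure implies that, conditional on passing the dam, the probability of each route-by-period combination is the product τt × Ψr,t along the branch. A sketch with hypothetical values for τ and ΨTu (not estimates from the study), confirming that the 12 cells partition the passage event:

```python
# Branch probabilities for the route-by-period multistate model.
# Hypothetical values: six 48-h periods, two routes (Tu = turbine, Sp = spillway).
tau = [0.30, 0.25, 0.15, 0.12, 0.10, 0.08]      # P(pass dam in period t); sums to 1
psi_tu = [0.77, 0.22, 0.22, 0.77, 0.77, 0.22]   # P(turbine | passed in period t)

# Joint probability of each route-by-period cell, conditional on dam passage.
cells = {}
for t, (tt, pt) in enumerate(zip(tau, psi_tu), start=1):
    cells[("Tu", t)] = tt * pt
    cells[("Sp", t)] = tt * (1 - pt)

# The 12 cells partition the event "passed the dam", so they sum to one.
assert abs(sum(cells.values()) - 1.0) < 1e-9
print(f"P(turbine, period 1) = {cells[('Tu', 1)]:.3f}")
```

Multiplying each cell by S1 upstream and by the route-and-period survival downstream gives the full detection-history probabilities used in the likelihood.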

At this point in designing the model, STu,t and SSp,t include both mortality due to dam passage and in-river mortality below the dam. Therefore, we included the second release in the tailrace so that we could estimate STu,t(dam) and STu,t(river) using a paired-release design. Of primary concern here is whether mortality due to dam passage differs between routes.

This model now has 68 parameters, many more than previous models: the multistate model for the upstream release has 24 parameters, the CJS models for fish released below the dam have 6 parameters, and the replicate-array models estimate 38 parameters. Since parameters are estimated for many subgroups, precision of parameters under this full model will be poor. Furthermore, we would like to evaluate hypotheses about the effect of dam operations on survival and use of passage routes. We will use model selection approaches based on Akaike's Information Criterion (AIC) to both improve parameter precision and evaluate biological hypotheses (Burnham and Anderson 2002). Among candidate models, the model with the smallest AIC value represents the model with the most favorable tradeoff between precision and accuracy of the estimates (i.e., overfitting versus underfitting). The difference in AIC values (ΔAIC) between two models represents the degree of support for one model over another. As a general rule of thumb, ΔAIC < 2 indicates little or no evidence that either model is more appropriate than the other (Burnham and Anderson 2002).
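AIC is computed directly from the negative log likelihood (NLL) and the parameter count K as AIC = 2K + 2·NLL, with ΔAIC measured from the smallest value. A quick check against a few rows of Table 5 (small discrepancies in the last decimal arise from rounding of the reported NLL values):

```python
# AIC = 2*K + 2*NLL; Delta-AIC is measured from the smallest AIC.
# K and NLL values taken from Table 5 of the text.
models = {
    "Full": (63, 107.4),
    "p(.)": (35, 120.1),
    "best": (17, 127.1),   # p(.), S(exp(t)), S_route(.), Psi_Tu(treat)
}

aic = {name: 2 * k + 2 * nll for name, (k, nll) in models.items()}
best = min(aic.values())
delta = {name: a - best for name, a in aic.items()}

for name in models:
    print(f"{name}: AIC = {aic[name]:.1f}, dAIC = {delta[name]:.1f}")
```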

To compare models using AIC, we constructed a number of reduced models that represent different hypotheses about the parameters. For example, detection probabilities are estimated for each route and time period, but if detection probabilities do not vary over time, then estimating a single detection probability for all time periods will improve precision by pooling information over all time periods. Comparing the AIC of the full model against that of a model with detection probability constant over time provides evidence of support for one hypothesis over the other.

We used a step-down approach to form and compare models (Lebreton et al. 1992). First, we evaluated whether detection probabilities varied by time period or release location. We compared the full model, where all detection probabilities were estimated separately for each time period and release location, to a model where detection probabilities for each route were constant over time and among release locations (denoted model "p(.)"; see Table 5). The best-fit model for detection probability was then used as a basis for evaluating the remaining hypotheses. Next, we compared three models for the tailrace release group to evaluate whether survival in the reach below the dam was constant over time, unique for each time period, or declined exponentially over time (i.e., S = e^(–rt)). Using the best-fit model based on detection probabilities and downriver survival, we then evaluated whether survival for each passage route was 1) constant over time, 2) constant within each treatment but different between treatments, or 3) unique for each time period. Finally, using the best-fit model at this stage, we evaluated the same three hypotheses for passage probabilities (i.e., constant for each route, treatment-specific, or time-specific). We then present the estimates for the best-fit overall model.

Figure 4. Schematic of the multistate model for estimating passage (Ψ) and route-specific survival (STu and SSp) probabilities conditional on the time period of dam passage. In the figure, τ6 = 1 – (τ1 + τ2 + ... + τ5).

The AIC-best model (Table 5) included constant detection probabilities at each location, survival below the dam declining exponentially with time (Figure 5), no effect of treatments on route-specific survival, but differences between treatments in the proportion of fish passing through the turbines. The probability of passing through the turbines (ΨTu) was much higher during the low-spill treatment than during the high-spill treatment, and the estimates match the known fates well (Table 6). Survival was lower through the turbines than through the spillway. Although some of the parameter estimates differ considerably from the known fates, the standard errors are large and the known fates fall well within the 95% confidence intervals (i.e., the estimate ± 1.96 standard errors). The exponential rate of decline in downstream survival also fit the known-fate data quite well (Figure 5). Last, the best-fit model selected by AIC followed the structure of the true model from which the data were generated.

Dam-passage survival can be estimated as a function of the route-specific survival and passage probabilities as

S2,1(dam) = ΨTu STu,(dam) + (1 – ΨTu) SSp,(dam).

In our simulated population, the estimate of S2,1(dam) was 0.671 (SE = 0.105) during the low-spill treatment, but 0.906 (SE = 0.123) during the high-spill treatment. Since route-specific survival did not differ between treatments, the lower overall survival during the low-spill treatment can be attributed to a higher fraction of tagged fish passing through the turbines during this treatment.
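The treatment-specific point estimates quoted above can be reproduced from the Table 6 estimates as a passage-weighted average of the route-specific survival probabilities (point estimates only; the reported SEs require the delta method with the full covariance matrix, which is not sketched here):

```python
# Dam-passage survival as a passage-weighted average of route-specific survival:
# S_dam = Psi_Tu * S_Tu(dam) + (1 - Psi_Tu) * S_Sp(dam).
# Point estimates from Table 6.
s_tu, s_sp = 0.575, 0.998           # route-specific dam-passage survival
psi_low, psi_high = 0.773, 0.217    # P(turbine passage) by spill treatment

s_dam_low = psi_low * s_tu + (1 - psi_low) * s_sp
s_dam_high = psi_high * s_tu + (1 - psi_high) * s_sp

print(f"low spill:  S_dam = {s_dam_low:.3f}")    # 0.671
print(f"high spill: S_dam = {s_dam_high:.3f}")   # 0.906
```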

OTHER COMPONENTS OF MARK-RECAPTURE MODELING

In this Section, we have illustrated the flexibility of using mark–recapture models to analyze telemetry data. The examples provided here offer a launching pad for developing mark–recapture models to apply to your own telemetry studies. However, given the introductory nature of this Section, we have yet to discuss a number of topics that every practitioner of mark–recapture modeling must address. In closing, we briefly identify some of these topics so integral to mark–recapture modeling.

Table 5. Model selection statistics for evaluating different hypotheses about detection (p), survival (S), and routing probabilities (Ψ). Statistics are as follows: NLL = negative log likelihood, AIC = Akaike's Information Criterion, and ΔAIC = difference in AIC between model i and the lowest-AIC model.

Model                                                        Number of parameters   NLL     AIC     ΔAIC
Full                                                         63                     107.4   340.9   52.7
p(.)                                                         35                     120.1   310.1   21.9
p(.), S2,2(river)(.)                                         30                     137.4   334.9   46.7
p(.), S2,2(river)(exp(t))                                    30                     120.6   301.2   13.0
p(.), S2,2(river)(exp(t)), Sroute,(dam)(treat)               23                     124.4   294.8   6.6
p(.), S2,2(river)(exp(t)), Sroute,(dam)(.)                   21                     124.4   290.8   2.6
p(.), S2,2(river)(exp(t)), Sroute,(dam)(.), ΨTu(treat)       17                     127.1   288.2   0.0
p(.), S2,2(river)(exp(t)), Sroute,(dam)(.), ΨTu(.)           16                     165.0   362.0   73.8

Evaluating Goodness of Fit

All models are capable of producing estimates, but not all estimates are good estimates in the sense of optimizing the balance between precision and bias. In our model selection example, we used AIC to identify the "best" model, but we still need to determine whether the best model is a good one. Evaluating goodness of fit is necessary to assess how well the model fits the data. Lack of fit may be induced by overdispersion or by failure of model assumptions. Overdispersion is defined as more variability in the data than is expected given the model structure and is common when the parameter values depend on some state or trait that is not incorporated in the model. Typically, when data are overdispersed, estimates from a mark–recapture model will be unbiased, but variance estimates will tend to be negatively biased. Therefore, it is important to test for overdispersion and adjust (inflate) the variance when overdispersion has been detected. There are a number of methods for estimating overdispersion (Lebreton et al. 1992; White et al. 2001), but most involve estimation of ĉ, the variance inflation factor. When the data fit the model's structure perfectly, ĉ = 1.0. Lack of fit is indicated by ĉ > 1, in which case the common practice is to inflate the variance, as well as model selection statistics, by a factor of ĉ (Burnham and Anderson 2002).

Figure 5. Known-fate proportions surviving to the ocean for each tailrace release group (circles) compared to the fitted survival function from the best-fit model (line).

Table 6. Maximum likelihood estimates (MLE) of selected parameters from the lowest-AIC model from the simulated data set compared against the known fates of fish.

Parameter         Known-fate estimate     MLE (SE)
ΨTu, low spill    90 of 117 = 0.769       0.773 (0.044)
ΨTu, high spill   36 of 165 = 0.218       0.217 (0.034)
STu,(dam)         84 of 126 = 0.667       0.575 (0.121)
SSp,(dam)         140 of 156 = 0.897      0.998 (0.146)
r                 0.005                   0.006 (0.0007)
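In practice, the variance-inflation adjustment amounts to simple scaling: standard errors are inflated by the square root of the estimated inflation factor, and quasi-likelihood model selection statistics (e.g., QAIC) divide the log-likelihood term by it. A hedged sketch with hypothetical values:

```python
import math

# Overdispersion adjustment: inflate variances by c_hat and use a
# quasi-likelihood criterion (QAIC) for model selection.
# All values below are hypothetical.
c_hat = 1.8          # estimated variance inflation factor (> 1 suggests lack of fit)
se = 0.064           # model-based standard error of a survival estimate
k, nll = 17, 127.1   # parameter count and negative log likelihood

se_adj = se * math.sqrt(c_hat)    # SE inflated by sqrt(c_hat)
qaic = 2 * k + 2 * nll / c_hat    # log-likelihood term scaled by c_hat

print(f"adjusted SE = {se_adj:.3f}, QAIC = {qaic:.1f}")
```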

Mark–Recapture Software

Most biologists lack the mathematical and computer programming experience required to construct, fit, and diagnose mark–recapture models of practical scale without the help of automated software routines. Fortunately, computer software packages are available for constructing models and fitting them to the data. A good software program should 1) be flexible (allow custom models to be built), and 2) provide standard errors and profile likelihood confidence intervals of parameter estimates (see Lebreton et al. 1992), including derived parameters. Integrated diagnostics and model selection criteria are also preferred. In our examples we used the software program USER (Lady and Skalski 2009) to fit the models to the data (i.e., maximize the likelihood function) and obtain estimates for derived parameters (e.g., S2,1(dam)). USER is a flexible environment for model development because the model structure and parameters are completely user-specified. Thus, any custom model can be tailored to a specific telemetry study and then fit to the data. However, complete specification of model likelihoods can be a cumbersome task and requires a minimum level of quantitative savvy, especially for large, complex models.

Perhaps the most common mark–recapture software program is MARK (White and Burnham 1999). MARK has many well-developed model structures, including diagnostic tools, goodness-of-fit tests, and model selection criteria (White et al. 2001). However, defining the structure of complex or unique models can be cumbersome or impossible with MARK. Recent development of the R package RMark (Laake and Rexstad 2011) has improved the ability to build models and perform model selection with MARK, but requires some knowledge of the R language.

MARK and USER are simply two of several available software packages that have enhanced the field of mark–recapture modeling by simplifying the cumbersome process of model building and fitting. Behind every graphical user interface, however, is a host of complex processes that affect the output. Practitioners should be familiar with likelihood estimation (including variances and confidence intervals), model selection, and linear model terminology. We discourage use of software without an understanding of the underlying statistical theory, such as optimization routines, link functions, model selection criteria, and goodness-of-fit tests. When used appropriately, these tools can free biologists from the time-consuming task of model construction and allow them to focus on the biological hypotheses behind the models.

Sample Size and Precision

Determining the sample size required to achieve some desired level of precision is an important aspect of study design, particularly for telemetry studies where scarce resources must be allocated between transmitters and telemetry monitoring stations. So then, how many fish need to be tagged to obtain a given level of precision? The answer depends on the model structure, the number of sampling occasions, the true values of the parameters, and biological variation in the parameters. For example, lower detection probabilities require larger sample sizes to obtain a given level of precision.

To determine sample sizes, simulation studies can be used to estimate the precision of estimated parameters (Devineau et al. 2006). The first step is to assume a set of detection and survival probabilities that might be obtained in a given study. These values can be obtained from a pilot study, literature search, or expert opinion. Next, the expected values of the capture history frequencies are calculated based on a given sample size and the assumed detection and survival probabilities. Finally, the mark–recapture model is applied to this simulated data set to yield the expected standard errors of survival and detection probabilities.
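As a rough first pass before running a formal simulation, the effective sample size for a survival estimate scales with N × p, giving a back-of-the-envelope binomial approximation to the expected standard error. This is only a screening calculation under assumed parameter values, not the expected-information computation that SampleSize or MARK's simulation module performs:

```python
import math

# Back-of-the-envelope precision screen: with detection probability p,
# roughly N*p tagged fish contribute information about survival S, so the
# expected SE of S-hat is approximated by a binomial SE with effective
# sample size N*p. Parameter values are hypothetical.
def approx_se(n_tagged, survival, detection):
    n_eff = n_tagged * detection
    return math.sqrt(survival * (1 - survival) / n_eff)

for n in (100, 300, 1000):
    print(f"N = {n:4d}: approx SE = {approx_se(n, 0.8, 0.6):.3f}")
```

Quadrupling the release size roughly halves the expected standard error, which is the basic trade-off a formal sample-size analysis quantifies exactly.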

Fortunately, a number of software programs exist to facilitate sample size and precision analysis. The software program SampleSize (Lady et al. 2003) is a simple-to-use program for determining precision and sample size with CJS and paired-release models. Program MARK also contains a simulation module that can be used for conducting sample size analysis for many of the models implemented in MARK. These software packages allow users to quickly run a number of different scenarios that vary parameter values, model structures, and sample sizes to develop a robust study design likely to achieve desired levels of precision.

Modeling Biological Hypotheses

In our application of the route-specific survival model to the simulated fish passage study, we introduce the idea of fitting mark–recapture models that represent different biological hypotheses. For example, model selection results showed strong evidence that riverine survival was not constant, but declined exponentially over time (Table 5; Figure 5). This conclusion may lead biologists to further evaluate time-dependent mortality processes, such as physiological condition, water temperature, or changes in predator densities.

Hypothesis-driven model selection begins with a general model (i.e., the full model) as a representation of the underlying biological system. We then develop a set of reduced models, each representing a specific set of constraints. Each constraint represents an alternative view (i.e., hypothesis) of the biological system. Identifying candidate models requires knowledge of the biological system, and is therefore not a statistical exercise (Lebreton et al. 1992).

Reduced models may estimate parameters as functions of categorical or continuous covariates, such as rearing history or size at release. The simplest constraints involve categorical covariates that are fixed throughout the study. This approach would be used, for example, to determine whether rearing history (e.g., hatchery, wild) has a "significant" effect on survival. Multistate models offer the ability to evaluate similar hypotheses when the categories (i.e., states) can change for a given fish during the study. For example, in our route-specific survival model, the route state for an individual fish could not be known at the time of release. Further, the route assignment was never known for fish that passed the dam undetected. Still, model selection in the multistate framework allowed us to test the hypothesis that survival differed among passage routes (Table 5). In this way, multistate models are a powerful tool for examining the effects of categorical state variables on demographic parameters.

Categorical predictor variables can also represent characteristics of the sampling occa-

sion, rather than of the tagged sh. In a classic example, Lebreton et al. (1992) showed that

474

Section 9.2

survival of European dippers Cinclus cinclus was lower during ood years versus nonood

years. Analogous examples for migrating salmon might be to compare survival through river

reaches that are freshwater versus estuarine, or through industrialized versus natural habitats.

In the dipper example, sampling took place at relatively regular intervals (annually). However, telemetry receivers are rarely uniformly spaced throughout a study system. Therefore, to compare survival appropriately among individual river reaches, reach-specific survival estimates must be standardized (by reach length) within the likelihood. An intuitive approach is to estimate instantaneous survival rates instead of reach-specific survival probabilities, allowing direct comparison and hypothesis testing. Other standardized measures (e.g., reach-specific survival or survival rate per river km) can then be derived from the instantaneous rates.
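To make the standardization concrete: if survival over a reach of length d km is S, the implied instantaneous mortality rate is lam = -ln(S)/d, and the survival rate per km is exp(-lam) = S^(1/d). A minimal sketch, with hypothetical reach names, survival estimates, and lengths:

```python
import math

# Hypothetical reach-specific survival estimates and reach lengths (km).
reaches = {
    "upper":  (0.95, 12.0),   # (survival over reach, reach length in km)
    "middle": (0.88, 35.0),
    "lower":  (0.90, 20.0),
}

for name, (S, d) in reaches.items():
    lam = -math.log(S) / d        # instantaneous mortality rate per km
    s_per_km = math.exp(-lam)     # equivalently S ** (1.0 / d)
    print(f"{name}: lambda = {lam:.5f} per km, "
          f"survival per km = {s_per_km:.4f}")
```

With these hypothetical numbers, the middle reach has the lowest total survival but the highest per-km survival of the three reaches, which illustrates why length standardization matters before comparing reaches.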

Survival is a key demographic parameter that describes the dynamics of a population. Combined with other data sources and analyses (e.g., time-to-event analysis in Section 9.1), telemetry and mark–recapture models can be used to understand the influence of external processes on populations (e.g., hydrosystem regulation, habitat alteration, artificial rearing, water quality improvements). These analytical frameworks provide powerful tools for analyzing telemetry data, and we hope that our introduction to these tools encourages more biologists to consider them as an integral component of telemetry studies.

REFERENCES

Brownie, C., J. E. Hines, J. D. Nichols, K. H. Pollock, and J. B. Hestbeck. 1993. Capture-recapture studies for multiple strata including non-Markovian transitions. Biometrics 49:1173–1187.

Burnham, K. P., D. R. Anderson, G. C. White, C. Brownie, and K. H. Pollock. 1987. Design and analysis methods for fish survival experiments based on release-recapture. American Fisheries Society, Monograph 5, Bethesda, Maryland.

Burnham, K. P., and D. R. Anderson. 2002. Model selection and multimodel inference: a practical information-theoretic approach. Springer, New York.

Cormack, R. M. 1964. Estimates of survival from the sighting of marked animals. Biometrika 51:429–438.

Devineau, O., R. Choquet, and J. D. Lebreton. 2006. Planning capture-recapture studies: straightforward precision, bias, and power calculations. Wildlife Society Bulletin 34:1028–1035.

Jolly, G. M. 1965. Explicit estimates from capture-recapture data with both death and immigration-stochastic model. Biometrika 52:225–247.

Laake, J., and E. Rexstad. 2011. RMark—an alternative approach to building linear models in MARK (Appendix C). In E. Cooch and G. White, editors. Program MARK: a gentle introduction, 9th edition. [Online]. Available: http://www.phidot.org/software/mark/docs/book/ (September 2011).

Lady, J. M., and J. R. Skalski. 2009. USER 4: user-specified estimation routine. Available: http://www.cbr.washington.edu/paramest/docs/user/UserManual/UserManual.pdf (August 2011).

Lady, J. M., P. Westhagen, and J. R. Skalski. 2003. SampleSize 1.1: sample size calculations for fish and wildlife survival studies. Available: http://www.cbr.washington.edu/paramest/docs/samplesize/manual/user.pdf (April 2012).

Lebreton, J. D., K. P. Burnham, J. Clobert, and D. R. Anderson. 1992. Modeling survival and testing biological hypotheses using marked animals: a unified approach with case studies. Ecological Monographs 62:67–118.

Lebreton, J. D., and R. Pradel. 2002. Multistate recapture models: modeling incomplete individual histories. Journal of Applied Statistics 29:353–369.

Letcher, B. H., and G. E. Horton. 2009. Seasonal variation in size-dependent survival of juvenile Atlantic salmon (Salmo salar): performance of multistate capture-mark-recapture models. Canadian Journal of Fisheries and Aquatic Sciences 65:1649–1666.


McCormick, S. D., L. P. Hansen, T. P. Quinn, and R. L. Saunders. 1998. Movement, migration, and smolting of Atlantic salmon (Salmo salar). Canadian Journal of Fisheries and Aquatic Sciences 55:77–92.

Nichols, J. D., and W. L. Kendall. 1995. The use of multi-state capture–recapture models to address questions in evolutionary ecology. Journal of Applied Statistics 22:835–846.

Nichols, J. D., J. E. Hines, K. H. Pollock, R. L. Hinz, and W. A. Link. 1994. Estimating breeding proportions and testing hypotheses about costs of reproduction with capture-recapture data. Ecology 75:2052–2065.

Perry, R. W. 2010. Survival and migration dynamics of juvenile Chinook salmon (Oncorhynchus tshawytscha) in the Sacramento-San Joaquin River Delta. Doctoral dissertation. University of Washington, Seattle, Washington.

Perry, R. W., and J. R. Skalski. 2008. The design and analysis of salmonid tagging studies in the Columbia Basin. Volume XXIII: effects of array configuration on statistical independence of replicate telemetry arrays used in smolt survival studies. Technical report to BPA, Project No. 198910700. U.S. Department of Energy, Division of Fish and Wildlife, Portland, Oregon.

Pollock, K. H. 1982. A capture-recapture design robust to unequal probability of capture. Journal of Wildlife Management 46:757–760.

Schwarz, C. J., J. F. Schweigert, and A. N. Arnason. 1993. Post-release stratification in band-recovery models. Biometrics 44:765–785.

Seber, G. A. F. 1965. A note on the multiple recapture census. Biometrika 52:249–259.

Seber, G. A. F. 1982. The estimation of animal abundance. Macmillan, New York.

Skalski, J. R., S. G. Smith, R. N. Iwamoto, J. G. Williams, and A. Hoffmann. 1998. Use of PIT-tags to estimate survival of migrating juvenile salmonids in the Snake and Columbia Rivers. Canadian Journal of Fisheries and Aquatic Sciences 55:1484–1493.

Skalski, J. R., J. Lady, R. Townsend, A. Giorgi, J. R. Stevenson, C. M. Peven, and R. D. McDonald. 2001. Estimating in-river survival of migrating salmonid smolts using radiotelemetry. Canadian Journal of Fisheries and Aquatic Sciences 58:1987–1997.

Skalski, J. R., R. Townsend, J. Lady, A. Giorgi, J. R. Stevenson, C. M. Peven, and R. D. McDonald. 2002. Estimating route-specific passage and survival probabilities at a hydroelectric project from smolt radiotelemetry studies. Canadian Journal of Fisheries and Aquatic Sciences 59:1385–1393.

Townsend, R. L., J. R. Skalski, P. Dillingham, and T. W. Steig. 2006. Correcting bias in survival estimation resulting from tag failure in acoustic and radiotelemetry studies. Journal of Agricultural, Biological, and Environmental Statistics 11:183–196.

White, G. C., and K. P. Burnham. 1999. Program MARK: survival estimation from populations of marked animals. Bird Study 46(Supplement):120–138.

White, G. C., K. P. Burnham, and D. R. Anderson. 2001. Advanced features of Program MARK. Pages 368–377 in R. Field, R. J. Warren, H. Okarma, and P. R. Sievert, editors. Wildlife, land, and people: priorities for the 21st century. Proceedings of the Second International Wildlife Management Congress. The Wildlife Society, Bethesda, Maryland.