Feasible Shared Destiny Risk Distributions
Thibault Gajdos, John A. Weymark, and Claudio Zoli
January 2018
Abstract Social risk equity is concerned with the comparative evaluation of social risk distributions, which are probability distributions over the potential sets of fatalities. In the approach to the evaluation of social risk equity introduced by Gajdos, Weymark, and Zoli (Shared destinies and the measurement of social risk equity, Annals of Operations Research 176:409–424, 2010), the only information about such a distribution that is used in the evaluation is that contained in a shared destiny risk matrix whose entry in the kth row and ith column is the probability that person i dies in a group containing k individuals. Such a matrix is admissible if it satisfies a set of restrictions implied by its definition. It is feasible if it can be generated by a social risk distribution. It is shown that admissibility is equivalent to feasibility. Admissibility is much easier to directly verify than feasibility, so this result provides a simple way to identify which matrices to consider when the objective is to socially rank the feasible shared destiny risk matrices.
Keywords social risk evaluation, social risk equity, public risk, shared destinies
JEL classification numbers D63, D81, H43.
Thibault Gajdos
CNRS and Laboratoire de Psychologie Cognitive, Aix-Marseille University, Bâtiment 9 Case D, 3 place Victor Hugo, 13331 Marseille Cedex 3, France
e-mail: thibault.gajdos@univ-amu.fr
John A. Weymark
Department of Economics, Vanderbilt University, VU Station B #35189, 2301 Vanderbilt Place,
Nashville, TN 37235-1819, USA
e-mail: john.weymark@vanderbilt.edu
Claudio Zoli
Department of Economics, University of Verona, Via Cantarane 24, 37129 Verona, Italy
e-mail: claudio.zoli@univr.it
1 Introduction
Governments routinely implement policies that affect the risks that a society faces.
For example, barriers are installed to lessen the risk of a terrorist driving a vehicle
into pedestrians, dikes are built to reduce the risk of flooding, and carbon taxes
are imposed to slow down the rise in the temperature of the Earth’s atmosphere so
as to reduce the likelihood of the serious harms that result from climate change.
Policies differ in the degree to which they change the expected aggregate amount of
a harm and how it is distributed across the population. A consequentialist approach
to evaluating the relative desirability of different policies that affect these kinds of
social risks does so by ranking the possible distributions of the resulting harms. If
this ranking, or an index representing it, takes account of the equity of the resulting
distribution of risks, it is a measure of social risk equity.
The measurement of social risk equity has its origins in the work of Keeney
(1980a,b,c). The analysis of social risk equity has been further developed by Broome
(1982), Fishburn (1984), Fishburn and Sarin (1991), Fishburn and Straffin (1989),
Gajdos et al (2010), Harvey (1985), Keeney and Winkler (1985), and Sarin (1985),
among others. While these analyses apply to any kind of social harm, for the most
part, the harm that they consider is death. Our analysis also applies to any socially
risky situation in which a harm may affect some, but not necessarily all, of the
society in question but, for concreteness, we, too, suppose that this harm is death.
The set of individuals who die as a result of their exposure to the risk is a fatality
set and a social risk distribution is a probability distribution over all of the possible
fatality sets. Social risk distributions are ranked using a social risk equity preference
ordering. Not all of the information about a social risk distribution may be regarded
as being relevant when determining the social preference relation. For example,
Fishburn and Straffin (1989), Keeney and Winkler (1985), and Sarin (1985) only
take account of the risk profiles for individuals and for fatalities. The former lists the
likelihoods of each person dying, whereas the latter is the probability distribution
over the number of fatalities. These statistics can be computed from a social risk
distribution, but in doing so, some information is lost.
Gajdos et al (2010) propose also taking into account a concern for shared destinies; specifically, with the number of other individuals with whom someone perishes. Chew and Sagi (2012) describe this concern as being one of ex post fairness.
For example, for a given probability of there being k fatalities, it might be socially
desirable to have this risk spread more evenly over the individuals. As Example 3
in Gajdos et al (2010) demonstrates, it is possible for the distribution of how many
people someone dies with to differ in two social risk distributions even though the
risk profiles for individuals and for fatalities are the same in both distributions. As a
consequence, a concern for shared destinies cannot be fully captured if one restricts
attention to the information provided by the likelihoods of each person dying and
the probability distribution over the number of fatalities.¹

¹ There are other dimensions of social risk equity that may be of concern, such as dispersive equity and catastrophe avoidance. There is a concern for dispersive equity if account is taken of individual characteristics such as gender, race, or geographic location in addition to the individuals' exposures to social risks. See Fishburn and Sarin (1991) for an analysis of the evaluation of social risks that allows for dispersive equity. Bommier and Zuber (2008), Fishburn (1984), Fishburn and Straffin (1989), Harvey (1985), and Keeney (1980a) consider social preferences for catastrophe avoidance. We do not examine dispersive equity or catastrophe avoidance here.
The distribution of shared destiny risks can be expressed using a shared destiny
risk matrix whose entry in the kth row and ith column is the probability that person
i dies in a group containing k individuals. In the approach developed by Gajdos
et al (2010), if two social risk distributions result in the same shared destiny risk
matrix, they are regarded as being socially indifferent. Because the risk profiles for
individuals and for fatalities can be computed from the information contained in a
shared destiny risk matrix, their approach to evaluating social risks can take account
of these two risk profiles, not just a concern for shared destinies. In effect, in their
approach to social risk evaluation, ranking social risk distributions is equivalent to
ranking shared destiny risk matrices.
The entries of a shared destiny risk matrix are probabilities, and so all lie in the interval [0,1]. There are three other independent properties that such a matrix must necessarily satisfy as a matter of definition: (i) nobody's probability of dying can exceed 1, (ii) the probability that there are a positive number of fatalities cannot exceed 1, and (iii) nobody can have a probability of dying in a group of size k that exceeds the probability that there are k fatalities. A shared destiny risk matrix that
satisfies these properties is said to be admissible. Starting with a social risk distribution, we can compute the entries in the corresponding shared destiny risk matrix. A shared destiny risk matrix that can be generated in this way from a social risk distribution is said to be feasible. A feasible shared destiny risk matrix is necessarily
admissible. The question we address is whether there are admissible shared destiny
risk matrices that are not feasible. We show that there are not. Thus, a shared destiny
risk matrix is admissible if and only if it is feasible.
In order to establish this result, we develop an algorithm that shows how to construct a social risk distribution from a shared destiny risk matrix in such a way that the resulting distribution can be used to generate the matrix. It is easy to determine if a shared destiny risk matrix is admissible but, as our algorithm makes clear, confirming that it is also feasible by finding a social risk distribution that generates it
may be a formidable undertaking. However, if our objective is only to socially rank
the feasible shared destiny risk matrices, our result tells us that this is equivalent
to socially ranking the admissible shared destiny risk matrices. We do not need to
know how to generate these matrices from social risk distributions in order to know
that they are feasible; we only need to know that they are admissible.
In Section 2, we introduce the formal framework used in our analysis. The algorithm employed to determine a social risk distribution that generates a given admissible shared destiny risk matrix is presented in Section 3. We illustrate the operation of this algorithm in Section 4. We prove that a shared destiny risk matrix is admissible if and only if it is feasible in Section 5.
2 Shared Destiny Risk Matrices
There is a society of n ≥ 2 individuals who face a social risk. Let N = {1, ..., n} be the set of these individuals. A fatality set is a subset S ⊆ N consisting of the set of individuals who ex post die as a consequence of the risk that this society faces. There are 2^n possible fatality sets, including ∅ (nobody dies) and N (everybody dies). A social risk distribution is a probability distribution p on 2^N, with p(S) denoting the ex ante probability that the fatality set is S. We suppose that only this probability distribution is relevant for the purpose of social risk evaluation. The set of all such probability distributions is P.
For each k ∈ N, let M(k) ∈ [0,1]^n denote the vector whose ith component M(k,i) is the ex ante probability that person i will die when there are exactly k fatalities. A shared destiny risk matrix is an n × n matrix M whose kth row is M(k). Let n̄(k) be the number of positive entries in M(k). The risk profile for individuals is the vector α ∈ [0,1]^n, where

\[
\alpha(i) = \sum_{k=1}^{n} M(k,i), \quad i \in N, \tag{1}
\]

which is the ex ante probability that person i will die. The risk profile for fatalities is the vector β ∈ [0,1]^n, where

\[
\beta(k) = \frac{1}{k} \sum_{i=1}^{n} M(k,i), \quad k \in N, \tag{2}
\]

which is the ex ante probability that there will be exactly k fatalities. Note that a risk profile for fatalities does not explicitly specify the probability that nobody dies. The probability that there are no fatalities is 1 − ∑_{k=1}^{n} β(k).
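As a concrete illustration of (1) and (2), the following minimal Python sketch computes the two risk profiles from a shared destiny risk matrix stored as a list of rows. The data layout and function name are our own illustrative choices, not notation from the paper; the demonstration values express probabilities in percent, following the convention used in the examples of Section 4.

```python
# A shared destiny risk matrix M is stored as a list of n rows, with
# M[k-1][i-1] holding M(k, i), the probability that person i dies in a
# group of exactly k individuals.

def risk_profiles(M):
    """Return (alpha, beta) as defined in equations (1) and (2)."""
    n = len(M)
    # alpha(i): ex ante probability that person i dies, summed over group sizes.
    alpha = [sum(M[k][i] for k in range(n)) for i in range(n)]
    # beta(k): ex ante probability that there are exactly k fatalities.
    beta = [sum(M[k]) / (k + 1) for k in range(n)]
    return alpha, beta

# A small example with n = 3 (probabilities in percent).
M = [[10, 0, 20],   # row k = 1
     [5, 5, 0],     # row k = 2
     [0, 0, 0]]     # row k = 3
print(risk_profiles(M))   # ([15, 5, 20], [30.0, 5.0, 0.0])
```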
By definition, each of the entries of M is a probability and so must lie in the interval [0,1]. Hence, each of the components of α and β must be nonnegative as they are sums of entries in M. There are three other restrictions on M. They are

\[
\alpha(i) \le 1, \quad i \in N, \tag{3}
\]

\[
\sum_{k=1}^{n} \beta(k) \le 1, \tag{4}
\]

and

\[
M(k,i) \le \beta(k), \quad (k,i) \in N^2. \tag{5}
\]

The first of these requirements is that no person can die with a probability greater than 1. The second is that the probability that there are a positive number of fatalities cannot exceed 1. The third is that nobody's probability of dying in a group of size k can exceed the probability of there being k fatalities. Of course, it must also be the case that

\[
\beta(k) \le 1, \quad k \in N. \tag{6}
\]

That is, the probability that there are a particular number of fatalities cannot exceed 1. However, (6) follows from (4) because all probabilities are nonnegative. A shared destiny risk matrix M is admissible if it satisfies (3), (4), and (5).
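Checking the admissibility conditions (3)–(5) is mechanical. The sketch below is a direct translation into Python; the function name and the small tolerance used to guard against floating-point rounding are our own additions, and the matrix is assumed to be stored in the same list-of-rows layout as in the sketch above, with entries expressed as probabilities in [0,1].

```python
def is_admissible(M, tol=1e-12):
    """Check conditions (3), (4), and (5) for a shared destiny risk matrix M.

    M[k-1][i-1] holds M(k, i), expressed as a probability in [0, 1].
    """
    n = len(M)
    # Every entry must be a probability.
    if any(not (-tol <= M[k][i] <= 1 + tol) for k in range(n) for i in range(n)):
        return False
    alpha = [sum(M[k][i] for k in range(n)) for i in range(n)]
    beta = [sum(M[k]) / (k + 1) for k in range(n)]
    if any(a > 1 + tol for a in alpha):
        return False                  # condition (3): alpha(i) <= 1
    if sum(beta) > 1 + tol:
        return False                  # condition (4): sum over k of beta(k) <= 1
    if any(M[k][i] > beta[k] + tol for k in range(n) for i in range(n)):
        return False                  # condition (5): M(k, i) <= beta(k)
    return True
```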
It is obvious that M must satisfy (3) and (4), but the necessity of (5) is less so because there is more than one way that someone can die with k − 1 other individuals when k > 1. To see why (5) is required, suppose, on the contrary, that M(k,i) > β(k) for some i ∈ N. Then, because M(k,i) is the probability that person i perishes with k − 1 other individuals, each of whom also dies in a group of size k, it must be the case that ∑_{j≠i} M(k,j) ≥ (k−1)M(k,i) > (k−1)β(k). Hence, ∑_{i=1}^{n} M(k,i) > kβ(k). It then follows that β(k) = (1/k)∑_{i=1}^{n} M(k,i) > β(k), a contradiction.
For each k ∈ N, T(k) = {S ∈ 2^N : |S| = k} is the set of subgroups of the society in which exactly k individuals die. For each (k,i) ∈ N², S(k,i) = {S ∈ T(k) : i ∈ S} is the set of subgroups in which exactly k people die and i is one of them. A shared destiny risk matrix M is feasible if there exists a social risk distribution p ∈ P and an n × n matrix M_p such that M(k,i) = M_p(k,i) for all (k,i) ∈ N², where M_p(k,i) = ∑_{S∈S(k,i)} p(S). That is, M_p(k,i) is the probability that there are k deaths and i is one of them when the social risk distribution is p.
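To make the definition of feasibility concrete, the next sketch computes M_p from a social risk distribution represented as a mapping from fatality sets to probabilities. The dictionary representation and the function name are our own illustrative choices.

```python
def shared_destiny_matrix(p, n):
    """Compute M_p(k, i) = sum of p(S) over the sets S with |S| = k and i in S.

    p maps frozenset fatality sets (with persons labelled 1, ..., n) to
    probabilities; sets that are not listed are taken to have probability 0.
    Returns M_p as a list of n rows.
    """
    Mp = [[0.0] * n for _ in range(n)]
    for S, prob in p.items():
        k = len(S)
        for i in S:                       # the empty fatality set contributes nothing
            Mp[k - 1][i - 1] += prob      # person i dies in a group of size k
    return Mp

# n = 3: persons 1 and 2 die together with probability 0.1, person 3 dies
# alone with probability 0.2, and nobody dies with probability 0.7.
p = {frozenset({1, 2}): 0.1, frozenset({3}): 0.2, frozenset(): 0.7}
print(shared_destiny_matrix(p, 3))
# [[0.0, 0.0, 0.2], [0.1, 0.1, 0.0], [0.0, 0.0, 0.0]]
```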
3 The Decomposition Algorithm
By construction, for any p ∈ P, M_p is an admissible shared destiny risk matrix. In other words, any feasible shared destiny risk matrix is admissible. The question then arises as to whether feasibility imposes any restrictions on M other than that it be admissible. We show that it does not.

For any admissible shared destiny risk matrix M, we need to show that there exists a social risk distribution p ∈ P such that M_p = M. This is done by considering each value of k separately. For each k ∈ N, we know that the probability of having this number of fatalities is β(k). We need to distribute this probability among the subgroups in T(k) (the subgroups for which there are k fatalities) in such a way that the probability that person i dies in a group of size k is M(k,i). The resulting probabilities for the subgroups in T(k) are called a probability decomposition. Put another way, for each i ∈ N, we need to distribute the probability M(k,i) among the subgroups in S(k,i) (the subgroups containing person i for which there are k fatalities) in such a way that the amount π_S allocated to any S ∈ S(k,i) is the same for everybody in this group. The value π_S is then the probability that the set of individuals who perish is S.
If M(k,i) = 0 for all i ∈ N (so n̄(k) = 0), then β(k) = 0, so we assign probability 0 to each S ∈ T(k). If k = n, only N is in T(n), so no decomposition is needed; N is simply assigned the probability β(n). When β(k) ≠ 0 and k < n, we construct an algorithm that produces the requisite probability decomposition. The algorithm proceeds through a number of steps, which we denote by t = 0, 1, 2, .... We show that the algorithm terminates in no more than n̄(k) steps. The relevant variables in each step are distinguished using a superscript whose value is the step number.
The vector M̂(k) is a nonincreasing rearrangement of M(k) if M̂(k,i) ≥ M̂(k,i+1) for all i = 1, ..., n−1. Whenever a vector of probabilities for the n individuals is rearranged in this way, ties are broken in such a way that the original order of the individuals is preserved. For example, if n = 3, in the rearrangement (2,1,1) of (1,2,1), the first 1 is associated with person 1 and the second 1 with person 3. Without loss of generality, we suppose that M(k) is initially ranked in nonincreasing order. We now describe our algorithm.
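The order-preserving tie-breaking rule can be implemented with a stable sort. The following small helper (our own, not notation from the paper) returns the individuals in nonincreasing order of their probabilities, breaking ties by original position.

```python
def rank_order(row):
    """Indices (0-based) of individuals in nonincreasing order of probability.

    Python's sort is stable, so individuals with equal probabilities keep
    their original order, which is exactly the tie-breaking rule used here.
    """
    return sorted(range(len(row)), key=lambda i: -row[i])

# The rearrangement (2, 1, 1) of (1, 2, 1): person 2 comes first, then the
# first 1 belongs to person 1 and the second 1 to person 3.
print(rank_order([1, 2, 1]))   # [1, 0, 2]
```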
Probability Decomposition Algorithm. The initial values of the relevant variables are

\[
M^0(k) = \hat{M}^0(k) = M(k) = \hat{M}(k)
\]

and

\[
\beta^0(k) = \beta(k).
\]
Step 1. In Step 1, we assign a probability π^1 to the first k individuals, which is the set of individuals with the k highest probabilities in M̂^0(k). After π^1 is subtracted from each of the first k components of M̂^0(k), we are left with the fatality probability

\[
\beta^1(k) = \beta^0(k) - \pi^1
\]

to distribute among the groups of size k using the probabilities in

\[
M^1(k) = \hat{M}^0(k) - (\pi^1, \ldots, \pi^1, 0, \ldots, 0).
\]

Letting ρ^0(k,i) denote the rank of individual i in M̂^0(k), we define the vector

\[
\pi^1(k) = \pi^1 \cdot (I^0_1, I^0_2, \ldots, I^0_n),
\]

where I^0_i = 1 if ρ^0(k,i) ≤ k and I^0_i = 0 otherwise. Using π^1(k), M^1(k) can be equivalently written as

\[
M^1(k) = M^0(k) - \pi^1(k).
\]

We need to ensure that each of the probabilities in M^1(k) is nonnegative. Because M̂^0(k) is a nonincreasing rearrangement of M^0(k) and M^1(k,i) = M̂^0(k,i) for i = k+1, ..., n, it must therefore be the case that π^1 ≤ M̂^0(k,k). We also need to ensure that none of these probabilities exceeds the fatality probability β^1(k) left to distribute. This condition is satisfied by construction for the first k individuals. Hence, because M̂^0(k) is a nonincreasing rearrangement of M^0(k), in order to satisfy this condition, it is only necessary that M̂^0(k,k+1) ≤ β^1(k) = β^0(k) − π^1. Both of these requirements are satisfied by setting

\[
\pi^1 = \min\{\hat{M}^0(k,k),\ \beta^0(k) - \hat{M}^0(k,k+1)\}.
\]

By (5), M̂^0(k,k) ≤ β^0(k). Therefore, π^1 ≤ β^0(k) and, hence, β^1(k) ≤ β^0(k). Because β^1(k) = (1/k)∑_{i=1}^{n} M^1(k,i) and M^1(k,i) ≥ 0 for all i ∈ N, β^1(k) ≥ 0.

Let S^1 denote the first k individuals in M̂^0(k). We choose p(S^1) to be π^1.

If M^1(k) = (0, 0, 0, ..., 0), the algorithm terminates. Otherwise, it proceeds to the next step.
Step t (t ≥ 2). The operation of the algorithm in this step follows the same basic logic as in Step 1. The value of π^t is chosen by setting

\[
\pi^t = \min\{\hat{M}^{t-1}(k,k),\ \beta^{t-1}(k) - \hat{M}^{t-1}(k,k+1)\}. \tag{7}
\]

Letting ρ^{t−1}(k,i) denote the rank of individual i in M̂^{t−1}(k), we define the vector

\[
\pi^t(k) = \pi^t \cdot (I^{t-1}_1, I^{t-1}_2, \ldots, I^{t-1}_n), \tag{8}
\]

where I^{t−1}_i = 1 if ρ^{t−1}(k,i) ≤ k and I^{t−1}_i = 0 otherwise.

We define M^t(k) and β^t(k) by setting

\[
M^t(k) = M^{t-1}(k) - \pi^t(k) \tag{9}
\]

and

\[
\beta^t(k) = \beta^{t-1}(k) - \pi^t. \tag{10}
\]

Analogous reasoning to that used in Step 1 shows that 0 ≤ β^t(k) ≤ β^{t−1}(k).

Let S^t denote the first k individuals in M̂^{t−1}(k). We choose p(S^t) to be π^t.

If M^t(k) = (0, 0, 0, ..., 0), the algorithm terminates. Otherwise, it proceeds to the next step.

If the algorithm terminates and a group S with k members has not been assigned a probability by the algorithm, we set p(S) = 0.
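The following Python sketch implements the probability decomposition algorithm for a single row M(k) with 1 ≤ k < n, using the stable-sort tie-break described above. The function name, the data layout, and the tolerance used to stop on floating-point input are our own choices; for exact admissible data the loop terminates exactly as the lemmas in Section 5 establish.

```python
def decompose_row(row, k, tol=1e-12):
    """Probability decomposition of row M(k) of an admissible matrix.

    row[i] is M(k, i+1) for persons 1, ..., n, and k is the group size
    (1 <= k < n).  Returns a dict mapping frozensets of persons (labelled
    1, ..., n) to the probabilities p(S) assigned by the algorithm; groups
    that are never selected implicitly receive probability 0.
    """
    n = len(row)
    remaining = list(row)                 # current M^t(k), updated in place
    beta = sum(row) / k                   # beta^0(k) = beta(k)
    p = {}
    while beta > tol:                     # stop once M^t(k) = (0, ..., 0)
        # Nonincreasing rearrangement with the order-preserving tie-break:
        # a stable sort keeps tied individuals in their original order.
        order = sorted(range(n), key=lambda i: -remaining[i])
        top_k = order[:k]                 # the group S^t selected in this step
        m_kk = remaining[order[k - 1]]    # entry in position k of the rearrangement
        m_kk1 = remaining[order[k]]       # entry in position k + 1 (exists since k < n)
        pi = min(m_kk, beta - m_kk1)      # equation (7)
        if pi <= 0:                       # cannot occur for exact admissible input
            break
        p[frozenset(i + 1 for i in top_k)] = pi   # by Lemma 4, no group repeats
        for i in top_k:
            remaining[i] -= pi            # equation (9)
        beta -= pi                        # equation (10)
    return p

# Example 3 in the next section: row M(2) = (7, 5, 4) with k = 2 (probabilities in percent).
print(decompose_row([7, 5, 4], 2))
# {frozenset({1, 2}): 4.0, frozenset({1, 3}): 3.0, frozenset({2, 3}): 1.0}
```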
4 Examples of the Probability Decomposition Algorithm
The operation of the probability decomposition algorithm is illustrated with three
examples. In each of these examples, it is assumed that M is admissible. In the first
example, the algorithm is applied to the case in which nobody dies with anybody
else.
Example 1. Let k = 1 with M(1,i) > 0 for some i ∈ N. If M is feasible, for each i ∈ N, we must have p({i}) = M(1,i). We show that the algorithm produces this result. We first consider the case in which M(1,i) > 0 for all i ∈ N.

In Step 1, person 1 is the highest ranked individual in M̂^0(1). Therefore, we have β^0(1) − M̂^0(1,2) = ∑_{i=1}^{n} M̂^0(1,i) − M̂^0(1,2) = ∑_{i≠2} M̂^0(1,i) ≥ M̂^0(1,1). It then follows that π^1 = M̂^0(1,1) and, hence, p({1}) = M̂^0(1,1) = M(1,1). We have π^1(1) = (M(1,1), 0, ..., 0) and so M^0(1) is now replaced with M^1(1) = (0, M^0(1,2), ..., M^0(1,n)). Because M^1(1,1) = 0, person 1 is never considered again by the algorithm. There is fatality probability β^1(1) = β^0(1) − π^1 = ∑_{i=1}^{n} M(1,i) − M(1,1) = ∑_{i=2}^{n} M(1,i) left to allocate.

Step 2 uses the vector M̂^1(1). Person 2 is the second highest ranked individual in M^0(1) and so is first ranked in M̂^1(1). As in Step 1, π^2 = M̂^1(1,1) and, hence, p({2}) = M̂^1(1,1) = M(1,2). We have π^2(1) = (0, M(1,2), 0, ..., 0) and so M^1(1) is replaced with M^2(1) = (0, 0, M^0(1,3), ..., M^0(1,n)) and person 2 is never considered again. There is fatality probability β^2(1) = β^1(1) − π^2 = ∑_{i=2}^{n} M(1,i) − M(1,2) = ∑_{i=3}^{n} M(1,i) left to allocate.

More generally, person i ∈ N is singled out in Step i and assigned the probability p({i}) = M(1,i). In Step n, M^n(1) = (0, 0, 0, ..., 0), and so the algorithm terminates. If M(1,i) = 0 for some i ∈ N, then the algorithm proceeds as above but terminates in Step n̄(1) < n, where it is recalled that n̄(1) is the number of individuals for whom M(1,i) is positive. For any individual i for whom M(1,i) = 0, when the algorithm terminates, p({i}) is set equal to 0.
In the next two examples, individuals do not die alone. In these examples, all
probabilities are expressed in terms of percentages, so, for example, 5 is the probability 0.05.
Example 2. Let n = 7 and k = 4. We suppose that M^0(4) = M(4) = M̂(4) = M̂^0(4) = (5,4,4,4,4,2,1). Consequently, β^0(4) = β(4) = (1/4)∑_{i=1}^{n} M(4,i) = 6.

Step 1. We have π^1 = min{M̂^0(4,4), β^0(4) − M̂^0(4,5)} = 2. Therefore, π^1(4) = (2,2,2,2,0,0,0), M^1(4) = M^0(4) − π^1(4) = (3,2,2,2,4,2,1), and β^1(4) = β^0(4) − π^1 = 6 − 2 = 4. Hence, p({1,2,3,4}) = π^1 = 2.

Step 2. There are four individuals with the third highest probability in M^1(4). Using our tie-breaking rule, individuals 1, 2, 3, and 5 have the first four probabilities in M̂^1(4). We thus have M̂^1(4) = (4,3,2,2,2,2,1), so π^2 = min{M̂^1(4,4), β^1(4) − M̂^1(4,5)} = 2. Therefore, π^2(4) = (2,2,2,0,2,0,0), M^2(4) = M^1(4) − π^2(4) = (1,0,0,2,2,2,1), and β^2(4) = β^1(4) − π^2 = 4 − 2 = 2. Hence, p({1,2,3,5}) = π^2 = 2.

Step 3. There are two individuals with the fourth highest probability in M^2(4). The tie is broken in favour of person 1, so the first four individuals in M̂^2(4) are 1, 4, 5, and 6. We have M̂^2(4) = (2,2,2,1,1,0,0), so π^3 = min{M̂^2(4,4), β^2(4) − M̂^2(4,5)} = 1. Therefore, π^3(4) = (1,0,0,1,1,1,0), M^3(4) = M^2(4) − π^3(4) = (0,0,0,1,1,1,1), and β^3(4) = β^2(4) − π^3 = 2 − 1 = 1. Hence, p({1,4,5,6}) = π^3 = 1.

Step 4. The four individuals with the highest probabilities in M^3(4) are 4, 5, 6, and 7. We have M̂^3(4) = (1,1,1,1,0,0,0), so π^4 = min{M̂^3(4,4), β^3(4) − M̂^3(4,5)} = 1. Therefore, π^4(4) = (0,0,0,1,1,1,1), M^4(4) = M^3(4) − π^4(4) = (0,0,0,0,0,0,0), and β^4(4) = β^3(4) − π^4 = 1 − 1 = 0. Hence, p({4,5,6,7}) = π^4 = 1.

Because M^4(4) = (0,0,0,0,0,0,0), the algorithm terminates in Step 4. The four groups identified in Steps 1–4 are assigned positive probability. For any other set of individuals S with four members, p(S) = 0. There are n!/(k!(n−k)!) = 7!/(4!3!) = 35 possible groups of this size, so 31 of them are assigned a zero probability.
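The arithmetic in Example 2 can be checked mechanically. The short standalone sketch below recomputes the row M(4) and the fatality probability β(4) from the four groups and probabilities that the algorithm produces (probabilities in percent, as in the example).

```python
# The decomposition obtained in Example 2: p(S) for the four selected groups.
p = {
    frozenset({1, 2, 3, 4}): 2,
    frozenset({1, 2, 3, 5}): 2,
    frozenset({1, 4, 5, 6}): 1,
    frozenset({4, 5, 6, 7}): 1,
}

# M(4, i) should equal the total probability of the size-4 groups containing i.
recovered_row = [sum(prob for S, prob in p.items() if i in S) for i in range(1, 8)]
print(recovered_row)      # [5, 4, 4, 4, 4, 2, 1], which is M(4) in Example 2
print(sum(p.values()))    # 6, which is beta(4)
```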
In Example 1, π^t is set equal to M̂^{t−1}(k,k) in every step. In Examples 2 and 3, it is sometimes instead set equal to β^{t−1}(k) − M̂^{t−1}(k,k+1).
Example 3. Let n = 3 and k = 2. We suppose that M^0(2) = M(2) = M̂(2) = M̂^0(2) = (7,5,4). Consequently, β^0(2) = β(2) = (1/2)∑_{i=1}^{n} M(2,i) = 8.

Step 1. We have π^1 = min{M̂^0(2,2), β^0(2) − M̂^0(2,3)} = 4. Therefore, π^1(2) = (4,4,0), M^1(2) = M^0(2) − π^1(2) = (3,1,4), and β^1(2) = β^0(2) − π^1 = 8 − 4 = 4. Hence, p({1,2}) = π^1 = 4.

Step 2. The two individuals with the highest probabilities in M^1(2) are 1 and 3. We have M̂^1(2) = (4,3,1), so π^2 = min{M̂^1(2,2), β^1(2) − M̂^1(2,3)} = 3. Therefore, π^2(2) = (3,0,3), M^2(2) = M^1(2) − π^2(2) = (0,1,1), and β^2(2) = β^1(2) − π^2 = 4 − 3 = 1. Hence, p({1,3}) = π^2 = 3.

Step 3. There are only two individuals (2 and 3) left with positive probabilities, and these probabilities are the same. Hence, this group of individuals must be assigned the unallocated fatality probability, so p({2,3}) = 1. We confirm that the algorithm produces this result. We have M̂^2(2) = (1,1,0), so π^3 = min{M̂^2(2,2), β^2(2) − M̂^2(2,3)} = 1. Therefore, π^3(2) = (0,1,1), M^3(2) = M^2(2) − π^3(2) = (0,0,0), and β^3(2) = β^2(2) − π^3 = 1 − 1 = 0. Hence, p({2,3}) = 1, as was to be shown.

The algorithm terminates in Step 3. All subgroups with two members are assigned positive probability.
As these examples illustrate, each step of the algorithm identifies a subgroup with k members and determines the probability that it is this group that perishes. For each individual i in this group, this probability must be subtracted from whatever part of the probability M(k,i) remains unallocated at the end of the previous step. In all three of the examples, at the end of the penultimate step of the algorithm, there is a group of size k whose members all have the same probability left to distribute. In the next section, we show that this is a general feature of the algorithm. When this amount has been allocated as the probability of this group perishing together, we have M^t(k) = (0, ..., 0), and so the algorithm terminates because, for each i ∈ N, the probability M(k,i) that person i dies in a group of size k has been distributed among each of the groups of size k that include i.

The distribution of the probability in β(k) across the groups with k members need not be unique. This is the case in Example 2 because there is more than one way to rearrange the vector of fatality probabilities being considered in a nonincreasing way in some of the steps. For example, in Step 2 of this example, with the tie-breaking rule used in our algorithm, individuals 1, 2, 3, and 5 are regarded as having the four highest probabilities in M^1(4). However, we could have used a tie-breaking rule that selects individuals 1, 2, 5, and 6 instead, in which case p({1,2,5,6}) > 0, which is not the case with the tie-breaking rule used in the algorithm. Feasibility of a shared destiny risk matrix M only requires that there exists a social risk distribution p such that M_p = M, not that this distribution be unique.
5 The Equivalence of Admissibility and Feasibility
In order to show that an admissible shared destiny risk matrix is feasible, we first establish a number of lemmas that identify some important properties of the probability decomposition algorithm. In each of our lemmas, we suppose that k ≠ n and that the probability decomposition algorithm is being applied to the kth row M(k) of an admissible shared destiny risk matrix M for which the probability β(k) that there are k fatalities is positive.

Lemma 1 shows that in each step of this algorithm, analogues of (2) and the admissibility restriction in (5) hold.
Lemma 1. In any Step t of the algorithm,

\[
\beta^t(k) = \frac{1}{k} \sum_{i=1}^{n} M^t(k,i) \tag{11}
\]

and

\[
0 \le M^t(k,i) \le \beta^t(k), \quad i \in N. \tag{12}
\]

Proof. For any k ≠ n, at the end of Step t − 1 of the algorithm, from the probability β(k) that there will be exactly k fatalities, there is still β^{t−1}(k) left to allocate. In Step t, π^t is subtracted from the first k components of M̂^{t−1}(k) and 0 from the other n − k components. Hence, by (2), (9), and (10), at the end of Step t, the amount from β(k) left to allocate is (11).

Because π^t ≤ M̂^{t−1}(k,k), M^t(k,i) ≥ 0 for all i ∈ N. The argument used to show that M^t(k,i) ≤ β^t(k) for all i ∈ N is the same as the argument used in Section 2 to show that (5) holds, but with M^t(k,i) substituting for M(k,i) and β^t(k) substituting for β(k). □
In order for the probability decomposition algorithm to distribute all of the probability β(k) that there are k fatalities among the subgroups of size k, the algorithm must terminate in a finite number of steps. Lemma 2 shows that this is the case if the algorithm reaches a step in which there are k positive entries left to distribute.

Lemma 2. The algorithm terminates in Step t + 1 if there are k positive entries in M^t(k).

Proof. By Lemma 1, (11) and (12) hold. If M^t(k) contains k positive entries, (11) and (12) imply that they are all equal to β^t(k). Thus, the algorithm terminates in the next step because π^{t+1} = β^t(k) is subtracted from M̂^t(k,i) for each i = 1, ..., k, and so M^{t+1}(k) = (0, ..., 0). □
There are n̄(k) individuals who have a positive probability of dying in a group of size k. Lemma 3 shows that the probability decomposition algorithm terminates in a finite number of steps that does not exceed this value.

Lemma 3. The algorithm terminates in at most n̄(k) steps.

Proof. If k = n̄(k), then Lemma 2 applies with t = 0, so the algorithm terminates in Step 1.

Now, suppose that k < n̄(k). From (7), we know that in Step t of the algorithm, π^t is either M̂^{t−1}(k,k) or β^{t−1}(k) − M̂^{t−1}(k,k+1), whichever is smaller. We consider two cases distinguished by whether the first of these possibilities holds for all t or not.
Case 1. For each Step t of the algorithm, π^t = M̂^{t−1}(k,k). Then, by (7)–(9), M^t(k) has at least one more 0 entry than M^{t−1}(k). Thus, M^t(k) has at least n − n̄(k) + t entries equal to 0 and, hence, has at most k positive entries in Step n̄(k) − k. It follows from (11) and (12) (which hold by Lemma 1) that there is no Step t such that the number of positive entries in M^t(k) is positive but less than k. Therefore, because the algorithm subtracts a common positive amount of probability from k individuals in each step, for some t ≤ n̄(k) − k, M^t(k) has exactly k positive entries, which, by Lemma 2, implies that the algorithm terminates in at most n̄(k) − k + 1 steps. Because k < n̄(k), this upper bound is at most n̄(k).
Case 2. In some Step t of the algorithm, π^t ≠ M̂^{t−1}(k,k). Let t* be the first step for which this is the case. By (7), we then have that π^{t*} = β^{t*−1}(k) − M̂^{t*−1}(k,k+1). Let i* be the individual for whom ρ^{t*−1}(k,i*) = k+1. That is, i* is the individual for whom M^{t*−1}(k,i*) = M̂^{t*−1}(k,k+1). Because π^{t*} = β^{t*−1}(k) − M̂^{t*−1}(k,k+1), by (10), β^{t*}(k) = M̂^{t*−1}(k,k+1). Because M^{t*}(k,i*) = M^{t*−1}(k,i*), it follows that M^{t*}(k,i*) = β^{t*}(k).

By (11) and (12), there cannot be more than k entries in M^{t*}(k) which are at least as large as β^{t*}(k). Hence, i* must occupy one of the first k ranks in M̂^{t*}(k) and so i*'s probability is reduced by π^{t*+1} in Step t*+1. By (10), for all t, π^t = β^{t−1}(k) − β^t(k). Therefore, M^{t*+1}(k,i*) = β^{t*+1}(k). Iteratively applying the same reasoning in each of the subsequent non-terminal steps of the algorithm, we conclude that M^τ(k,i*) = β^τ(k) for any Step τ ≥ t* which is not a terminal step.

Because there cannot be more than k entries in M^τ(k) which are at least as large as β^τ(k), we now know that for each τ ≥ t*, i* has a rank not exceeding k in M̂^τ(k). Hence, in any Step t** for which t** > t*, the individual who occupies rank k+1 in M̂^{t**−1}(k) is someone, say i**, who is different from i*. Reasoning as above, if π^{t**} ≠ M̂^{t**−1}(k,k), then M^τ(k,i**) = β^τ(k) for any Step τ ≥ t** which is not a terminal step. Furthermore, both i* and i** have ranks not exceeding k in M̂^τ(k) for any such τ.

By an iterative application of the preceding argument, we conclude that there can be at most k steps in which π^t ≠ M̂^{t−1}(k,k). Because M^t(k) has at least one more 0 entry than M^{t−1}(k) in each Step t for which π^t = M̂^{t−1}(k,k), there are at most n̄(k) − k − 1 values of t for which (i) π^t = M̂^{t−1}(k,k) and (ii) there are at least k + 1 positive entries in M^t(k). Thus, the algorithm terminates in at most n̄(k) steps. □
In each step of the algorithm, a group of size k is identified and assigned a probability. Lemma 4 shows that no group is considered in more than one step of the algorithm and, therefore, no group is assigned more than one probability.

Lemma 4. No group of individuals with k members is assigned a probability in more than one step of the algorithm.

Proof. We need to show that for all Steps t and t′ of the algorithm for which t ≠ t′, (I^t_1, I^t_2, ..., I^t_n) ≠ (I^{t′}_1, I^{t′}_2, ..., I^{t′}_n). On the contrary, suppose that there exist t < t′ for which (I^t_1, I^t_2, ..., I^t_n) = (I^{t′}_1, I^{t′}_2, ..., I^{t′}_n). Let S be the set of individuals for whom the value of these indicator functions is 1. Because both π^t and π^{t′} are positive, by (7), we must have π^t ≥ π^t + π^{t′}, which is impossible. That is, both π^t and π^{t′} must be subtracted in the same step from the probabilities of the members of S that have yet to be allocated when this group is the one being considered. □
With these lemmas in hand, we can now prove our equivalence theorem.

Theorem. A shared destiny risk matrix M is admissible if and only if it is feasible.

Proof. Because a feasible shared destiny risk matrix is necessarily admissible, we only need to show the reverse implication. Suppose that M is an admissible shared destiny risk matrix. For each k ≠ n for which β(k) > 0, Lemmas 3 and 4 imply that the probability decomposition algorithm assigns a probability p(S) ∈ [0,1] to each group S ∈ T(k) (the set of groups with k members) in such a way that ∑_{S∈T(k)} p(S) = β(k). If k ≠ n and β(k) = 0, we let p(S) = 0 for all S ∈ T(k). Because T(n) = {N}, we set p(N) = β(n). Finally, we set p(∅) = 1 − ∑_{k=1}^{n} β(k).

The function p : 2^N → [0,1] is therefore a social risk distribution. By construction, the corresponding shared destiny risk matrix M_p is the same as M because M_p(k,i) = ∑_{S∈S(k,i)} p(S) = M(k,i) for all (k,i). Hence, M is feasible. □
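For small societies, feasibility can also be checked directly from its definition by treating the requirement M_p = M as a linear feasibility problem over all 2^n fatality sets. The sketch below does this with scipy's linear programming routine; it is our own brute-force illustration, not part of the authors' construction, and it makes plain why the decomposition algorithm, which never enumerates all subsets, is the more practical route for larger n.

```python
from itertools import combinations

from scipy.optimize import linprog

def is_feasible_lp(M):
    """Brute-force feasibility check for small n.

    Searches for a social risk distribution p over all 2^n fatality sets such
    that, for every (k, i), the probabilities p(S) of the sets S with |S| = k
    and i in S sum to M(k, i), with p nonnegative and summing to 1.
    """
    n = len(M)
    subsets = [frozenset(c) for k in range(n + 1)
               for c in combinations(range(1, n + 1), k)]
    A_eq, b_eq = [], []
    for k in range(1, n + 1):             # one equality per entry M(k, i)
        for i in range(1, n + 1):
            A_eq.append([1.0 if (len(S) == k and i in S) else 0.0 for S in subsets])
            b_eq.append(M[k - 1][i - 1])
    A_eq.append([1.0] * len(subsets))     # the probabilities p(S) must sum to 1
    b_eq.append(1.0)
    res = linprog(c=[0.0] * len(subsets), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * len(subsets), method="highs")
    return res.success

# Example 3 embedded in a 3 x 3 matrix, with percentages converted to probabilities.
M = [[0.0, 0.0, 0.0],
     [0.07, 0.05, 0.04],
     [0.0, 0.0, 0.0]]
print(is_feasible_lp(M))   # True, as the equivalence theorem guarantees
```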
References
Bommier A, Zuber S (2008) Can preferences for catastrophe avoidance reconcile social discounting with intergenerational equity? Social Choice and Welfare 31:415–434
Broome J (1982) Equity in risk bearing. Operations Research 30:412–414
Chew SH, Sagi J (2012) An inequality measure for stochastic allocations. Journal of Economic
Theory 147:1517–1544
Fishburn PC (1984) Equity axioms for public risks. Operations Research 32:901–908
Fishburn PC, Sarin RK (1991) Dispersive equity and social risk. Management Science 37:751–769
Fishburn PC, Straffin PD (1989) Equity considerations in public risks evaluation. Operations Research 37:229–239
Gajdos T, Weymark JA, Zoli C (2010) Shared destinies and the measurement of social risk equity.
Annals of Operations Research 176:409–424
Harvey CM (1985) Preference functions for catastrophe and risk inequity. Large Scale Systems
8:131–146
Keeney RL (1980a) Equity and public risk. Operations Research 28:527–534
Keeney RL (1980b) Evaluating alternatives involving potential fatalities. Operations Research
28:188–205
Keeney RL (1980c) Utility functions for equity and public risk. Management Science 26:345–353
Keeney RL, Winkler RL (1985) Evaluating decision strategies for equity of public risks. Operations
Research 33:955–970
Sarin RK (1985) Measuring equity in public risk. Operations Research 33:210–217