## No full-text available

To read the full-text of this research, you can request a copy directly from the authors.

Article

We introduce some new voting rules based on a spatial version of the median known as the mediancentre, or Fermat-Weber point. Voting rules based on the mean include many that are familiar: the Borda Count, the Kemeny rule, approval voting, etc. (see Zwicker 2008a, b). These mean rules can be implemented by "voting machines" (interactive simulations of physical mechanisms) that use ideal rubber bands to achieve an equilibrium among the competing preferences of the voters. One consequence is that in any such rule, a voter who is further from consensus exerts a stronger tug on the election outcome, because her rubber band is more stretched. While the one-dimensional median has been studied in the context of voting, mediancentre-based rules are new. Voting machines for these rules require that the tug exerted by a voter be independent of his distance from consensus; replacing rubber bands with weights suspended from strings provides exactly this effect. We discuss some novel properties exhibited by these rules, as well as a broader question suggested by our investigations: what are the critical relationships among resistance to manipulation, decisiveness, and responsiveness for a voting rule? We argue that a distorted view may arise from an exclusive focus on the first, without due attention to the other two.
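The rubber-band/string contrast is already visible in one dimension, where the mean minimizes the sum of squared distances and the median (the one-dimensional mediancentre) minimizes the sum of distances. A minimal sketch with an invented five-voter example:

```python
# one-dimensional voters, one extreme voter at 100
positions = [0, 1, 2, 3, 100]

mean = sum(positions) / len(positions)           # minimizes sum of squared distances
median = sorted(positions)[len(positions) // 2]  # minimizes sum of distances
print(mean, median)   # 21.2 versus 2: the outlier drags only the mean

# rubber-band force is proportional to distance and balances at the mean ...
rubber_pull = [p - mean for p in positions]
# ... string tension is +/-1 regardless of distance and balances at the median
unit_pull = [(p > median) - (p < median) for p in positions]
print(abs(sum(rubber_pull)) < 1e-9, sum(unit_pull) == 0)   # True True
```

The same contrast drives the higher-dimensional case: the equilibrium condition for the mediancentre sums unit vectors rather than displacement vectors.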


... Barthélemy and Monjardet [2] pioneered the application to voting theory of the two-step method we use for the characterization via ellipsoids: first convert Hamming distance to squared Euclidean distance in the hypercube, and then apply Huygens' theorem on the mean. It has also been exploited in [30], [31], and [4]. This approach has the potential to transform any result that entails minimizing a sum of Hamming distances, and deserves to be better known. ...
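The two-step method is easy to verify numerically: on 0/1 vectors the Hamming distance coincides with the squared Euclidean distance, and Huygens' theorem says the total squared distance to fixed points differs from n·d²(x, mean) only by a constant, so minimizing total Hamming distance over a candidate set amounts to approaching the mean. A small sketch with arbitrary invented vectors:

```python
def hamming(u, v):
    # number of coordinates where the 0/1 vectors differ
    return sum(a != b for a, b in zip(u, v))

def sq_euclid(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

# Step 1: on the hypercube, Hamming distance = squared Euclidean distance
u, v = (0, 1, 1, 0), (1, 1, 0, 0)
assert hamming(u, v) == sq_euclid(u, v) == 2

# Step 2 (Huygens): sum_i d^2(x, v_i) = n * d^2(x, mean) + constant, so the
# minimizer of total Hamming distance is the candidate nearest the mean
pts = [(0, 0, 1), (0, 1, 1), (1, 1, 1)]
mean = tuple(sum(p[i] for p in pts) / len(pts) for i in range(3))
for x in [(0, 0, 0), (1, 1, 0), (0, 1, 1)]:
    total = sum(sq_euclid(x, p) for p in pts)
    # same constant for every x, up to float rounding (about 1.3333 here)
    print(x, total - len(pts) * sq_euclid(x, mean))
```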

... However, a precise definition seems necessary if we wish to address two related issues: QUESTION 1 "Is there always some choice of voting weights that perfectly reflects influence?" QUESTION 2 "How can we choose voting weights for legislators in a representative assembly so that they appropriately reflect population differences among the districts represented?" So, how should we measure the influence of a voter in a simple game? In [1], Banzhaf argues that we should count instances in which a voter is critical or decisive, swinging the outcome of the collective decision. ...

... One alternative is to measure influence with an interval of numbers; see [28]. By "appropriately reflect population differences" we do not necessarily mean directly in proportion to population. If we view such a representative assembly as a two-tier voting rule in which individual citizens, when they elect their representatives, are in effect voting on the legislation itself, then there is an argument (which goes back to the original articles of Penrose and Banzhaf) that equalizing the influence of these citizens requires the voting powers of representatives to be in proportion to the square roots of their district populations. ...

Suppose legislators represent districts of varying population, and their assembly’s voting rule is intended to implement the principle of one person, one vote. How should legislators’ voting weights appropriately reflect these population differences? An analysis requires an understanding of the relationship between voting weight and some measure of the influence that each legislator has over collective decisions. We provide three new characterizations of weighted voting that embody this relationship. Each is based on the intuition that winning coalitions should be close to one another. The locally minimal and tightly packed characterizations use a weighted Hamming metric. Ellipsoidal separability employs the Euclidean metric: a separating hyper-ellipsoid contains all winning coalitions, and omits losing ones. The ellipsoid’s proportions, and the Hamming weights, reflect the ratio of voting weight to influence, measured as Penrose-Banzhaf voting power. In particular, the spherically separable rules are those for which voting powers can serve as voting weights.
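The swing-counting measure of influence mentioned above is straightforward to compute by brute force for small games. A sketch with an invented toy game (runtime is exponential in the number of voters):

```python
from itertools import combinations

def banzhaf(weights, quota):
    # raw Penrose-Banzhaf counts: a voter's swings are the coalitions of the
    # other voters that lose without her but win once she joins
    n = len(weights)
    swings = [0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for coal in combinations(others, r):
                s = sum(weights[j] for j in coal)
                if s < quota <= s + weights[i]:
                    swings[i] += 1
    total = sum(swings)
    return [s / total for s in swings]   # normalized Banzhaf index

# toy game: weights (2, 2, 1) with quota 4; the third voter has positive
# weight but is never critical, so her measured influence is zero
print(banzhaf([2, 2, 1], 4))   # [0.5, 0.5, 0.0]
```

The dummy voter illustrates why voting weight and voting power can diverge, which is the gap the ellipsoidal characterizations quantify.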

... Until now, the two approaches described above have not been explicitly connected. Specific distance-based rules have indeed been studied in the simplex or permutahedron, notably by Zwicker (2008a, b) and Cervone et al. (2012). However, a more general approach is lacking. ...

... When using profiles as input, the simplex geometry is hard enough to visualize that some authors have used a fixed projection to the permutahedron and essentially used S as a consensus. The cases p = 2 (mean proximity rules, Zwicker 2008b; Lahaie and Shah 2014) and p = 1 (mediancentre rules, Cervone et al. 2012) have received attention. These can be interpreted in our framework by changing the distance; detailed formulae might be interesting. ...

The concept of distance rationalizability of voting rules has been explored in recent years by several authors. Roughly speaking, we first choose a consensus set of elections (defined via preferences of voters over candidates) for which the result is specified a priori (intuitively, these are elections on which all voters can easily agree on the result). We also choose a measure of distance between elections. The result of an election outside the consensus set is defined to be the result of the closest consensual election under the distance measure. Most previous work has dealt with a definition in terms of preference profiles. However, most voting rules in common use are anonymous and homogeneous. In this case there is a much more succinct representation (using the voting simplex) of the inputs to the rule. This representation has been widely used in the voting literature, but rarely in the context of distance rationalizability. We show exactly how to connect distance rationalizability on profiles for anonymous and homogeneous rules to geometry in the simplex. We develop the connection for the important special case of votewise distances, recently introduced and studied by Elkind, Faliszewski and Slinko in several papers. This yields a direct interpretation in terms of well-developed mathematical concepts not seen before in the voting literature, namely Kantorovich (also called Wasserstein) distances and the geometry of Minkowski spaces. As an application of this approach, we prove some positive and some negative results about the decisiveness of distance rationalizable anonymous and homogeneous rules. The positive results connect with the recent theory of hyperplane rules, while the negative ones deal with distances that are not metrics, controversial notions of consensus, and the fact that the \(\ell ^1\)-norm is not strictly convex. 
We expect that the above novel geometric interpretation will aid the analysis of rules defined by votewise distances, and the discovery of new rules with desirable properties.
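For intuition, the simplest votewise case can be coded directly: with the unanimity consensus (everyone ranks the same candidate first) and the discrete votewise distance (the number of ballots that must change), the distance-rationalizable winner is the plurality winner, a correspondence noted by Elkind, Faliszewski and Slinko. A sketch with an invented profile:

```python
def dr_winner(profile, candidates):
    # distance rationalization: unanimity consensus + discrete votewise
    # distance; the nearest consensus profile ranking c first keeps every
    # ballot already topping c, so the distance is the count of other ballots
    best = None
    for c in candidates:
        dist = sum(1 for ballot in profile if ballot[0] != c)
        if best is None or dist < best[0]:
            best = (dist, c)
    return best[1]

profile = [('a', 'b', 'c'), ('a', 'c', 'b'), ('b', 'a', 'c'),
           ('c', 'b', 'a'), ('a', 'b', 'c')]
print(dr_winner(profile, 'abc'))   # 'a' tops 3 of the 5 ballots
```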

... Until now, these two approaches have not been systematically connected. Specific distance-based rules have indeed been studied in the simplex or permutahedron, notably by Zwicker and coauthors [24,23,3]. However, a more general approach is lacking. ...

... When using profiles as input, the simplex geometry is hard enough to visualize that some authors have used a fixed projection to the permutahedron and essentially used S as a consensus. The cases p = 2 (mean proximity rules [24,9]) and p = 1 (mediancentre rules [3]) have received attention. These can be interpreted in our framework by changing the distance; detailed formulae might be interesting. ...

The concept of distance rationalizability of voting rules has been explored in recent years by several authors. Most previous work has dealt with a definition in terms of preference profiles. However, most voting rules in common use are anonymous and homogeneous. In this case there is a much more succinct representation (using the voting simplex) of the inputs to the rule. This representation has been widely used in the voting literature, but rarely in the context of distance rationalizability. Recently, the present authors showed, as a special case of general results on quotient spaces, exactly how to connect distance rationalizability on profiles for anonymous and homogeneous rules to geometry in the simplex. In this article we develop the connection for the important special case of votewise distances, recently introduced and studied by Elkind, Faliszewski and Slinko in several papers. This yields a direct interpretation in terms of well-developed mathematical topics not seen before in the voting literature, namely Kantorovich (also called Wasserstein) distances and the geometry of Minkowski spaces. As an application of this approach, we prove some positive and some negative results about the decisiveness of distance rationalizable anonymous and homogeneous rules. The positive results connect with the recent theory of hyperplane rules, while the negative ones deal with distances that are not metrics, controversial notions of consensus, and the fact that the $\ell^1$-norm is not strictly convex. We expect that the above novel geometric interpretation will aid the analysis of rules defined by votewise distances, and the discovery of new rules with desirable properties.

... We denote by π_i the number of individuals with preference p_i, and by π the corresponding distribution. See Cervone et al. (2012) for a study on preference networks. A common feature of the networks (g, π) and (g′, π′) is a type of "structural regularity." ...

We introduce a model of polarization in networks as a unifying framework for the measurement of polarization that covers a wide range of applications. We consider a sufficiently general setup for this purpose: node- and edge-weighted, undirected, and connected networks. We generalize the axiomatic characterization of Esteban and Ray (1994) and show that only a particular instance within this class can be used justifiably to measure polarization of networks.
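The Esteban and Ray (1994) index that the paper generalizes has the form K Σ_i Σ_j π_i^{1+α} π_j |y_i - y_j|. A minimal sketch of that baseline (non-network) measure, with invented distributions; the network generalization of the paper is not reproduced here:

```python
def esteban_ray(pi, y, alpha=1.0, K=1.0):
    # baseline Esteban-Ray polarization: K * sum_ij pi_i^(1+alpha) pi_j |y_i - y_j|
    m = len(pi)
    return K * sum(pi[i] ** (1 + alpha) * pi[j] * abs(y[i] - y[j])
                   for i in range(m) for j in range(m))

bipolar = esteban_ray([0.5, 0.5], [0, 1])              # two opposed camps
dispersed = esteban_ray([0.25] * 4, [0, 1/3, 2/3, 1])  # same spread, mass spread out
print(bipolar, dispersed)   # the bipolar distribution is more polarized
```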

... Baranchuk and Dybvig (2009) assumed Euclidean preferences and used the term 'consensus,' and they applied the concept to analyze decision making by a board of directors. Cervone et al. (2012) used the terminology of 'mediancentre' and 'Fermat-Weber point,' and they discussed computational issues and cited earlier work on the topic. Brady and Chambers (2015) used the term 'geometric median,' and assuming Euclidean preferences and a variable population, they showed that when individual preferences are Euclidean, the geometric median is the smallest rule that is Maskin monotonic and satisfies a number of background axioms; and Brady and Chambers (2016) assumed three individuals with Euclidean preferences, and they showed that the geometric median is the unique rule satisfying Maskin monotonicity, anonymity, and neutrality. ...

We propose the solution concept of directional equilibrium for the multidimensional model of voting with general spatial preferences. This concept isolates alternatives that are stable with respect to forces applied by all voters in the directions of their gradients, and it extends a known concept from statistics for Euclidean preferences. We establish connections to the majority core and Pareto optimality, prove existence and closed graph of the equilibrium correspondence, and provide non-cooperative foundations in terms of a local contest game played by voters.

... Other works using the geometric median in economics or political science research include Cervone et al. (2012), Baranchuk and Dybvig (2009), and Chung and Duggan (2014). In particular, the latter work describes an interesting generalization of the concept to general convex preferences. ...

In a spatial model with Euclidean preferences, we establish that the geometric median satisfies Maskin monotonicity, anonymity, and neutrality. For three agents, it is the unique such rule.

... See Chung and Duggan (2014) for a more general concept in the spatial model of voting. Cervone et al. (2012) investigate the geometric median in a preference aggregation framework. Finally, Baranchuk and Dybvig (2009) provide an application to corporate board consensus. ...

... However, using w k-INTERVAL vectors we can downplay these extreme scores and move more towards the median view of all the voters. Similar results were shown by Cervone et al. (2012) in their work on voting rules that use the mediancenter to aggregate preferences. ...

Positional scoring rules in voting compute the score of an alternative by summing the scores for the alternative induced by every vote. This summation principle ensures that all votes contribute equally to the score of an alternative. We relax this assumption and, instead, aggregate scores by taking into account the rank of a score in the ordered list of scores obtained from the votes. This defines a new family of voting rules, rank-dependent scoring rules (RDSRs), based on ordered weighted average (OWA) operators, which include all scoring rules and many others, most of which are new. We study some properties of these rules and show, empirically, that certain RDSRs are less manipulable than Borda voting, across a variety of statistical cultures.
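The construction can be sketched in a few lines: induce a score from each vote as usual, but aggregate an alternative's sorted per-vote scores with an OWA weight vector. Uniform weights recover the plain scoring rule, while a median-style vector downplays extreme scores. The five-ballot profile is an invented illustration:

```python
def rdsr_winner(profile, candidates, scores, owa):
    # rank-dependent scoring rule: OWA-aggregate the sorted per-vote scores
    totals = {}
    for c in candidates:
        per_vote = sorted((scores[ballot.index(c)] for ballot in profile),
                          reverse=True)
        totals[c] = sum(w * s for w, s in zip(owa, per_vote))
    return max(candidates, key=lambda c: totals[c]), totals

# three voters rank a first, two rank it last
profile = [('a', 'b', 'c')] * 3 + [('b', 'c', 'a')] * 2
borda = [2, 1, 0]

print(rdsr_winner(profile, 'abc', borda, [1] * 5))          # uniform OWA = Borda: b wins
print(rdsr_winner(profile, 'abc', borda, [0, 0, 1, 0, 0]))  # median OWA: a wins
```

The flip mirrors the mean-versus-median contrast of Cervone et al. (2012): the median-style weights follow the three-voter majority, while plain summation rewards b's uniformly middling scores.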



This study improves the Weber model to obtain a better-optimised solution with which the quality of voting rules can be measured effectively. By introducing the voter’s supporting domain and the candidate’s supported degree to capture the emotional factors of voters and the structure of real voting networks, we propose the concept of validity for a candidate. Building on the traditional Weber model, two improved models are presented, and the corresponding global optimal candidate is used as the evaluation benchmark for the voting rules. Experiments show that the optimal solutions of the two models are more robust in complex voting networks and can serve as a standard for evaluating voting rules. When the support degree of the voters is the main factor, the Condorcet rule is optimal in most cases; when the validity of the candidate is the main factor, the Approval rule performs best.

Much of my research deals with trying to evaluate the performance of social choice algorithms via simulations, which requires appropriate inputs and quality measures. All three areas offer substantial scope for improvement in the coming years. For concreteness and because of my own limited experience, I focus on the allocation of indivisible goods and on voting, although many of the ideas are more broadly applicable.

By using geometry, a fairly complete analysis of Kemeny's rule (KR) is obtained. It is shown that the Borda Count (BC) always ranks the KR winner above the KR loser, and, conversely, KR always ranks the BC winner above the BC loser. Such KR relationships fail to hold for other positional methods. The geometric reasons why KR enjoys remarkably consistent election rankings as candidates are added or dropped are explained. The power of this KR consistency is demonstrated by comparing KR and BC outcomes. But KR's consistency carries a heavy cost; it requires KR to partially dismiss the crucial "individual rationality of voters" assumption.
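The Borda/Kemeny relationship is easy to check by brute force on small profiles (the five-ballot profile below is invented; the search is factorial in the number of candidates):

```python
from itertools import permutations

def kendall_tau(r1, r2):
    # pairs of candidates that the two rankings order oppositely
    n = len(r1)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if r2.index(r1[i]) > r2.index(r1[j]))

def kemeny_ranking(profile, candidates):
    # brute force over all rankings: minimize total Kendall-tau distance
    return min(permutations(candidates),
               key=lambda r: sum(kendall_tau(r, b) for b in profile))

def borda_scores(profile, candidates):
    m = len(candidates)
    sc = {c: 0 for c in candidates}
    for b in profile:
        for pos, c in enumerate(b):
            sc[c] += m - 1 - pos
    return sc

profile = [('a', 'b', 'c'), ('a', 'b', 'c'), ('b', 'c', 'a'),
           ('c', 'a', 'b'), ('b', 'a', 'c')]
kr = kemeny_ranking(profile, 'abc')
bs = borda_scores(profile, 'abc')
print(kr, bs)   # ('a', 'b', 'c') {'a': 6, 'b': 6, 'c': 3}
# the relationship from the abstract: BC ranks the KR winner above the KR loser
assert bs[kr[0]] > bs[kr[-1]]
```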

The problem of the manipulability of known social choice rules in the case of multiple choice is considered. Several concepts of expanded preferences (preferences over sets of alternatives) are elaborated. As a result of this analysis, ordinal and non-ordinal methods of expanding preferences are defined. The notions of the degree of manipulability are extended to the case under study. Using the results of the theoretical investigation, 22 known social choice rules are studied via computational experiments to reveal their degree of manipulability.
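One common degree-of-manipulability measure is the share of profiles at which at least one voter can gain by misreporting (the Nitzan-Kelly index). The brute-force sketch below applies it to plurality with three candidates and three voters; the rule, tie-breaking, and sizes are illustrative choices, not the 22 rules of the paper:

```python
from itertools import product, permutations

def plurality(profile):
    # plurality winner, ties broken alphabetically
    sc = {}
    for b in profile:
        sc[b[0]] = sc.get(b[0], 0) + 1
    return max(sorted(sc), key=lambda c: sc[c])

def nitzan_kelly(n_voters, cands=('a', 'b', 'c')):
    orders = list(permutations(cands))
    manip = total = 0
    for profile in product(orders, repeat=n_voters):
        total += 1
        w = plurality(profile)
        # is there a voter who truly prefers the outcome of some insincere ballot?
        if any(profile[i].index(plurality(profile[:i] + (lie,) + profile[i + 1:]))
               < profile[i].index(w)
               for i in range(n_voters) for lie in orders):
            manip += 1
    return manip / total

idx = nitzan_kelly(3)
print(round(idx, 4))   # strictly between 0 and 1
```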

In voting theory, analyzing the frequency of an event (e.g. a voting paradox), under some specific but widely used assumptions, is equivalent to computing the exact number of integer solutions in a system of linear constraints. Recently, some algorithms for computing this number have been proposed in the social choice literature by Huang and Chua (Soc Choice Welfare 17:143–155, 2000) and by Gehrlein (Soc Choice Welfare 19:503–512, 2002; Rev Econ Des 9:317–336, 2006). The purpose of this paper is threefold. Firstly, we want to do justice to Eugène Ehrhart, who, more than forty years ago, discovered the theoretical foundations of the above-mentioned algorithms. Secondly, we present some efficient algorithms that have been recently developed by computer scientists, independently from voting theorists. Thirdly, we illustrate the use of these algorithms by providing some original results in voting theory.

Suppose that a vote consists of a linear ranking of alternatives, and that in a certain profile some single pivotal voter v is able to change the outcome of an election from s alone to t alone, by changing her vote from P_v to P′_v. A voting rule F is two-way monotonic if such an effect is only possible when v moves t from below s (according to P_v) to above s (according to P′_v). One-way monotonicity is the strictly weaker requirement forbidding this effect when v makes the opposite switch, by moving s from below t to above t. Two-way monotonicity is very strong: it is equivalent over any domain to strategy-proofness. One-way monotonicity holds for all sensible voting rules, a broad class including the scoring rules, but no Condorcet extension for four or more alternatives is one-way monotonic. These monotonicities have interpretations in terms of strategy-proofness. For a one-way monotonic rule F, each manipulation is paired with a positive response, in which F offers the pivotal voter a strictly better result when she votes sincerely.

In this paper, we consider the Coase theorem in a non-cooperative game framework. In particular, we explore the robustness of the Coase theorem with respect to the final distribution of alienable property rights, which constitutes, as far as we know, a less cultivated field of research. In our framework, in order to reach efficiency, agents have to stipulate binding contracts. In the analysis, we distinguish between permanent and temporary contracts, showing the different implications of the two kinds of contracts with respect to the final attribution of individual rights. More precisely, we show that, with temporary binding contracts and under particular assumptions, the final attribution of individual rights does not converge.

The formal equivalence between social choice and statistical estimation means that criteria used to evaluate estimators can be interpreted as features of voting rules. The robustness of an estimator means, in the context of social choice, insensitivity to departures from majority opinion. In this paper, the authors consider the implications of substituting the median, a robust, high-breakdown estimator, for Borda's mean. The robustness of the median makes the ranking method insensitive to outliers and reflective of majority opinion. Among all methods that satisfy a majority condition, median ranks is the unique one that is monotonic. It is an attractive voting method when the goal is the collective assessment of the merits of alternatives.
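The robustness contrast is visible in two lines: when a majority ranks an alternative first but a bloc ranks it last, the mean rank is dragged by the bloc while the median rank is not. The judge ranks below are invented:

```python
import statistics

# ranks (1 = best) that seven judges assign to two alternatives
ranks_x = [1, 1, 1, 1, 5, 5, 5]   # majority ranks X first, a bloc ranks it last
ranks_y = [2, 2, 2, 2, 2, 2, 2]   # uniformly second

# Borda-style mean rank is dragged by the dissenting bloc (Y looks better) ...
print(statistics.mean(ranks_x), statistics.mean(ranks_y))
# ... while the median rank follows the majority (X looks better)
print(statistics.median(ranks_x), statistics.median(ranks_y))
```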

Condorcet’s Paradox has been formally studied by an amazing number of people in many different contexts for more than two centuries. Peter Fishburn introduced the basic notion of the Paradox to me in 1971 during a course in Social Choice Theory at Pennsylvania State University. My immediate response to seeing the simple example that he presented was that this phenomenon certainly could not be very likely to ever be observed in reality. Peter quickly suggested that I should work on developing some representations for the probability that the Paradox might occur, and very soon thereafter that pursuit began. It is only after 35 years of effort, with a lot of help from Peter, that I now feel that a good answer can be given to the challenge that was presented in that classroom in 1971. Many people have suggested to me over the years that a book like this should be completed, since the source material is spread over such a wide variety of disciplines of academic journals and books that it is very difficult for people to know what has been done, and has not been done, in this area of determining representations for the probability that Condorcet’s Paradox would ever be observed in reality.

Honesty in voting is not always the best policy. This is a book for mathematicians, political scientists, economists and philosophers who want to understand how it is impossible to devise a reasonable voting system in which voters can never gain by submitting a disingenuous ballot. The book requires no prerequisites except a willingness to follow rigorous mathematical arguments.

It is shown how simple geometry can be used to analyze and discover new properties about pairwise and positional voting rules as well as for those rules (e.g., runoffs and Approval Voting) that rely on these methods. The description starts by providing a geometric way to depict profiles, which simplifies the computation of the election outcomes. This geometry is then used to motivate the development of a "profile coordinate system," which evolves into a tool to analyze voting rules. This tool, for instance, completely explains various longstanding "paradoxes," such as why a Condorcet winner need not be elected with certain voting rules. A different geometry is developed to indicate whether certain voting "oddities" can be dismissed or must be taken seriously, and to explain why other mysteries, such as strategic voting and the no-show paradox (where a voter is rewarded by not voting), arise. Still another use of geometry extends McGarvey's Theorem about possible pairwise election rankings to identify the actual tallies that can arise (a result that is needed to analyze supermajority voting). Geometry is also developed to identify all possible positional and Approval Voting election outcomes that are admitted by a given profile; the converse becomes a geometric tool that can be used to discover new election relationships. Finally, it is shown how lessons learned in social choice, such as the seminal Arrow's and Sen's Theorems and the expanding literature about the properties of positional rules, provide insights into difficulties that are experienced by other disciplines.

26 known and new social choice rules are studied via computational experiments to reveal to what extent these rules are manipulable. Four indices of manipulability are considered.

Computer enumeration techniques are used to find the range of weights for weighted scoring rules on three candidates that will maximize Condorcet efficiency for odd numbers of voters, up to 31. Voters’ preference rankings on candidates are generated from a Pólya-Eggenberger urn model. Results suggest that widely held notions regarding the overall superiority of Borda Rule, particularly regarding Condorcet efficiency, are highly dependent on an assumption of independence of voters’ preferences. With relatively low measures of dependence between voters’ preferences, reflecting social homogeneity, plurality rule is more Condorcet efficient than Borda Rule. Results contradict theoretical findings in Van Newenhizen (1992).

In a three-candidate election, a scoring rule λ, λ ∈ [0, 1], assigns 1, λ, and 0 points (respectively) to each first, second, and third place in the individual preference rankings. The Condorcet efficiency of a scoring rule is defined as the conditional probability that this rule selects the winner in accordance with Condorcet criteria (three Condorcet criteria are considered in the paper). We are interested in the following question: what rule λ has the greatest Condorcet efficiency? After recalling the known answer to this question, we investigate the impact of social homogeneity on the optimal value of λ. One of the most salient results we obtain is that the optimality of the Borda rule (λ = 1/2) holds only if the voters act in an independent way.

Consider an election in which each of the n voters casts a vote consisting of a strict preference ranking of the three candidates A, B, and C. In the limit as n → ∞, which scoring rule maximizes, under the assumption of Impartial Anonymous Culture (uniform probability distribution over profiles), the probability that the Condorcet candidate wins the election, given that a Condorcet candidate exists? We produce an analytic solution, which is not the Borda Count. Our result agrees with recent numerical results from two independent studies, and contradicts a published result of Van Newenhizen (1992).
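Limiting probabilities of this kind can be approximated by simulation. The sketch below uses the simpler impartial culture assumption (i.i.d. uniform ballots) rather than IAC, with invented sample sizes, so the estimates are rough:

```python
import random
from itertools import permutations

CANDS = ('a', 'b', 'c')
ORDERS = list(permutations(CANDS))

def condorcet_winner(profile):
    # candidate beating every rival by strict pairwise majority (n odd)
    n = len(profile)
    for c in CANDS:
        if all(2 * sum(b.index(c) < b.index(d) for b in profile) > n
               for d in CANDS if d != c):
            return c
    return None

def scoring_winner(profile, lam):
    # 1, lambda, 0 points; deterministic tie-break by candidate name
    sc = {c: 0.0 for c in CANDS}
    for b in profile:
        sc[b[0]] += 1.0
        sc[b[1]] += lam
    return max(CANDS, key=lambda c: (sc[c], c))

def condorcet_efficiency(lam, n=25, trials=3000, seed=7):
    rng = random.Random(seed)
    hits = cases = 0
    for _ in range(trials):
        profile = [rng.choice(ORDERS) for _ in range(n)]
        cw = condorcet_winner(profile)
        if cw is not None:
            cases += 1
            hits += scoring_winner(profile, lam) == cw
    return hits / cases

borda_eff = condorcet_efficiency(0.5)   # Borda-type rule
plur_eff = condorcet_efficiency(0.0)    # plurality
print(round(borda_eff, 3), round(plur_eff, 3))
```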

With three candidates and an odd number n of voters, let Q(n, λ, p) be the probability that the winning candidate under the point-total rule that assigns 1, λ, and 0 points respectively to each first, second, and third-place vote is the same as the simple majority candidate, given that there exists a simple majority candidate, when each voter independently selects a linear preference order on the candidates by a common probability distribution p on the six linear orders on the candidates. With Q(n, λ) = Q(n, λ, p) when p assigns probability 1/6 to each order, the λ values that maximize Q(n, λ) for small n consist of open intervals in [0, 1]. Using quadrivariate normals, a computational form is developed for the limiting probability Q(λ) = lim_{n→∞} Q(n, λ). The function Q satisfies Q(λ) = Q(1 − λ) for each λ ∈ [0, 1] and is differentiable, with Q(λ) strictly increasing as λ goes from 0 to 1/2. The maximum value is approximately 0.901189. Effects of nonuniform p distributions on Q(n, λ, p) are also discussed.

In this paper we survey a sequence of papers whose primary aim is the generalization of the concept of the median into higher-dimensional settings. While a variety of distinct definitions of the median of a multivariate data set are possible, these definitions have the common property of producing the usual definition when applied to univariate data or a univariate distribution. Some common ideas of equivariance, symmetry, and breakdown are discussed, as well as computational convenience for each definition. The extension of these ideas to directional statistics is also discussed.

What is a monotonicity property? How should such a property be recast, so as to apply to voting rules that allow ties in the outcome? Our original interest was in the second question, as applied to six related properties for voting rules: monotonicity, participation, one-way monotonicity, half-way monotonicity, Maskin monotonicity, and strategy-proofness. This question has been considered for some of these properties: by Peleg and Barberà for monotonicity, by Moulin and by Pérez et al. for participation, and by many authors for strategy-proofness. Our approach, however, is comparative; we examine the behavior of all six properties under three general methods for handling ties: applying a set-extension principle (in particular, Gärdenfors’ sure-thing principle), using a tie-breaking agenda to break ties, and rephrasing properties via the “t-a-t” approach, so that only two alternatives are considered at a time. In attempting to explain the patterns of similarities and differences we discovered, we found ourselves obliged to confront the issue of what it is, exactly, that identifies these properties as a class. We propose a distinction between two such classes: the “tame” monotonicity properties (which include participation, half-way monotonicity, and strategy-proofness) and the strictly broader class of “normal” monotonicity properties (which include monotonicity and one-way monotonicity, but not Maskin monotonicity). We explain why the tie-breaking agenda, t-a-t, and Gärdenfors methods are equivalent for tame monotonicities, and how, for properties that are normal but not tame, set-extension methods can fail to be equivalent to the other two (and may fail to make sense at all).

The Fermat-Weber location problem requires finding a point in ℝ^N that minimizes the sum of weighted Euclidean distances to m given points. A one-point iterative method was first introduced by Weiszfeld in 1937 to solve this problem. Since then several research articles have been published on the method and generalizations thereof. Global convergence of Weiszfeld's algorithm was proven in a seminal paper by Kuhn in 1973. However, since the m given points are singular points of the iteration functions, convergence is conditional on none of the iterates coinciding with one of the given points. In addressing this problem, Kuhn concluded that whenever the m given points are not collinear, Weiszfeld's algorithm will converge to the unique optimal solution except for a denumerable set of starting points. As late as 1989, Chandrasekaran and Tamir demonstrated with counter-examples that convergence may not occur for continuous sets of starting points when the given points are contained in an affine subspace of ℝ^N. We resolve this open question by proving that Weiszfeld's algorithm converges to the unique optimal solution for all but a denumerable set of starting points if, and only if, the convex hull of the given points is of dimension N.
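The iteration itself is short; below is an unweighted sketch including a guard for the singular case in which an iterate lands on a given point. The triangle example is invented; for an equilateral triangle the Fermat-Weber point is the centroid:

```python
import math

def weiszfeld(points, start, tol=1e-10, max_iter=1000):
    # Weiszfeld iteration: replace x by the average of the given points
    # weighted by the reciprocals of their distances from x
    x = list(start)
    for _ in range(max_iter):
        num = [0.0] * len(x)
        den = 0.0
        for p in points:
            d = math.dist(x, p)
            if d < tol:         # iterate coincides with a given point:
                return list(p)  # the singular case Kuhn had to handle
            for i in range(len(x)):
                num[i] += p[i] / d
            den += 1.0 / d
        nxt = [v / den for v in num]
        if math.dist(nxt, x) < tol:
            return nxt
        x = nxt
    return x

# equilateral triangle: the Fermat-Weber point is the centroid
tri = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
fw = weiszfeld(tri, start=(0.9, 0.9))
print(fw)   # close to (0.5, sqrt(3)/6), about (0.5, 0.2887)
```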

This paper explores, for several classes of social choice rules, the distribution of the number of profiles at which a rule can be strategically manipulated. In other words, we do comparative social choice, looking for information about how social choice rules compare in their vulnerability to strategic misrepresentation of preferences.

A tournament is any complete asymmetric relation over a finite set A of outcomes describing pairwise comparisons. A choice correspondence assigns to every tournament on A a subset of winners. Miller's uncovered set is an example for which we propose an axiomatic characterization. The set of Copeland winners (outcomes with maximal scores) is another example; it is a subset of the uncovered set: we note that it can be a dominated subset. A third example is derived from the sophisticated agenda algorithm; we argue that it is a better choice correspondence than the Copeland set.
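The correspondences can be compared directly on a small tournament; the sketch below computes the Copeland set and Miller's uncovered set for an invented four-outcome tournament (here the two sets happen to coincide):

```python
def copeland_winners(A, beats):
    # Copeland set: outcomes with the maximal number of pairwise wins
    score = {a: sum(beats(a, b) for b in A if b != a) for a in A}
    top = max(score.values())
    return {a for a in A if score[a] == top}

def uncovered_set(A, beats):
    # Miller's uncovered set: x is uncovered iff it reaches every other
    # outcome in at most two steps of the beating relation
    def reaches2(x, y):
        return beats(x, y) or any(beats(x, z) and beats(z, y) for z in A)
    return {x for x in A if all(x == y or reaches2(x, y) for y in A)}

# invented tournament: a -> b -> c -> a cycle, and d loses to everyone
edges = {('a', 'b'), ('b', 'c'), ('c', 'a'), ('a', 'd'), ('b', 'd'), ('c', 'd')}
beats = lambda x, y: (x, y) in edges
A = 'abcd'
cop = copeland_winners(A, beats)
unc = uncovered_set(A, beats)
print(cop, unc)   # both equal {'a', 'b', 'c'} as sets
```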

This paper investigates one possible weakening of the (too demanding) assumptions of the Gibbard–Satterthwaite theorem. Namely, we deal with a class of voting schemes where the domain of possible preference preorderings of any agent is limited to single-peaked preferences, and the message that an agent sends to the central authority is simply its peak, i.e. his most preferred alternative. In this context we show that strategic considerations justify the central role given to the Condorcet procedure, which amounts to electing the median peak: all strategy-proof, anonymous, and efficient voting schemes can be derived from the Condorcet procedure by simply adding some fixed ballots to the agents' ballots (with the only restriction that the number of fixed ballots is strictly less than the number of agents). Therefore, as long as the alternatives can be ordered along the real line with the preferences of the agents being single-peaked, it makes little sense to object to the Condorcet procedure, or to one of the variants displayed in our characterization theorem. An obvious topic for further research would be to investigate reasonable restrictions of the domain of admissible preferences such that a characterization of strategy-proof voting schemes can be found. The single-peaked context is obviously the simplest one, allowing very complete characterizations. When we go on to two-dimensional spaces of alternatives, the concept of single-peakedness itself does not extend directly, and a generalization of our one-dimensional results seems to us a difficult but motivating goal.
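The characterization described above is concrete enough to sketch: the rule elects a median of the reported peaks after adding a fixed list of "phantom" ballots. A minimal sketch (names illustrative; the upper median is used when the combined list has even length):

```python
def phantom_median(peaks, phantoms=()):
    """Moulin-style strategy-proof rule for single-peaked preferences:
    elect a median of the voters' peaks together with fixed phantom
    ballots (strictly fewer phantoms than voters).  With no phantoms
    and an odd electorate this is the Condorcet median-peak rule."""
    assert len(phantoms) < len(peaks)
    combined = sorted(list(peaks) + list(phantoms))
    return combined[len(combined) // 2]     # upper median if even length
```

For peaks [1, 5, 9] the rule with no phantoms elects 5; adding two phantom ballots at 0 shifts the elected peak to 1, showing how the fixed ballots parameterize the family of strategy-proof rules.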

The General Fermat Problem asks for the minimum of the weighted sum of distances from m points in n-space. Dozens of papers have been written on variants of this problem and most of them have merely reproduced known results. This note calls attention to the work of Weiszfeld in 1937, who may have been the first to propose an iterative algorithm. Although the same algorithm has been rediscovered at least three times, there seems to be no completely correct treatment of its properties in the literature. Such a treatment, including a proof of convergence, is the sole object of this note. Other aspects of the problem are given scant attention.

For solving the Euclidean distance Weber problem Weiszfeld proposed an iterative method. This method can also be applied to generalized Weber problems in Banach spaces. Examples for generalized Weber problems are: minimal surfaces with obstacles, Fermat's principle in geometrical optics and brachistochrones with obstacles.

We propose a simple Pólya-variety urn model for calculating paradox-of-voting probabilities. The model contains a homogeneity parameter, and for specific values of this parameter the model reduces to cases previously discussed in the literature. We derive a Dirichlet family of distributions for describing the assignment of preference profiles in large committees, and we show how the homogeneity parameter relates to measures of similarity among voters, suggested in prior studies.

We show that a voting scheme suggested by Lewis Carroll can be impractical in that it can be computationally prohibitive (specifically, NP-hard) to determine whether any particular candidate has won an election. We also suggest a class of impracticality theorems which say that any fair voting scheme must, in the worst-case, require excessive computation to determine a winner.

Classical approaches to fitting and aggregation problems, especially in cluster analysis, social choice theory, and paired-comparison methods, consist in the minimization of a remoteness function between relational data and a relational model. The notion of median, with its algebraic, metric, geometrical, and statistical aspects, allows a unified treatment of many of these basic problems. Properties of median procedures are organized along four directions: stabilities and axiomatic characterizations; Arrow-like properties; combinatorial properties; and effective computational possibilities. Finally, interesting mathematical problems related to the notion of median are surveyed.
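When the relational data are strict rankings and the remoteness function is total Kendall tau distance, the median procedure described here is the Kemeny rule. A brute-force sketch (exponential in the number of candidates, so for illustration only; names are illustrative):

```python
from itertools import combinations, permutations

def kendall_tau(r1, r2):
    """Number of pairs of alternatives on which two rankings disagree."""
    pos1 = {c: i for i, c in enumerate(r1)}
    pos2 = {c: i for i, c in enumerate(r2)}
    return sum(1 for a, b in combinations(r1, 2)
               if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0)

def median_rankings(profile):
    """Median rankings in the metric sense: the orderings minimizing
    total Kendall tau distance to the profile (the Kemeny rule).
    Brute force over all permutations of the candidates."""
    best_cost, best = None, []
    for order in permutations(profile[0]):
        cost = sum(kendall_tau(order, ballot) for ballot in profile)
        if best_cost is None or cost < best_cost:
            best_cost, best = cost, [order]
        elif cost == best_cost:
            best.append(order)
    return best
```

For the profile with two voters reporting a > b > c and one reporting b > c > a, the unique median ranking is a > b > c.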

We show how powerful algorithms recently developed for counting lattice points and computing volumes of convex polyhedra can be used to compute probabilities of a wide variety of events of interest in social choice theory. Several illustrative examples are given.

Smith [J.H. Smith, Aggregation of preferences with variable electorate, Econometrica 41 (1973) 1027–1041] and Young [H.P. Young, A note on preference aggregation, Econometrica 42 (1974) 1129–1131; H.P. Young, Social choice scoring functions, SIAM J. Appl. Math. 28 (1975) 824–838] characterized scoring rules via four axioms: consistency, continuity, anonymity, and neutrality. In their context a ballot consists of a strict ranking of alternatives, and an election outcome is either a set of (winning) alternatives (Young) or a weak ordering of alternatives (Smith). Many rules fail to fit this context, yet intuitively satisfy one’s notion of a generalized scoring rule; this very broad class GSR includes the Kemeny rule, approval voting, and certain grading systems. We show that GSR is identical with the class MPR of mean proximity rules; loosely, rules in MPR are those for which the “average voter” determines the outcome. The techniques in the proof allow us to make some surprisingly direct comparisons between rules (for example, between Kemeny and Borda) that might initially seem to be of completely different sorts. The abstract anonymous voting rules provide the context for GSR, which is of necessity too general to admit a neutrality axiom. A natural question arises: “What happens to the Smith and Young characterizations in the absence of neutrality?” We discuss one answer in the form of a characterization of the rational mean neat voting rules (a class closely related to GSR) as those that are consistent and connected. Connectedness is a strong form of continuity that implies a discrete analogue to the Intermediate Value Theorem.

A mean proximity rule is a voting rule having a mean proximity representation in Euclidean space. Legal ballots are represented as vectors that form the representing polytope. An output plot function determines a location for each possible election output in the same space, and these locations decompose the polytope into proximity regions according to which output is closest. The election outcome is then determined by which region(s) contain the mean position of all ballots cast. Mean neat rules are obtained by relaxing the requirement that the regions be determined by proximity, insisting only that they be neatly separable by a hyperplane. If each of these hyperplanes contains a dense set of rational points (vectors with all rational components), the mean neat voting rule is said to be rational. The aim of this article is to prove that consistency and connectedness are necessary and sufficient conditions for mean neat rationality of any voting rule that is anonymous. Connectedness can be viewed as a strong form of continuity, with an intuitive content related to the Intermediate Value Theorem (or to a discrete analogue of this theorem). The proof relies on a recent result in convexity theory [D. Cervone, W.S. Zwicker, Convex decompositions, J. Convex Anal. 2008 (in press)] and suggests a conjecture: if we relax connectedness to continuity, the class so characterized is that of the mean neat voting rules. This latter class properly contains all intuitive scoring rules.
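As a concrete instance of such a representation, the Borda count can be read off from the mean ballot position: embed each ballot as its vector of Borda scores and order the candidates by their mean coordinates. A minimal sketch (tie handling omitted; names are illustrative):

```python
import numpy as np

def borda_by_mean(profile, candidates):
    """The Borda count read off from the mean ballot: each ballot is
    embedded as its vector of Borda scores, and the outcome orders
    candidates by decreasing mean coordinate (ties not handled)."""
    m = len(candidates)
    vectors = [[m - 1 - ballot.index(c) for c in candidates]
               for ballot in profile]
    mean = np.mean(vectors, axis=0)          # the "average voter"
    return [c for _, c in sorted(zip(-mean, candidates))]
```

For the three ballots a > b > c, b > a > c, and a > c > b the mean score vector is (5/3, 1, 1/3), so the outcome is the ranking a > b > c.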

A procedure is developed to obtain representations for the probability of election outcomes with the Impartial Anonymous Culture Condition and the Maximal Culture Condition. The procedure is based upon a process of performing arithmetic with integers, while maintaining absolute precision with very large integer numbers. The procedure is then used to develop probability representations for a number of different voting outcomes, which have to date been considered to be intractable to obtain with the use of standard algebraic techniques.

This paper extends the work of Gehrlein and Fishburn (1976) and Gehrlein (1982) by providing a general theorem relating to the analytical representation of the probability of an event in a given space of profiles. It applies to any event characterized by a set of linear inequalities, regardless of whether the coefficients defining the inequalities are integers or fractions. An algorithm for the probability calculation is also suggested. This methodology is used to provide a complete characterization of the vulnerability of the four scoring rules studied in Lepelley and Mbih (1994) to manipulation by coalitions in a 3-alternative n-agent society.

All social choice functions are manipulable when more than two alternatives are available. I evaluate the manipulability of the Borda count, plurality rule, minimax set, and uncovered set. Four measures of manipulability are defined and computed stochastically for small numbers of agents and alternatives.
Social choice rules derived from the minimax and uncovered sets are found to be relatively immune to manipulation whether a sole manipulating agent has complete knowledge or absolutely no knowledge of the preferences of the others. The Borda rule is especially manipulable if the manipulating agent has complete knowledge of the others.

The voting situations at which the Borda rule or the Copeland method can be manipulated by a single voter or a coalition of voters in three-alternative elections are characterized. From these characterizations, we derive (when possible) some analytical representations measuring the vulnerability of these rules to strategic misrepresentation of preferences. Our results suggest that the Borda rule is significantly more vulnerable to strategic manipulation than the Copeland method.

Consider an election in which each of the n voters casts a vote consisting of a strict preference ranking of the three candidates A, B, and C. In the limit as n → ∞, which scoring rule maximizes, under the assumption of Impartial Anonymous Culture (uniform probability distribution over profiles), the probability that the Condorcet candidate wins the election, given that a Condorcet candidate exists? We produce an analytic solution, which is not the Borda Count. Our result agrees with recent numerical results from two independent studies, and contradicts a published result of Van Newenhizen (Economic Theory 2, 69–83 (1992)).

The Median Voting Rule (MVR) has been proposed on the argument that it will be less manipulable than the Borda rule. We find that plurality rule has only a slightly greater probability of manipulability than MVR, and that the Copeland rule has a smaller probability of manipulability than MVR. In addition, the Borda rule, plurality rule, and the Copeland rule all have both a greater probability of producing a decisive result and a greater strict Condorcet efficiency than MVR. Based on all these characteristics, MVR does not seem to be a viable replacement for either plurality rule or the Copeland rule.

Variations of IAC are introduced and simulated. A uniformly distributed point P = (X1, X2, …, Xn+1) in a simplex S is generated by a map (ε1, ε2, …, εn) → P from the unit cube to S (surjective, with bijective restriction to interiors), with the εi's rectangular and i.i.d. on [0, 1]. The fraction xyz of the electorate with preference x > y > z is a sum of Xi's. The variations allow different correlations (e.g. ρ(xyz, xzy) ≠ ρ(xyz, zyx)), whereas under IAC they are all −0.2. Simulations of two such variations give smaller Condorcet paradox probability than IAC. This is explained heuristically with a graphic “pictogram” representation of the profile.
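The simplex construction described here is easy to simulate: the spacings of sorted uniform variates give a uniform point in the simplex. A sketch for three alternatives under plain IAC, estimating the Condorcet paradox probability, whose known large-electorate limit under IAC is 1/16 ≈ 0.0625 (names are illustrative):

```python
import random

# The six strict orders on {a, b, c}; "abc" means a > b > c.
ORDERS = ["abc", "acb", "bac", "bca", "cab", "cba"]

def random_iac_shares():
    """Uniform point in the 5-simplex via spacings of sorted uniforms:
    the share of the electorate holding each of the six orders."""
    cuts = sorted(random.random() for _ in range(5))
    pts = [0.0] + cuts + [1.0]
    return {o: pts[i + 1] - pts[i] for i, o in enumerate(ORDERS)}

def pref_share(shares, x, y):
    """Share of the electorate preferring x to y."""
    return sum(s for o, s in shares.items() if o.index(x) < o.index(y))

def has_condorcet_winner(shares):
    return any(all(pref_share(shares, x, y) > 0.5 for y in "abc" if y != x)
               for x in "abc")

def paradox_probability(trials=20000, seed=1):
    """Monte Carlo estimate of P(no Condorcet winner) under IAC."""
    random.seed(seed)
    hits = sum(not has_condorcet_winner(random_iac_shares())
               for _ in range(trials))
    return hits / trials
```

With a few tens of thousands of trials the estimate lands near 0.0625; the correlated variations discussed in the abstract would replace `random_iac_shares` with a different map onto the simplex.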

Electrical engineers employ some methods of linear algebra, derived from Homology Theory, to decompose the flow of current in a complex circuit into two components. The same decomposition can be applied to a ‘circuit’ containing nodes representing the candidates in a multicandidate election, connected by ‘wires’ carrying flows of net voter preference. In this case, the cyclic component measures the tendency towards a voters' paradox, while the cocyclic component measures the spreads in the Borda counts. When the cocyclic component is stronger, it masks the cycles in the cyclic component, and a voters' paradox is avoided; we call this ‘Borda Dominance’. Methods based on this decomposition provide a host of necessary and sufficient conditions for various degrees of transitivity of majority preference. Sen's well-known sufficiency theorem, together with some stronger theorems, is shown to depend upon a strong ‘double’ form of the masking phenomenon. This mathematically natural generalization of Sen's key hypothesis is revealed to be equivalent to a new, quantitative form of transitivity. Because the approach provides fresh insight into the underlying source of the voters' paradox, it appears to represent a promising new tool in social choice theory, with applications beyond those in the current paper.
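For three candidates the decomposition has a one-line form: the cyclic component is the average of the net flows around the cycle a → b → c → a, and the cocyclic residue sums to zero around the cycle (it derives from node potentials, which here track the spreads in the Borda counts). A minimal sketch under these assumptions:

```python
def decompose_three_cycle(f_ab, f_bc, f_ca):
    """Split net preference flows around the cycle a -> b -> c -> a
    into a cyclic component (the same value on every edge) and a
    cocyclic residue (which sums to zero around the cycle and comes
    from differences of node potentials)."""
    cyclic = (f_ab + f_bc + f_ca) / 3.0
    cocyclic = (f_ab - cyclic, f_bc - cyclic, f_ca - cyclic)
    return cyclic, cocyclic
```

For flows (3, 1, −1) the cyclic part is 1 with cocyclic residue (2, 0, −2): majority preference is transitive (a over b over c) even though the cyclic component is nonzero, illustrating the masking effect the abstract calls Borda Dominance.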

J. C. Gower, Algorithm AS 78: The mediancentre, Journal of the Royal Statistical Society.

F. Aleskerov and E. Kurbanov, A degree of manipulability of known social choice procedures, in Studies in Economic Theory 8, Current Trends in Economics: Theory and Applications, S. Aliprantis, A. Alkan, and N. Yannelis (eds.), Springer-Verlag, Berlin, 13–28 (1999).

D. P. Cervone and W. S. Zwicker, The mean and median as equilibria of physical systems: a web-based simulation, working paper (2010).

P. Gärdenfors, On definitions of manipulation of social choice functions, in Aggregation and Revelation of Preferences, J.-J. Laffont (ed.), North-Holland, Amsterdam, 29–36 (1979).

S. J. Brams and P. Fishburn, Paradoxes of preferential voting, Mathematics Magazine 56, 207–214 (1983).

G. W. Bassett Jr. and R. Persky, Robust voting, Public Choice 99.

S. Barberà, W. Bossert, and P. K. Pattanaik, Ranking sets of objects, in Handbook of Utility Theory, Volume II: Extensions.