Book

Essai sur l’Application de L’Analyse à la Probabilité des Décisions Rendues à la Pluralité des Voix. Paris

Authors: Marquis de Condorcet
... Researchers have long demonstrated so-called "wisdom of the crowd" effects, where the collective judgment of a group is more accurate than the judgments of individual experts or the individual group members themselves (Condorcet, 1785; Galton, 1907; Grofman, Owen, & Feld, 1983; Surowiecki, 2005). Yet recently, the impetus for crowd wisdom research has been rejuvenated as new digitally-enabled means for judgment aggregation have given rise to modern applications such as online prediction markets (Wolfers & Zitzewitz, 2004; Arrow et al., 2008), crowdsourcing (Howe, 2006), and digital democracy (Simon, Bass, Boelman, & Mulgan, 2017; Morgan, 2014). ...
... The earliest results on wisdom of the crowd effects in collective estimation tasks assumed that individuals' judgments are made independently, meaning that their errors are uncorrelated and cancel out in aggregate (Condorcet, 1785). However, this independence assumption often goes unmet in the real world because people communicate with or otherwise influence one another. ...
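The independence argument in this excerpt lends itself to a tiny simulation. The sketch below (Python, with invented noise parameters) shows that when individual errors are independent and zero-mean, the crowd mean homes in on the truth as the crowd grows; it illustrates the excerpt's claim and does not reproduce any cited study.

```python
# Minimal sketch of the classic independence argument: unbiased, independent
# errors cancel in aggregate, so the crowd mean approaches the truth.
import random

random.seed(1)
truth = 100.0

for n in [1, 10, 100, 10_000]:
    # each judge reports the truth plus independent zero-mean noise
    estimates = [truth + random.gauss(0, 25) for _ in range(n)]
    crowd_mean = sum(estimates) / n
    print(f"n={n:>6}: crowd mean = {crowd_mean:7.2f}, error = {abs(crowd_mean - truth):6.2f}")
```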
Conference Paper
Full-text available
Digitally-enabled means for judgment aggregation have renewed interest in "wisdom of the crowd" effects and kick-started collective intelligence design as an emerging field in the cognitive and computational sciences. A keenly debated question here is whether social influence helps or hinders collective accuracy on estimation tasks, with recently introduced network theories offering a reconciliation of seemingly contradictory past results. Yet, despite a growing body of literature linking social network structure and the accuracy of collective beliefs, strategies for exploiting network structure to harness crowd wisdom are under-explored. In this paper, we introduce a potential new tool for collective intelligence design informed by such network theories: rewiring algorithms. We provide a proof of concept through agent-based modelling and simulation, showing that rewiring algorithms that dynamically manipulate the structure of communicating social networks can increase the accuracy of collective estimations in the absence of knowledge of the ground truth.
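As a rough illustration of the rewiring idea sketched in this abstract, the following agent-based toy model lets agents average their neighbours' estimates while a rewiring step reshapes the network using only the agents' current estimates (no ground truth). The particular heuristic used here, pointing each agent at the peers closest to the current crowd median, is an assumption made for illustration and is not necessarily one of the paper's algorithms.

```python
# Agent-based sketch: DeGroot-style averaging plus a rewiring step that uses
# only the agents' current estimates, never the ground truth.
import random
import statistics

random.seed(42)
N, K, ROUNDS, TRUTH = 50, 4, 20, 100.0

estimates = [TRUTH + random.gauss(0, 30) for _ in range(N)]
# each agent starts by listening to K random peers
neighbours = [random.sample([j for j in range(N) if j != i], K) for i in range(N)]

def rewire(estimates, k):
    """Point every agent at the k peers closest to the crowd median (illustrative heuristic)."""
    median = statistics.median(estimates)
    ranked = sorted(range(len(estimates)), key=lambda j: abs(estimates[j] - median))
    return [[j for j in ranked if j != i][:k] for i in range(len(estimates))]

for _ in range(ROUNDS):
    # social influence step: each agent averages its own estimate with its neighbours'
    estimates = [0.5 * estimates[i] + 0.5 * statistics.mean(estimates[j] for j in neighbours[i])
                 for i in range(N)]
    # rewiring step: reshape the network from current estimates only
    neighbours = rewire(estimates, K)

print("collective error:", abs(statistics.mean(estimates) - TRUTH))
```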
... 2. The impossibility theorem for aggregating preferences. 2.1. The problem of cyclical preferences. Kenneth Arrow's (1951) basic idea in constructing the impossibility theorem arose from the social choice condition (convention), which is violated by Condorcet's voting paradox. According to the Marquis de Condorcet (1785), in calculating the probability of correct decisions taken by majority, it appears that the voters in an electoral process may make a wrong choice. This is also known as the problem of the inconsistency of Public Choice, which leads to erroneous electoral results at the expense of the other members of society. ...
Article
Full-text available
According to the general possibility theorem for social welfare functions, a function that maximizes social welfare does not exist and cannot exist, and this is final. The main idea of the article is not to disprove the theorem, as it is mathematically indisputable. The aim is to show that by changing its field of definition, namely the voting system, the impossibility theorem no longer has the same power. For this purpose, Gödel's incompleteness theorem is invoked, which weakens the voting system of the impossibility theorem.
... It is not possible to distinguish the distances between rankings. Condorcet (1785) proposed the simple majority rule method, whereby x_i should be the winner if most of the voters prefer x_i to x_j. Kemeny and Snell (1962) proposed an axiomatic approach by minimizing the deviation of individual rankings based on the distance between two complete rankings. ...
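For readers unfamiliar with the Kemeny–Snell idea mentioned in this excerpt, the sketch below (a plain brute-force reading, feasible only for a handful of alternatives) measures the distance between two complete rankings by counting pairwise disagreements and picks the ranking that minimizes total distance to the profile; the alternative labels x1, x2, x3 are illustrative.

```python
# Kemeny-Snell style consensus: count pairwise disagreements between rankings,
# then pick the ranking minimising the total distance to the voter profile.
from itertools import combinations, permutations

def kendall_distance(r1, r2):
    """Number of alternative pairs ordered differently by the two rankings."""
    pos1 = {x: i for i, x in enumerate(r1)}
    pos2 = {x: i for i, x in enumerate(r2)}
    return sum(1 for a, b in combinations(r1, 2)
               if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0)

def kemeny_ranking(profile):
    """Brute-force consensus ranking (only feasible for a few alternatives)."""
    alts = profile[0]
    return min(permutations(alts),
               key=lambda r: sum(kendall_distance(list(r), v) for v in profile))

profile = [["x1", "x2", "x3"], ["x1", "x3", "x2"], ["x2", "x1", "x3"]]
print(kemeny_ranking(profile))   # ('x1', 'x2', 'x3') for this profile
```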
Chapter
In this chapter, we address applying qualitative meta-synthesis and conducting qualitative modeling to extract knowledge from data and information during the problem-solving process. Two situations are considered. One refers to contexts with a small-scale set of available texts. Under this context, we illustrate the application of two technologies, CorMap and iView, for qualitative meta-synthesis to generate ideas or assumptions for further verification and validation. The other refers to contexts with open and complex systems problems, such as various societal problems highlighted and discussed in the online media. We describe qualitative modeling by outlining the complex problem-evolving process and provide two approaches for storyline generation. For both situations, a visualized perspective on the concerned problems, as reflected by the texts, is emphasized. Keywords: Meta-synthesis, Problem structuring, CorMap, iView, Storyline, Risk map, Wicked problems
... Nonetheless, previous work on the wisdom of the crowd shows that even when individuals have strong individual biases of an ideological or other nature, as long as the average individual's assessment is better than random, the aggregation of judgments produces an accurate collective assessment. This work assumes that individuals in a crowd cast independent votes [16, 17], and it suggests that while individual judgements may not be very accurate, their average often closely approximates the truth [7, 18-20]. Recent experimental studies further show that when individuals do not make true-or-false decisions independently but are influenced by the decisions of those who came before them - as they would be on social media - individuals' accuracy further improves [21-25]. ...
Article
Full-text available
Because fact-checking takes time, verdicts are usually reached after a message has gone viral and interventions can have only limited effect. A new approach recently proposed in scholarship and piloted on online platforms is to harness the wisdom of the crowd by enabling recipients of an online message to attach veracity assessments to it. The intention is to allow poor initial crowd reception to temper belief in and further spread of misinformation. We study this approach by letting 4000 subjects in 80 experimental bipartisan communities sequentially rate the veracity of informational messages. We find that in well-mixed communities, the public display of earlier veracity ratings indeed enhances the correct classification of true and false messages by subsequent users. However, crowd intelligence backfires when false information is sequentially rated in ideologically segregated communities. This happens because early raters’ ideological bias, which is aligned with a message, influences later raters’ assessments away from the truth. These results suggest that network segregation poses an important problem for community misinformation detection systems that must be accounted for in the design of such systems.
Article
Full-text available
This paper explores the relationship between two classic social evaluation procedures: the Borda count, and (an extension of) the Condorcet criterion. We provide a straightforward way of identifying and comparing those evaluation protocols, dispensing with the transitivity of individual preferences. Our approach uses individual pairwise comparisons of alternatives as informational inputs, with complete social orderings as informational outputs. We show that, keeping Arrow’s framework but weakening the property of independence of irrelevant alternatives to independence of separate pairs (the evaluation of each alternative only depends on how people compare this alternative with each other), opens the door to Borda and Condorcet evaluation functions. The key difference between these two protocols is the type of monotonicity assumed.
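Both protocols can be read off the same pairwise comparison data, which a small sketch makes concrete: the Borda score of x is the total number of (voter, opponent) pairs in which x is preferred, while a Condorcet-style evaluation looks only at which alternative wins each pairwise majority contest. The Copeland-style win count below is just one simple Condorcet-consistent stand-in, not the paper's specific evaluation function, and the five-voter profile is the classic example on which the two protocols disagree.

```python
# Borda and a Condorcet-style evaluation computed from the same pairwise data.
voters = [["a", "b", "c"]] * 3 + [["b", "c", "a"]] * 2
alts = ["a", "b", "c"]

# prefer[x][y] = number of voters ranking x above y
prefer = {x: {y: 0 for y in alts if y != x} for x in alts}
for ranking in voters:
    for i, x in enumerate(ranking):
        for y in ranking[i + 1:]:
            prefer[x][y] += 1

borda = {x: sum(prefer[x].values()) for x in alts}
copeland = {x: sum(1 for y in alts if y != x and prefer[x][y] > prefer[y][x]) for x in alts}
print("Borda scores: ", borda)     # {'a': 6, 'b': 7, 'c': 2}  -> Borda ranks b first
print("Pairwise wins:", copeland)  # {'a': 2, 'b': 1, 'c': 0}  -> a is the Condorcet winner
```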
Article
Full-text available
This paper studies a committee’s competency in making a correct judgement. Specifically, we examine how the committee’s competency compares with the average, median, lowest, and highest competencies of individual members. We propose novel measures for these comparisons and demonstrate that the lower and upper bounds of each committee member’s competency have distinct and significant effects on the committee’s overall competency. Furthermore, our research reveals an interesting relationship between the committee’s competency and the distribution of member competencies. We find that as the number of members with competencies higher than 1/2 increases, the likelihood of the committee’s competency surpassing that of individual members also increases. Conversely, when more members possess competencies lower than 1/2, the likelihood of the committee’s competency being lower than that of individual members also rises. To support this observation, we present theoretical findings from a comparison of the committee’s competency with the minimum and maximum competencies of its members.
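In the standard independent-vote setting this kind of comparison assumes, a committee's competency under simple majority can be computed directly from its members' individual competencies. The sketch below does this by brute-force enumeration with invented competency values; it illustrates the quantities being compared, not the paper's specific measures.

```python
# Probability that a simple majority of independent voters is correct,
# compared with the members' average, minimum, and maximum competencies.
from itertools import product

def committee_competency(p):
    """Probability that more than half of independent voters vote correctly."""
    n = len(p)
    total = 0.0
    for outcome in product([0, 1], repeat=n):        # 1 = member votes correctly
        if sum(outcome) > n / 2:
            prob = 1.0
            for vote, pi in zip(outcome, p):
                prob *= pi if vote else (1 - pi)
            total += prob
    return total

members = [0.6, 0.55, 0.7, 0.65, 0.45]               # hypothetical competencies
print("committee:", round(committee_competency(members), 3),
      "| average:", sum(members) / len(members),
      "| min/max:", min(members), max(members))
```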
Article
Full-text available
Does pre-voting group deliberation improve majority outcomes? To address this question, we develop a probabilistic model of opinion formation and deliberation. Two new jury theorems, one pre-deliberation and one post-deliberation, suggest that deliberation is beneficial. Successful deliberation mitigates three voting failures: (1) overcounting widespread evidence, (2) neglecting evidential inequality, and (3) neglecting evidential complementarity. Formal results and simulations confirm this. But we identify four systematic exceptions where deliberation reduces majority competence, always by increasing Failure 1. Our analysis recommends deliberation that is ‘participatory’, ‘neutral’, but not necessarily ‘equal’, i.e., that involves substantive sharing, privileges no evidences, but might privilege some persons.
Article
Full-text available
The article concerns the transformations of the status and function of knowledge in modern, functionally differentiated society. In particular, the author considers the relationship between transformations of knowledge (its function and character) and the populist tendencies in the politics of recent years. The author formulates three hypotheses concerning the possible link between knowledge and the rise of populism. Their basis is a more general diagnosis of the contemporary era drawn from the works of Niklas Luhmann and an interpretation of his concept of social inclusion. As the author concludes, the modern era is an arena of inclusion on an unprecedented scale, and this process changes the forms of distribution of knowledge and its character. According to one of the presented hypotheses, these transformations may affect citizens' self-understanding and political communication.
Article
Full-text available
Work on AI ethics often calls for AI systems to employ social choice ethics, in which the values of the AI are matched to the aggregate values of society. Such work includes the concepts of bottom-up ethics, coherent extrapolated volition, human compatibility, and value alignment. This paper describes a major challenge that has previously gone overlooked: the potential for aggregate societal values to be manipulated in ways that bias the values held by the AI systems. The paper uses a “red teaming” approach to identify the various ways in which AI social choice systems can be manipulated. Potential manipulations include redefining which individuals count as members of society, altering the values that individuals hold, and changing how individual values are aggregated into an overall social choice. Experience from human society, especially democratic government, shows that manipulations often occur, such as in voter suppression, disinformation, gerrymandering, sham elections, and various forms of genocide. Similar manipulations could also affect AI social choice systems, as could other means such as adversarial input and the social engineering of AI system designers. In some cases, AI social choice manipulation could have catastrophic results. The design and governance of AI social choice systems needs a separate ethical standard to address manipulations, including to distinguish between good and bad manipulations; such a standard affects the nature of aggregate societal values and therefore cannot be derived from aggregate societal values. Alternatively, designers of AI systems could use a non-social choice ethical framework.
Article
Full-text available
Democracy is upheld through the principle of majority rule. To validate the application of democracy, it is imperative to assess the sincerity of voter decisions. When voter sincerity is compromised, manipulation may occur, thereby undermining the legitimacy of democratic processes. This paper presents a general version of a symmetric dichotomous choice model. Using simple majority rule, we show that when a voter receives one or more private signals, sincere voting is an equilibrium behavior. A slight change to this basic model may create an incentive to vote insincerely. We show that even in a more restricted model where every voter receives only one private signal whose level of precision is the same for all the voters but depends on the state of nature, voters may have an incentive to vote insincerely.
Article
Full-text available
There are numerous proposals for Group Decision-Making (GDM) inspired by the ELECTRE multiple criteria decision approach. These proposals capitalize on ELECTRE's resemblance to certain voting systems and its ability to navigate veto situations. However, while ELECTRE-based methods have commendable features for establishing the credibility degree of the predicate “x is collectively considered at least as good as y”, they do not address three relevant issues: (1) the reinforced preference in favor of x exhibited by certain members of the group; (2) the strength of the coalition of Decision-Makers (DMs) who favor y over x; and (3) the effects of preference dependence (complementarity, redundancy, antagonism) among different DMs. This paper addresses group ranking problems within scenarios where a group is under the control of a special powerful actor, called a “Supra-Decision Maker”, or when a group adheres to a predetermined system of rules agreed upon by its members. Unlike other ELECTRE-based methods for GDM, this proposal comprehensively addresses the issues (1), (2) and (3) to determine the credibility degree of the collective outranking predicate. This determination can be utilized to derive a collective ranking or another form of recommendation in GDM. This proposal is expected to excel in a collaborative organizational environment where group members express genuine judgments, devoid of malicious intentions to manipulate collective decisions. Moreover, it has relevance in socially oriented decision-making contexts, especially when government agencies seek to reconcile opinions of diverse stakeholder groups with highly contradictory points of view. In such scenarios, where phenomena such as preference dependence, reinforced preference, and intense disagreement manifest, this proposal could offer valuable insights.
Chapter
It was in the offices of administrative statistics that data work was carried out in the nineteenth century; these offices processed the most massive volumes of numerical information. Reconstructing the know-how of this era leads to the formation of the theory of the average and its international dissemination. Indeed, this theory governed not only the calculations but also the organization of these offices. Once consolidated in the work of the astronomer Adolphe Quetelet and promoted internationally, mainly in Europe, the theory of the average became the key to the statistical know-how of that century. It went hand in hand with presuppositions about the dispersion of observations that are today considered obsolete. We can therefore put an end to the legend of the universality of the law of large numbers.
Article
Full-text available
The United Nations’ Human Development Index remains a widely used and accepted measure of human development. Although it has been revised over the years to address various critiques, a remaining concern is the way the three dimensions are aggregated into the single index. A deterioration in one dimension can be compensated for by an improvement in another. Since compensability is inextricably linked with trade-offs and intensity of preferences, a non-compensatory (i.e., Condorcet) approach to aggregation is employed in this paper. Although non-compensatory approaches have been employed previously, this paper adds to the literature by undertaking an application of the Condorcet approach to the entire HDI. This approach, which does not use intensities of preferences, ensures that the degree of compensability connected with the aggregation model is at the minimum possible level. To achieve this, country level rankings are then compared to those for the 2020 Human Development Index which aggregates dimensions using a geometric mean. The findings demonstrated substantial changes in rank-order between the HDI and Condorcet approach. This outcome provides empirical evidence which demonstrates that the non-compensatory Condorcet approach can mitigate issues of compensation present within the geometric aggregation technique currently employed by the HDI. These findings have potential implications in aiding the identification and employment of potential policy priorities—specifically, the notion that policy should emphasise the development of a country as opposed to economic growth alone.
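A toy example (invented scores, not actual HDI data) makes the contrast concrete: a geometric mean lets a strong dimension compensate for a weak one, whereas a pairwise, majority-of-dimensions comparison, one simple non-compensatory reading of the Condorcet approach, does not. The paper's exact procedure may differ in detail.

```python
# Compensatory (geometric mean) vs non-compensatory (pairwise majority) ranking
# on invented scores for three hypothetical countries and three dimensions.
countries = {"A": (0.90, 0.40, 0.95), "B": (0.70, 0.70, 0.70), "C": (0.60, 0.80, 0.65)}

def geometric_mean(scores):
    prod = 1.0
    for s in scores:
        prod *= s
    return prod ** (1 / len(scores))

def majority_wins(x, y):
    """True if x beats y on a strict majority of dimensions."""
    wins = sum(1 for sx, sy in zip(countries[x], countries[y]) if sx > sy)
    return wins > len(countries[x]) / 2

gm_rank = sorted(countries, key=lambda c: geometric_mean(countries[c]), reverse=True)
pairwise_wins = {c: sum(majority_wins(c, d) for d in countries if d != c) for c in countries}
condorcet_rank = sorted(countries, key=lambda c: pairwise_wins[c], reverse=True)

print("geometric-mean ranking:  ", gm_rank)          # ['B', 'A', 'C']
print("non-compensatory ranking:", condorcet_rank)   # ['A', 'B', 'C']
```

Country A's weak education score is offset by strong health and income scores under the geometric mean, but not under the pairwise majority comparison, which is exactly the rank reshuffling the abstract reports.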
Article
Mediation analysis investigates the covariation of variables in a population of interest. In contrast, the resolution level of psychological theory, at its core, aims to reach all the way to the behaviors, mental processes, and relationships of individual persons. It would be a logical error to presume that the population-level pattern of behavior revealed by a mediation analysis directly describes all, or even many, individual members of the population. Instead, to reconcile collective covariation with theoretical claims about individual behavior, one needs to look beyond abstract aggregate trends. Taking data quality as a given and a mediation model’s estimated parameters as accurate population-level depictions, what can one say about the number of people properly described by the linkages in that mediation analysis? How many individuals are exceptions to that pattern or pathway? How can we bridge the gap between psychological theory and analytic method? We provide a simple framework for understanding how many people actually align with the pattern of relationships revealed by a population-level mediation. Additionally, for those individuals who are exceptions to that pattern, we tabulate how many people mismatch which features of the mediation pattern. Consistent with the person-oriented research paradigm, understanding the distribution of alignment and mismatches goes beyond the realm of traditional variable-level mediation analysis. Yet, such a tabulation is key to designing potential interventions. It provides the basis for predicting how many people stand to either benefit from, or be disadvantaged by, which type of intervention.
Chapter
Just as the market can successfully solve the resource allocation problem using the idea of information reduction, we consider how democracy can have the same information-reduction function as the market in solving problems in the political realm. From an information point of view, we revisit several theories and practices in the study of social choice; the axiomatization of the Walras rule using local independence, the Condorcet jury theorem, majority voting with single-peaked preferences, reducing the number of agendas in Arrow’s social welfare functions, and the practice of implementing school choice, an attempt at market design. This clarifies the importance of reducing information in the democratic debate.
Article
Full-text available
Descriptions of types of intelligence or cognition that conceptualize and categorize behavioral capabilities of workers and cooperative groups of eusocial insects have proliferated. Individual workers are described as having cognition, or less frequently, intelligence, and emergent colony-level behavior is typically described as collective intelligence, swarm intelligence, and distributed intelligence (or cognition). These concepts and terms have historical roots in psychology, education, economics, politics, computer science, artificial intelligence, and robotics, and have varied connotations and denotations that often are inconsistent with their initial context of use. Although integration and hybridization among disciplines can be productive, imprecise and potentially misleading applications may limit the ability to accurately describe or conceptualize social insect behavioral phenomena, generate testable hypotheses, and communicate accurately and broadly within the scientific community and with the media and public. Here, we aim to clarify the origins, meanings, and relevance of terms associated with social insect intelligence and cognition. An historical, semantic, and mechanistic analysis suggests that terms may lack relevant conceptual significance and should be carefully evaluated before applying them free-hand to attempt to inform our understanding of social insect cognition at multiple levels. We provide rationale and recommendations for retaining or discontinuing the use of terms.
Article
Full-text available
The outcome of collective decision-making often relies on the procedure through which the perspectives of its members are aggregated. Popular aggregation methods, such as the majority rule, often fail to produce the optimal result, especially in high-complexity tasks. Methods that rely on meta-cognitive information, such as confidence-based methods and the Surprisingly Popular answer, have succeeded in various tasks. However, there are still scenarios that result in choosing the incorrect answer. We aim to exploit meta-cognitive information and learn from it, to enhance the group’s ability to produce a correct answer. Specifically, we propose two different feature-representation approaches: Response-Centered feature Representation (RCR), which focuses on the characteristics of the individual response, and Answer-Centered feature Representation (ACR), which focuses on the characteristics of each of the potential answers. Using these two feature-representation approaches, we train machine-learning models to predict the correctness of a response and an answer. The trained models are used in our two proposed aggregation approaches: (1) The Response-Prediction (RP) approach aggregates the results of the group’s votes by exploiting the RCR feature-engineering approach; (2) The Answer-Prediction (AP) approach aggregates the results of the group’s votes by exploiting the ACR feature-engineering approach. To evaluate our methodology, we collected 2514 responses for different tasks. The results show a significant increase in the success rate compared to standard rule-based aggregation methods.
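Of the meta-cognitive baselines mentioned here, the Surprisingly Popular rule is easy to state: each respondent gives an answer plus a prediction of how popular each answer will be, and the answer whose actual support most exceeds its predicted support wins. The sketch below uses invented numbers in the style of the familiar "Is Philadelphia the capital of Pennsylvania?" example; it illustrates that baseline only, not the paper's learned aggregation models.

```python
# Surprisingly Popular rule: pick the answer whose actual share of votes most
# exceeds the respondents' average predicted share for that answer.
def surprisingly_popular(answers, predicted_shares):
    options = sorted(set(answers))
    actual = {o: answers.count(o) / len(answers) for o in options}
    predicted = {o: sum(p[o] for p in predicted_shares) / len(predicted_shares) for o in options}
    return max(options, key=lambda o: actual[o] - predicted[o])

answers = ["yes"] * 6 + ["no"] * 4
predicted_shares = [{"yes": 0.8, "no": 0.2}] * 6 + [{"yes": 0.7, "no": 0.3}] * 4
print(surprisingly_popular(answers, predicted_shares))   # "no" is surprisingly popular
```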
Chapter
We study deviations by a group of agents in the three main types of matching markets: the house allocation, the marriage, and the roommates models. For a given instance, we call a matching k-stable if no other matching exists that is more beneficial to at least k out of the n agents. The concept generalizes the recently studied majority stability (Thakur, 2021). We prove that whereas the verification of k-stability for a given matching is polynomial-time solvable in all three models, the complexity of deciding whether a k-stable matching exists depends on k/n and is characteristic to each model. Keywords: Majority stability, stable matching, popular matching, complexity
Book
Full-text available
Good management practices can be defined, in the judicial context, as "[…] organizational innovations in courts capable of boosting the performance of these institutions and, consequently, meeting demands for justice and efficiency." Thus, innovation in management "[…] can drive the good governance of these organizations, resulting in improvements in the judicial service provided to society." Accordingly, identifying, mapping, and analyzing good management practices in different courts, and also in other justice systems, is important in order to disseminate them and, at the same time, create new paradigms of governance. These innovations raise the standard of what is understood as good management and efficiency, inducing courts to pursue continuous improvement. In this sense, this book aims to identify, analyze, and reflect on good management practices in the Brazilian and Portuguese justice systems, from the perspective of those who work daily to improve the administration of justice and judicial service. From this, impacts are expected both for judicial institutions and for the field of knowledge related to judicial management. The book therefore seeks to analyze good practices in the Portuguese and Brazilian justice systems so that they can be known, reflected upon and, with due contextualization and adaptation, adopted by the various courts whenever they can improve judicial performance, above all in terms of quality, speed, transparency, accountability and optimization, in favor of the effectiveness of justice, with a focus on the protection of rights and on the human being.
Article
Full-text available
This paper proposes and characterizes a method to solve multicriteria evaluation problems when individual judgements are categorical and may fail to satisfy both transitivity and completeness. The evaluation function consists of a weighted sum of the average number of times that each alternative precedes some other, in all pairwise comparisons. It provides, therefore, a quantitative assessment which is well-grounded, immediate to compute, and easy to understand.
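One plausible way to write down an evaluation function of the kind described (the notation is my assumption, not the paper's): with m alternatives, individuals j = 1, …, n carrying weights w_j, and x ≻_j y meaning that individual j places x above y in a pairwise comparison,

```latex
E(x) \;=\; \sum_{j=1}^{n} w_j \cdot \frac{1}{m-1} \sum_{y \neq x} \mathbf{1}\!\left[\, x \succ_j y \,\right]
```

The indicator simply contributes nothing when j does not compare x and y, which is presumably how incomplete or intransitive individual judgements are accommodated.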
Article
Full-text available
One of the most widespread multi-criteria decision-making methods is the Analytic Hierarchy Process (AHP). AHP successfully combines the pairwise comparisons method and the hierarchical approach. It allows the decision-maker to set priorities for all ranked alternatives. But what if, for some of them, their ranking value is already known (e.g., it can be determined by other means)? The Heuristic Rating Estimation (HRE) method, proposed in 2014, sought to answer this question. However, the considerations were limited to a model with only a few criteria. This work analyzes how HRE can be used as part of the AHP hierarchical framework. The theoretical considerations are accompanied by illustrative examples showing HRE as a multiple-criteria decision-making method.
Chapter
Group decision-making (GDM), or collaborative decision-making, pursues the "wisdom of the crowd" by having more than one decision-maker (DM) decide. From family outings to presidential elections, GDM is all around. To help DMs make wiser choices, many different GDM methods have been or will be developed, from social choice functions to GDM methods based on behaviors. A review of GDM methods and applications can help DMs quickly choose a suitable one from the numerous methods and applications, and can promote the development of GDM theory. This chapter reviews GDM and discusses future studies on GDM. Keywords: Group decision-making, Wisdom of the crowd, Information fusion, Consensus improving, Behavior theories
Article
Full-text available
Emergent behavior in repeated collective decisions of minimally intelligent agents—who at each step in time invoke majority rule to choose between a status quo and a random challenge—can manifest through the long-term stationary probability distributions of a Markov chain. We use this known technique to compare two kinds of voting agendas: a zero-intelligence agenda that chooses the challenger uniformly at random and a minimally intelligent agenda that chooses the challenger from the union of the status quo and the set of winning challengers. We use Google Colab’s GPU-accelerated computing environment to compute stationary distributions for some simple examples from spatial-voting and budget-allocation scenarios. We find that the voting model using the zero-intelligence agenda converges more slowly, but in some cases to better outcomes.
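The Markov-chain technique described here can be reproduced in miniature. The sketch below builds the transition matrix for a zero-intelligence agenda, where the challenger is drawn uniformly at random and replaces the status quo only if a majority prefers it, on an invented three-voter Condorcet cycle, and reads off the stationary distribution by power iteration; the uniform-challenger convention (a self-challenge is a no-op) is an assumption made for illustration.

```python
# Zero-intelligence agenda as a Markov chain: states are status-quo alternatives,
# the challenger is uniform at random and wins only with majority support.
voters = [["a", "b", "c"], ["b", "c", "a"], ["c", "a", "b"]]   # a Condorcet cycle
alts = ["a", "b", "c"]

def majority_prefers(x, y):
    return sum(v.index(x) < v.index(y) for v in voters) > len(voters) / 2

n = len(alts)
P = [[0.0] * n for _ in range(n)]     # P[i][j] = prob. of moving from alts[i] to alts[j]
for i, sq in enumerate(alts):
    for j, ch in enumerate(alts):
        if j != i:
            P[i][j] = (1 / n) if majority_prefers(ch, sq) else 0.0
    P[i][i] = 1.0 - sum(P[i])

dist = [1.0 / n] * n
for _ in range(10_000):               # power iteration to the stationary distribution
    dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
print(dict(zip(alts, (round(p, 3) for p in dist))))
# the symmetric cycle yields a uniform long-run distribution over a, b, c
```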
Article
The basic idea of voting protocols is that nodes query a sample of other nodes and adjust their own opinion throughout several rounds based on the proportion of the sampled opinions. In the classic model, it is assumed that all nodes have the same weight. We study voting protocols for heterogeneous weights with respect to fairness. A voting protocol is fair if the influence on the eventual outcome of a given participant is linear in its weight. Previous work used sampling with replacement to construct a fair voting scheme. However, it was shown that using greedy sampling, i.e., sampling with replacement until a given number of distinct elements is chosen, turns out to be more robust and performant. In this paper, we study fairness of voting protocols with greedy sampling and propose a voting scheme that is asymptotically fair for a broad class of weight distributions. We complement our theoretical findings with numerical results and present several open questions and conjectures.
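The greedy-sampling primitive the paper builds on is compact enough to sketch: sample nodes with replacement, proportionally to weight, until a target number of distinct nodes has been collected. The weights and target size below are illustrative.

```python
# Greedy sampling: weighted sampling with replacement until k distinct nodes are seen.
import random

def greedy_sample(weights, k):
    nodes = list(weights)
    seen = set()
    while len(seen) < k:
        seen.add(random.choices(nodes, weights=[weights[v] for v in nodes], k=1)[0])
    return seen

random.seed(0)
weights = {"n1": 5.0, "n2": 2.0, "n3": 1.0, "n4": 1.0, "n5": 1.0}
print(greedy_sample(weights, k=3))
```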
Chapter
A candidate is said to be socially acceptable if the number of voters who rank her among the most preferred half of the candidates is at least as large as the number of voters who rank her among the least preferred half (Mahajne & Volji in Soc Choice Welfare 51:223–233, 2018). For every voting profile, there always exists at least one socially acceptable candidate. This candidate may not be elected by some well-known voting rules. In some cases, the voting rules may even lead to the election of a socially unacceptable candidate, that is a candidate such that the number of voters who rank her among the most preferred half of the candidates is strictly less than the number of voters who rank her among the least preferred half. In this paper, our contribution is twofold. First, since the existence of socially unacceptable candidates is not always guaranteed, we determine the probabilities that such candidates exist given the number of the running candidates and the size of the electorate. Second, we evaluate how often the Plurality rule, the Negative Plurality rule, the Borda rule and their two-round versions can elect a socially unacceptable candidate. We perform our simulations under both the Impartial Culture and the Impartial Anonymous Culture, two assumptions which are widely used when studying the likelihood of voting events. Our results show that as the number of candidates increases, it becomes almost assured to have at least one socially unacceptable candidate; in some cases, the probability that half of the candidates in the running are socially unacceptable approaches or even exceeds 50%. It also turns out that the extent to which a socially unacceptable candidate is selected depends strongly on the voting rule, the underlying distribution of voters’ preferences, the number of voters and the number of competing candidates.
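The definition of social acceptability can be checked directly on a profile, as in the short sketch below (an invented four-candidate profile; an even number of candidates keeps the "half" split unambiguous).

```python
# A candidate is socially acceptable if at least as many voters place her in the
# top half of their ranking as place her in the bottom half.
def socially_acceptable(candidate, profile):
    half = len(profile[0]) // 2
    top = sum(1 for ranking in profile if ranking.index(candidate) < half)
    bottom = len(profile) - top
    return top >= bottom

profile = [["a", "b", "c", "d"],
           ["b", "a", "d", "c"],
           ["c", "b", "a", "d"]]
for cand in "abcd":
    print(cand, socially_acceptable(cand, profile))
# a and b are socially acceptable here; c and d are not
```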
Chapter
We discuss some methods aiming to reconcile Borda’s and Condorcet’s winning intuitions in the theory of voting. We begin with a brief summary of the advantages and disadvantages of binary and positional voting rules. We then review in some detail Black’s, Nanson’s and Dodgson’s rules as well as the relatively recently introduced methods based on supercovering relation over the candidate set. These are evaluated in terms of some well-known choice–theoretic criteria.
Article
Full-text available
We develop a conceptual framework for studying collective adaptation in complex socio-cognitive systems, driven by dynamic interactions of social integration strategies, social environments and problem structures. Going beyond searching for 'intelligent' collectives, we integrate research from different disciplines and outline modelling approaches that can be used to begin answering questions such as why collectives sometimes fail to reach seemingly obvious solutions, how they change their strategies and network structures in response to different problems and how we can anticipate and perhaps change future harmful societal trajectories. We discuss the importance of considering path dependence, lack of optimization and collective myopia to understand the sometimes counterintuitive outcomes of collective adaptation. We call for a transdisciplinary, quantitative and societally useful social science that can help us to understand our rapidly changing and ever more complex societies, avoid collective disasters and reach the full potential of our ability to organize in adaptive collectives.
Chapter
This chapter explores how previous theory has treated the issue of legislative chaos. After describing Arrow’s (1951, Social Choice and Individual Values, Wiley) findings that chaos is likely in almost all majority voting situations, I overview three approaches academics have proposed for “solving” legislative chaos. One approach, called preference-induced equilibriums (PIE), proposed constraining legislator preferences to a single dimension, thus creating an equilibrium at the preferences of the median legislator. Academics defended this assumption of unidimensional preferences because it seemed acceptable among legislators who have strong, unidimensionalizing, ideologies (Converse (1964) The nature of belief systems in mass publics. In: Ideology and discontent; Noel (2012) Political ideologies and political parties in America). Another approach, called structure-induced equilibriums (SIE), argued that even if legislators have multidimensional preferences, universal domain can be sacrificed through the use of legislative institutions in order to create stability (Shepsle and Weingast (1981) Pub Choice 37(3):503–519). Finally, the third approach to “solve” legislative chaos is to sacrifice the nondictatorship assumption and simply establish a dictatorship. This is not often proposed by political scientists, but it may be possible that a strong executive—one that falls short of a dictator—can stabilize legislative behavior (Cox and Morgenstern (2001) Comp Pol 33(2):171–189). In the rest of Part 1, I explore whether PIEs, SIEs, or dictatorships create stability in Paraguay.
Chapter
This chapter reviews the history of how political parties were formed in Paraguay in 1887, after the Triple Alliance War of 1870 in which Paraguay fought against Brazil, Argentina, and Uruguay. While this chapter provides many qualitative details about how parties developed, its main conclusion is that the formation of parties in Paraguay most closely resembles what the UCLA School proposes: interest groups pursuing their own benefits. This can be seen especially in the cycling coalitions which preceded the formation of parties and which eventually stabilized into long-term political parties. However, I also conclude that the UCLA School model of party formation is somewhat incomplete because it underestimates the extent to which chaos persists if parties are composed solely of interest groups pursuing their own benefits. Therefore, while the UCLA School model comes closest to accounting for party formation in Paraguay, it still requires some modifications to determine how parties maintain themselves despite the chaos and instability that intra-party factionalism promotes.
Chapter
This chapter briefly presents some of the mathematical and geospatial tools (for geospatial analysis, see, e.g., DeSmith et al., Geospatial Analysis: A Comprehensive Guide to Principles, Techniques and Software Tools, 6th edn, 2018) that are important in the context of multicriteria location decisions (see Malczewski, GIS and Multicriteria Decision Analysis, Wiley, New York, 1999). The chapter first discusses interpolation and curve-fitting techniques. These methods are important when measurements of some attribute have been taken at certain sites (so-called observation points), whereas the values of these attributes are needed at other points. For instance, seismic tests have been taken at some discrete points that provide a profile of different subsoil strata. Information regarding the thickness of, say, coal seams is needed in areas in which the Federal Government offers land leases for the exploitation of coal. Given that the observation points are reasonably close and there are no major geological faults in between, interpolation of known data can provide important clues regarding the feasibility of exploiting the natural resource.
Chapter
Two volumes, published roughly 50 years apart and both dealing with the probabilistic analysis of collective decision-making, are reviewed with the aim of tracing the developments in the field that stands somewhat outside the mainstream social choice and voting theory. It turns out that the core topics have remained the same, but with the passage of time the issues addressed have become more nuanced and the analysis techniques more advanced and variegated. Some topics dealt with in the earlier volume have been left behind and replaced by others in the later one. Originated largely in the U.S., the probabilistic tradition has now gained a firm foothold in several European research centers with many important topics analyzed by cross-Atlantic teams. At the same time, new approaches stemming from computer science, geometry, and other parts of mathematics have opened new vistas to the analysis of voting procedures.
Article
Full-text available
Ranking aggregation, studied in the field of social choice theory, focuses on combining information to determine a winning ranking among some alternatives when the preferences of the voters are expressed by ordering the possible alternatives from most to least preferred. One of the most famous ranking aggregation methods can be traced back to 1959, when Kemeny introduced a measure of distance between a ranking and the opinion of the voters gathered in a profile of rankings. Using this, he proposed to elect as the winning ranking the one that minimizes the distance to the profile. The number of possible rankings is factorial in the number of alternatives, which handicaps the runtime of the algorithms developed to find the winning ranking and prevents their use in real problems where the number of alternatives is large. In this work we introduce the first algorithm for the Kemeny problem designed to be executed on a Graphical Processing Unit. Thread identifiers are encoded so as to be associated with rankings by means of the factorial number system, a radix numeral system that is then used to uniquely pair a ranking with a thread using Lehmer's code. Results guarantee constant execution time up to 14 alternatives.
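The indexing idea described in this abstract, pairing a thread identifier with a unique ranking through the factorial number system and Lehmer's code, can be sketched in a few lines. The decoder below is a generic implementation of that pairing, not the paper's GPU kernel.

```python
# Decode an integer thread index into a unique ranking via the factorial number
# system (digits) and the Lehmer code (digit -> position in the remaining items).
import math

def index_to_ranking(t, alternatives):
    """Decode integer t (0 <= t < m!) into the t-th permutation of the alternatives."""
    items = list(alternatives)
    digits = []
    for radix in range(len(items), 0, -1):
        fact = math.factorial(radix - 1)
        digits.append(t // fact)       # factorial-number-system digit
        t %= fact
    return [items.pop(d) for d in digits]   # Lehmer code -> permutation

print(index_to_ranking(0, "abcd"))    # ['a', 'b', 'c', 'd']
print(index_to_ranking(23, "abcd"))   # ['d', 'c', 'b', 'a']
```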
Chapter
In this chapter, I carry out a synthesis that answers the following question: to what extent can these three controversies be explained by the analysis made by Foucault? After having identified the explanatory limits of the concept of episteme, I propose to replace the concept of episteme with the concept of order.
Article
Full-text available
The majority judgment (MJ) voting method works well in theory and in practice. Not only does MJ avoid the classical Condorcet and Arrow paradoxes, but it also overcomes the domination paradox, from which paired comparisons by majority rule, approval voting, and all Condorcet consistent methods suffer. This article also shows why MJ best reduces the impact of strategic manipulation and minimizes ties to the extreme. The article illustrates the resistance of MJ to manipulations in a real example, discusses other salient properties of MJ, and summarizes several recent applications that show MJ to be, despite its newness, the right basis of electoral reform.
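For readers new to MJ, the core of the method is short: voters grade every candidate on a common scale and the candidate with the highest median ("majority") grade wins. The sketch below uses invented grades and ignores MJ's iterated tie-breaking rule, which the full method requires.

```python
# Majority judgment core: each candidate's majority grade is the (lower) median
# of the grades voters assign on a common scale; the highest majority grade wins.
import statistics

GRADES = ["Reject", "Poor", "Acceptable", "Good", "Very Good", "Excellent"]

ballots = {                           # hypothetical grades from five voters
    "A": ["Good", "Very Good", "Acceptable", "Good", "Excellent"],
    "B": ["Excellent", "Poor", "Very Good", "Acceptable", "Reject"],
}

def majority_grade(grades):
    return statistics.median_low(sorted(GRADES.index(g) for g in grades))

for cand, grades in ballots.items():
    print(cand, "->", GRADES[majority_grade(grades)])
# A -> Good, B -> Acceptable : A wins on the higher majority grade
```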
Chapter
Representation belongs to the history of republics, yet is missing in contemporary theories of republicanism. According to the prevailing narrative, representation is not part of the democratic tradition, and its emergence in modern politics coincided with the neutralization of the people in the decision-making process. This story has veiled the existence of a lesser known history, of a democratic republicanism that sought to contain the absolute power of the crowd without resorting to a liberal-elitist form of representative government. This chapter goes back to that history. It analyzes the contributions of Thomas Paine and the Marquis de Condorcet to the merging of democracy and republicanism. They wanted to make representative government democratic by overcoming the polarization between representation and participation, and instead making them related forms of political action along the continuum of decision-making and opinion formation. Keywords: Democratic republicanism, Representation, Democratic policymaking, Paine, Condorcet
Article
Full-text available
We consider the multicriteria ranking problem, and specifically a ranking procedure based on reference points recently proposed in the literature, named Ranking with Multiple reference Points (RMP). Implementing RMP in a real-world decision problem requires eliciting the model's preference parameters. This can be done indirectly by inferring the parameters from stated preferences. Learning an RMP model from stated preferences proves, however, to be computationally costly and can hardly be put into practice using currently available algorithms. In this paper, we propose a Boolean satisfiability formulation for inferring an RMP model from a set of pairwise comparisons which is much faster than the existing algorithms.
Book
Full-text available
Models of Society and Complex Systems introduces readers to a variety of different mathematical tools used for modelling human behaviour and interactions, and the complex social dynamics that drive institutions, conflict, and coordination. What laws govern human affairs? How can we make sense of the complexity of societies and how do individual actions, characteristics, and beliefs interact? Social systems follow regularities which allow us to answer these questions using different mathematical approaches. This book emphasises both theory and application. It systematically introduces mathematical approaches, such as evolutionary and spatial game theory, social network analysis, agent-based modelling, and chaos theory. It provides readers with the necessary theoretical background of each toolset as well as the underlying intuition, while each chapter includes exercises and applications to real-world phenomena. By looking behind the surface of various social occurrences, the reader uncovers the reasons why social systems exhibit both cultural universals and at the same time a diversity of practices and norms to a degree that even surpasses biological variety, or why some riots turn into revolutions while others do not even make it into the news. This book is written for any scholar in the social sciences interested in studying and understanding human behaviour, social dynamics, and the complex systems of society. It does not expect readers to have a particular background apart from some elementary knowledge and affinity for mathematics.
Chapter
Full-text available
Artificial Swarm Intelligence (ASI) is a powerful method for amplifying the collective intelligence of decentralized human teams and quickly improving their decision-making accuracy. Previous studies have shown that ASI tools, such as the Swarm® software platform, can significantly amplify the collaborative accuracy of decentralized groups across a wide range of tasks from forecasting and prioritization to estimation and evaluation. In this paper, we introduce a new ASI method for amplifying group intelligence called the “slider-swarm” and show that networked human groups using this method were 11% more accurate in generating collaborative forecasts as compared to traditional polling-based Wisdom of Crowds (WoC) aggregation methods (p < 0.001). Finally, we show that groups using slider-swarm on three real-world forecasting tasks, including forecasting the winners of the 2022 Academy Awards, produce collective forecasts that are 11% more accurate than a WoC aggregation. These results suggest slider-swarms amplify group forecasting accuracy across a range of real-world forecasting applications.
Article
Full-text available
Four approaches to the processes and limitations of markets, and of non-markets, are examined: Smith, Arrow, Becker, and Roth. Smith focuses attention on the moral sentiment of sympathy, Arrow emphasizes the logical asymmetries that arise between individual and collective choice, Becker seeks to extend the price system through incentives, and Roth highlights the relevance of psychology and of human behavior.
Article
In the last few years, breakthroughs in computational and experimental techniques have produced several key discoveries in the science of networks and human collective intelligence. This review presents the latest scientific findings from two key fields of research: collective problem-solving and the wisdom of the crowd. I demonstrate the core theoretical tensions separating these research traditions and show how recent findings offer a new synthesis for understanding how network dynamics alter collective intelligence, both positively and negatively. I conclude by highlighting current theoretical problems at the forefront of research on networked collective intelligence, as well as vital public policy challenges that require new research efforts.
Chapter
Full-text available
We introduce spherical fuzzy neutrosophic cubic graphs and single-valued neutrosophic spherical cubic graphs in a bipolar setting and discuss some of their properties, such as Cartesian product, composition, m-join, n-join, m-union, and n-union. We also present a numerical example of the defined model which depicts its advantages. Finally, we define a score function and a minimum spanning tree algorithm for an undirected bipolar single-valued neutrosophic spherical cubic graph, with a numerical example.
Book
Full-text available
Suppose that you prefer A to B, B to C, and C to A. Your preferences violate Expected Utility Theory by being cyclic. Money-pump arguments offer a way to show that such violations are irrational. Suppose that you start with A. Then you should be willing to trade A for C and then C for B. But then, once you have B, you are offered a trade back to A for a small cost. Since you prefer A to B, you pay the small sum to trade from B to A. But now you have been turned into a money pump. You are back to the alternative you started with but with less money. This Element shows how each of the axioms of Expected Utility Theory can be defended by money-pump arguments of this kind. This title is also available as Open Access on Cambridge Core.
Article
Abstract: In this article, we study the strong stability of a voting rule, as defined by Dutta et al. [2001], by means of the experimental method. In this sense, a voting rule is said to be strongly stable if the winner of the election remains unchanged after an attempted manipulation by strategic candidacy of a potential candidate. In the setting of an election with three competing candidates and a small electorate, we experimentally evaluate the frequencies of strong stability of parliamentary voting rules and of the plurality rule.
Article
How large should a monetary policy committee be? Which voting rule should a monetary policy committee adopt? This paper builds on Condorcet's jury theorem to analyse the relationships between committee size and voting rules in a model where policy discussions are subject to a time constraint. It suggests that in large committees majority voting is likely to enhance policy outcomes. Under unanimity (consensus) it is preferable to limit the size of the committee. Finally, supermajority voting rules are social contrivances that contribute to policy performance in a more uncertain environment, when initial policy proposals are less likely to be correct, or when payoffs are asymmetric.