
The Bradley–Terry model (BT) is commonly used for evaluation of choice preferences by paired comparison data in various areas of applied psychology, advertising, and marketing research. The estimation of BT parameters of preference is usually achieved in an iterative procedure based on the maximum likelihood approach. In this paper an easier way of finding these parameters via an eigenproblem is considered. This approach corresponds to solving a Chapman–Kolmogorov system of equations to estimate the steady-state probabilities of the compared items. Both techniques produce very similar results, but the eigenvector solution is simpler for applications and suggests an interpretation of BT preferences as the choice probabilities. The suggested approach can facilitate the paired comparison estimations and be utilized in various practical aims of managerial decision making.
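As a minimal numerical sketch of the eigenvector approach (with hypothetical win counts; the preference-share and transition matrices below are one common construction, not necessarily the paper's exact one):

```python
import numpy as np

# Hypothetical paired-comparison data: wins[i, j] = times item i beat item j.
wins = np.array([[0, 7, 8],
                 [3, 0, 6],
                 [2, 4, 0]], dtype=float)

n = wins.shape[0]
# Preference shares p_ij = n_ij / (n_ij + n_ji); the identity keeps 0/0 off the diagonal.
shares = wins / (wins + wins.T + np.eye(n))

# Column-stochastic transition matrix of a Markov chain over the items.
T = shares / shares.sum(axis=0, keepdims=True)

# Steady-state probabilities: the eigenvector of T for its largest eigenvalue (= 1),
# i.e., the solution of the Chapman-Kolmogorov stationarity condition T @ pi = pi.
vals, vecs = np.linalg.eig(T)
v = np.real(vecs[:, np.argmax(np.real(vals))])
pi = np.abs(v) / np.abs(v).sum()
print(pi)
```

The resulting vector sums to one, which supports the interpretation of Bradley–Terry preferences as choice probabilities.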

... Thurstone scaling is a method of priority evaluation among items by the frequency of their empirical pairwise preferences (Thurstone, 1927, 1959; Thurstone & Jones, 1957). This technique is widely used in fields of applied psychology, particularly in marketing and advertising research (Edwards, 1957; Torgerson, 1958; Bock & Jones, 1968; Green & Tull, 1978; Conklin & Lipovetsky, 1999, 2004a, 2004b; Lipovetsky, 2007a, 2007b). Thurstone scaling transforms ranked or paired comparison data into a scale that is used for displaying the results of a ranking procedure. ...

... The Thurstone model defines a scale of differences; standardizing to the zero–one range corresponds to the interval scale. Together with the TMD model, the Bradley–Terry–Luce (BTL) model is also considered for pair comparison (Bradley & Terry, 1952; Luce, 1959; Luce & Suppes, 1965; Lipovetsky, 2008); it corresponds to applying the logistic as opposed to the normal probability function. ...

Thurstone scaling is widely used in marketing and advertising research where various methods of applied psychology are utilized. This article considers several analytical tools useful for positioning a set of items on a Thurstone scale via regression modeling and Markov stochastic processing in the form of Chapman–Kolmogorov equations. These approaches produce interval and ratio scales of preferences and enrich the possibilities of paired comparison estimation applied for solving practical problems of prioritization and probability-of-choice modeling.
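A minimal sketch of positioning items on a Thurstone scale from paired-comparison shares (the data are hypothetical; Case V row-averaging of normal quantiles is assumed):

```python
from statistics import NormalDist

# Hypothetical preference shares: P[i][j] = fraction preferring item i over item j.
P = [[0.5, 0.7, 0.8],
     [0.3, 0.5, 0.6],
     [0.2, 0.4, 0.5]]

ppf = NormalDist().inv_cdf
Z = [[ppf(p) for p in row] for row in P]      # quantile (z-score) matrix
scale = [sum(row) / len(row) for row in Z]    # row means position each item
base = min(scale)
scale = [s - base for s in scale]             # anchor the interval scale at zero
print(scale)
```

Replacing the normal quantile with a logit would give the Bradley–Terry–Luce variant of the same construction.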

... Discrete choice modeling (DCM) is nowadays one of the main tools for estimating preferences among multiple alternatives, widely used in applied economics and psychology, transportation and management, social and marketing research. Estimation of the utility parameters and choice probabilities is usually performed via the multinomial-logit (MNL) modeling originated by McFadden (1973, 1981) and further developed in numerous works (McFadden and Richter, 1990; Louviere, Hensher, & Swait, 2000; Train, 2003; Orme, 2010). One of the most popular techniques based on DCM is the Best-Worst Scaling (BWS), also called Maximum Difference (MaxDiff), which is a modern marketing research approach to evaluating the probability of choice among many compared items. This method was proposed by Louviere (1991, 1993) and developed in various works (Marley and Louviere, 2005; Bacon, Lenk, Seryakova, & Veccia, 2007, 2008). BWS can be seen as an extension of scaling by the Thurstone and Bradley–Terry models from paired comparisons (Thurstone, 1927; Bradley and Terry, 1952; David, 1988; Lipovetsky and Conklin, 2004; Lipovetsky, 2008) to simultaneous comparisons among three and more items in a balanced plan where each item is represented approximately the same number of times across the sample, and the respondents indicate which items are the best and worst, with following estimation of choice probabilities in MNL by various available software or analytically (Louviere, Flynn, & Marley, 2015; Marley, Flynn, & Louviere, 2008; Marley, Islam, & Hawkins, 2016; Lipovetsky and Conklin, 2014a, 2014b; Lipovetsky, 2018). ...

Discrete choice modeling is one of the main tools for estimating utilities and preference probabilities among multiple alternatives in economics, psychology, social sciences, and marketing research. One of the popular DCM tools is Best-Worst Scaling, also known as Maximum Difference. Data for such modeling come from respondents presented with several items, each indicating the best and the worst alternatives. Estimation of utilities is usually performed in multinomial-logit modeling, which produces utilities and choice probabilities. This article describes how to obtain probability estimates adjusted for the possible absence of items in actual purchasing. We apply Markov chain modeling in the form of the Chapman–Kolmogorov equations, whose steady-state solution for the stochastic matrix can be obtained analytically. An adjustment to choice probability with network effects is also considered. A numerical example with marketing research data is used.
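A toy illustration of adjusting choice probabilities to an absent item (hypothetical probabilities; the proportional re-allocation below is a simple stand-in, not the paper's full Markov-chain construction):

```python
import numpy as np

# Hypothetical MNL choice probabilities for four items on a shelf.
p = np.array([0.40, 0.30, 0.20, 0.10])

# If item 2 is out of stock, a simple adjustment (the proportional
# re-allocation implicit in MNL's IIA property) redistributes its
# share among the available items.
available = np.array([True, True, False, True])
p_adj = np.where(available, p, 0.0)
p_adj = p_adj / p_adj.sum()
print(p_adj)
```

The Markov-chain treatment in the article refines exactly this step, allowing the absent item's share to flow to specific substitutes rather than proportionally to all.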

... Maximum Difference scaling, or MaxDiff, also known as best-worst scaling, is a contemporary method for the prioritization of items proposed by Jordan Louviere [1991, 1993], and developed and applied in numerous works [for a few examples, see Cohen and Orme (2004); Marley and Louviere (2005); Orme (2003, 2009)]. MaxDiff is based on scaling methods known in Thurstone, Bradley–Terry and other paired comparison models [for instance, see Thurstone (1927, 1959); Bradley and Terry (1952); Green and Tull (1978); David (1988); Conklin and Lipovetsky (1999a); Lipovetsky (2007, 2008)], and also on discrete choice modeling (DCM) which permits the simultaneous presentation of three, four or more items to the respondents and estimation of the utility parameters and choice probabilities using multinomial-logit (MNL) modeling [McFadden (1973); Ben-Akiva and Lerman (1985); McFadden and Richter (1990); Conklin and Lipovetsky (1999b); Louviere et al. (2000); Train (2003); Lipovetsky (2011, 2014, 2015)]. ...

... These formulae are very convenient for numerical and analytical consideration of the choice probabilities; for instance, they can be easily used in bootstrap estimations of the item choice. MaxDiff data can also be used for priority evaluation in other methods: in Thurstone scaling, SVD, Bradley–Terry estimation in ML, and Markov stochastic modeling in Chapman–Kolmogorov equations for finding steady-state probabilities [Conklin and Lipovetsky (1999a); Lipovetsky and Conklin (2003, 2004); Lipovetsky (2007, 2008)]. ...

Maximum Difference (MaxDiff) is a discrete choice modeling approach widely used in marketing research for finding utilities and preference probabilities among multiple alternatives. It can be seen as an extension of the paired comparison in the Thurstone and Bradley–Terry techniques to the simultaneous presentation of three, four, or more items to respondents. A respondent identifies the best and the worst items, so the remaining ones are deemed intermediate in preference. Estimation of individual utilities is usually performed in Hierarchical Bayesian (HB) multinomial-logit (MNL) modeling. MNL can be reduced to a logit model on data composed of two specially constructed design matrices of prevalence from the best and the worst sides. The composed data can be of large size, which makes logistic modeling less precise and very demanding of computer time and memory. This paper describes how the results for utilities and choice probabilities can be obtained from the raw data, and how empirical Bayes techniques can be applied instead of HB. This approach enriches MaxDiff and is useful for estimation on large data sets. The results of the analytical approach are compared with HB-MNL and several other techniques.
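A sketch of obtaining choice probabilities directly from raw best/worst tallies (hypothetical counts; the square-root-of-odds score below is an illustrative count-based estimator, not necessarily the paper's exact formula):

```python
import numpy as np

# Hypothetical MaxDiff tallies over a sample: how often each of four items
# was marked best and how often it was marked worst.
best  = np.array([30.0, 18.0,  9.0,  3.0])
worst = np.array([ 2.0,  8.0, 16.0, 34.0])

# A simple count-based score -- the square root of the best/worst odds --
# normalized to choice probabilities. It illustrates the idea of analytical
# estimation from raw counts without fitting an HB-MNL model.
score = np.sqrt(best / worst)
prob = score / score.sum()
print(prob)
```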

... The current work concentrates specifically on the so-called Supercritical Pitchfork Bifurcation (SPB) model. The main reason is that this model can be obtained from a wide set of the sigmoid functions extensively used in applied regression modeling; for several examples, see the models of logistic and algebraic share and choice probability functions described in [41][42][43][44][45]. Another reason is that in a diagram this model looks like a regular linear dependence which at some point diverges into two additional streams. ...

... The characteristic function f(y) can be described as utility, usefulness, worth, importance, etc. Under the assumptions of algebraic, linear, or quadratic exponential behaviour of f(y), the model (4) reduces to a power share model, the Bradley–Terry model of paired comparison, or the logit model, respectively [41–45, 71–73]. Convenient for analytical investigation, the simple quadratic share model (4) can be expressed explicitly as: ...

... Comparison of priorities in Thurstone scaling with the steady-state probabilities in Markov stochastic modeling was performed in (Lipovetsky, 2013a). Another approach used for evaluation of choice preferences by paired comparisons is the Bradley-Terry choice probability modeling via the maximum likelihood, and a similar estimation can be obtained via the eigenproblem solution related to the Chapman-Kolmogorov system of equations for the steady-state probabilities described in (Lipovetsky, 2008a). ...

The work considers statistical techniques developed for solving various special marketing research problems. These approaches include item comparisons in Thurstone and Bradley–Terry scaling, total unduplicated reach and frequency and Shapley value, sample balancing and price sensitivity analysis, customer satisfaction and identification of key drivers, best-worst and max-diff priority estimation, item cannibalization and synergy, and various other methods. The described techniques have been developed and employed in multiple marketing research projects, and they are helpful for successfully solving various practical problems.

... Various other approaches to choice modeling and decision making solved with logistic and MNL techniques include, for example, the van Westendorp price sensitivity meter and the Bradley–Terry choice model (Lipovetsky, 2006b, 2008b). In the large area of multiple-criteria decision making, for example, in the Analytic Hierarchy Process (AHP) originated by T. Saaty (1980, 2005), new extensions can be achieved with logit and MNL modeling as well (Lipovetsky, 2021c, 2021d). ...

The work presents various techniques of the logistic and multinomial-logit modeling with their modifications. These methods are useful for regression modeling with a binary or categorical outcome, structuring in regression and clustering, singular value decomposition and principal component analysis with positive loadings, and numerous other applications. Particularly, these models are employed in the discrete choice modeling and the best-worst scaling known in applied psychology and socio-economics studies.

... The characteristic function f(y) can be described as utility, usefulness, worth, importance, and reduced to a paired comparison logistic model [33][34]. Convenient for analytical investigation, the simple quadratic share model (4) can be expressed explicitly as: ...

Chaotic systems have been widely studied for description and explanation of various observed phenomena. The problem of statistical modeling for messy data can be attempted using the so-called Supercritical Pitchfork Bifurcation (SPB) approach. This work considers the possibility of applying the SPB technique to regression modeling of implicit functions. Theoretical and practical advantages of SPB regression are discussed with an example from marketing research data on advertising in the car industry. The results are very promising and can help in modeling, analysis, and interpretation, leading to an understanding of real-world data.

... Discrete choice modeling by multiple predictors is widely used in regression analysis. A dichotomous response is often treated in the logistic approach [Long (1997); McCullagh and Nelder (1997); Ripley (1997); Lloyd (1999); Lipovetsky (2006, 2008a, 2010a)]. Categorical variables with several outcomes have been developed in conditional and multinomial logit (MNL) modeling and used in various applications [McFadden (1973, 1981); Hausman and McFadden (1984); McFadden and Richter (1990); Ben-Akiva and Lerman (1985); Arminger et al. (1995); Wedel and Kamakura (1999); Louviere et al. (2000); Hastie et al. (2001); Train (2003); Berry et al. (2004); Bishop (2006); Lipovetsky (2008b, 2009a); Greene and Hensher (2010)]. ...

For a categorical variable with several outcomes, its dependence on the predictors is usually considered in the conditional or multinomial logit models. This work considers elasticity features of the binary and categorical logits and introduces coefficients individual to each observation. The paper shows that by a special rearrangement of data the more complicated conditional and multinomial models can be reduced to binary logistic regression. This suggests using any widely available logit software to facilitate the construction of complex conditional and multinomial regressions. In addition, for the binary logit, it is possible to obtain meaningful coefficients of regression by transforming the data to the linear link function, which opens the possibility of obtaining meaningful parameters for complicated models with categorical dependent variables.
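A small sketch of the observation-level coefficients for a binary logit (the coefficients below are assumed for illustration, not estimated; the marginal effect b1*p*(1-p) varies across observations):

```python
import math

# Illustrative binary logit with assumed (not estimated) coefficients:
# P(y=1|x) = 1 / (1 + exp(-(b0 + b1*x))).
b0, b1 = -1.0, 0.5

def prob(x):
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))

def marginal_effect(x):
    # dP/dx = b1 * p * (1 - p): the predictor's effect is individual
    # to each observation, unlike a constant linear-model slope.
    p = prob(x)
    return b1 * p * (1.0 - p)

for x in (0.0, 2.0, 4.0):
    print(x, round(prob(x), 4), round(marginal_effect(x), 4))
```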

This paper considers methods of estimating choice probability using the Maximum Difference (MaxDiff) technique, also known as Best-Worst Scaling (BWS). The paper shows that on the aggregate level the choice probabilities can be obtained using an analytical closed-form solution and other approaches such as Thurstone scaling, Bradley–Terry maximum likelihood, and Markov modeling via Chapman–Kolmogorov equations for steady-state probabilities. On the individual level, to account for the exact combinations presented in each task, the Cox hazard model is employed, as well as new approaches using a least squares objective for maximum difference and maximum likelihood in order statistics. The results are useful in practical MaxDiff applications for item prioritization in marketing research.

There are many situations wherein a group of individuals (e.g., voters) must produce an ordered list of 'best' alternatives selected from a given group of alternatives (e.g., candidates). Standard approaches include ranked voting methods (RVMs) and methods of paired comparisons (MPCs). Typical 'ballots' for these approaches are distinctly different. Indeed, RVM ballots are simple rankings, with all unranked alternatives being considered inferior to all ranked alternatives. By comparison, MPC ballots are matrices whose off diagonal entries reflect the voter's opinion concerning only the row and column alternatives for that entry. Such methods generally do not require a voter to express an opinion concerning every pair of alternatives. In this paper we propose a straightforward methodology to allow voters to submit generalised ballots that can reflect the voter's opinions as precisely as those of MPC ballots, yet with the simplicity of traditional RVM ballots.

Analytic Hierarchy Process and its extensions for multiple criteria decision making.

This work considers maximum likelihood objectives for estimating the probability of each multivariate observation's assignment to one particular cluster or to one or more clusters. Combining both objectives yields a maximization of the total probability odds of belonging to one or another cluster. The gradient of the total odds objective can be reduced to the multinomial-logit probabilities leading to a convenient Newton–Raphson clustering procedure presented via an iteratively re-weighted least squares technique. Besides the total odds, several other new objectives are also considered, and numerical examples are discussed.

A new distribution, the gamma-half normal distribution, is proposed and studied. Various structural properties of the gamma-half normal distribution are derived. The shape of the distribution may be unimodal or bimodal. Results for moments, limit behavior, mean deviations and Shannon entropy are provided. To estimate the model parameters, the method of maximum likelihood estimation is proposed. Three real-life data sets are used to illustrate the applicability of the gamma-half normal distribution.

Price sensitivity analysis in the van Westendorp model is widely utilized in marketing research for concept and product pricing, but primarily as a descriptive statistical procedure. To extend the abilities of the van Westendorp model, it can be presented by linear and non-linear systems of differential equations with the analytical solution in a set of ordinal logistic regressions. The results demonstrate an excellent approximation of the van Westendorp model. This approach produces a theoretical estimation of prices, their confidence intervals, elasticity, and values for maximum response and maximum revenue. Statistical modeling significantly improves and facilitates price sensitivity analysis in application to product innovations and other marketing research problems.
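A descriptive sketch of locating one crossing point of the cumulative van Westendorp curves (hypothetical survey thresholds; the price grid and crossing rule are simplifications of the full PSM chart, not the regression extension described above):

```python
# Hypothetical van Westendorp answers: each respondent's "too cheap" and
# "too expensive" price thresholds.
too_cheap     = [2, 3, 3, 4, 5]
too_expensive = [8, 9, 10, 10, 12]

def share_at_or_below(prices, p):
    return sum(1 for v in prices if v <= p) / len(prices)

# One PSM-style crossing: the smallest grid price where the cumulative
# "too expensive" share reaches the share still calling the price too cheap
# (respondents whose too-cheap threshold lies above p).
grid = range(1, 13)
crossing = next(p for p in grid
                if share_at_or_below(too_expensive, p)
                >= 1 - share_at_or_below(too_cheap, p))
print(crossing)
```

The ordinal-logistic extension in the article replaces these empirical step curves with smooth fitted probabilities, which is what yields confidence intervals and elasticities.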

N. T. Gridgeman has noticed a misplaced entry on p. 517 of the above referenced paper. The entry with rank sums 33, 33, 39, 39 and parameter values ·34, ·34, ·16, ·16 should have the value 13·450 for the test statistics B1 and be moved to the appropriate place in the right-hand half of p. 517. This change means that values of P for 28 entries beginning with rank sums 31, 34, 38, 41 down to rank sums 33, 35, 35, 41 should be decreased by ·0037. The value of P for the misplaced entry becomes ·2294 as it is moved to its proper position.
We are grateful to Mr. Gridgeman for pointing out this error but note that it is unlikely that it has produced any serious difficulties in practical applications of the procedures.

In credit risk management, the on-line analytical process has been accepted by most credit card issuers. The major tools used in such an OLAP are statistics and neural networks. Through a designed algorithm, the OLAP generates scores for each account or for each customer, depending on the level of the processing. Generally speaking, logistic regressions and feed-forward networks are the major players in OLAP in this field and usually are used separately. This paper discusses an approach — the Dual-Model Scoring System — to combine these two major players and use them together in credit scoring. Primarily, the classification problem for two classes is considered. By the Bayesian rule, the objective function of classification can be reduced to estimating the Bayesian posterior probability. Such a probability is estimated by using the MLE approach in logistic regressions and the Two-Stage (Gibbs) learning algorithm in feed-forward networks. The motivation for the proposal comes from two considerations: (1) both logistic regression and neural networks have their advantages and disadvantages, and combining the two can enhance their predictive ability and offset their weaknesses; (2) the combination can reduce the false positive rate in the decision region. Besides discussing the architecture design of the Dual-Model Scoring System, the paper demonstrates the power of the proposal on a real data set.

Thurstone scaling is a known tool for preference estimation in marketing and advertising research, and applied psychology. Positioning of the ranked items, or stimuli, on a Thurstone scale is defined by the mean values of the quantiles related to the frequencies of each stimulus's preference over the other stimuli. We describe a nonlinear extension of Thurstone scaling by the singular value decomposition of the skew-symmetric quantile matrix that yields the main pair of complex eigenvectors corresponding to conjugated priority-antipriority vectors. We also describe a generalized singular value decomposition that yields the robust dual priority vectors. Additionally, we consider a transformation of frequencies to ordinal preferences that permits application of graph theory and Gower plotting of qualitative dual priorities. The suggested approaches enrich the possibilities of Thurstone scaling in practical applications of priority modeling and decision making.
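A minimal sketch of the singular value decomposition of a skew-symmetric quantile matrix (hypothetical entries), showing the paired singular values behind the priority/anti-priority vectors:

```python
import numpy as np

# Hypothetical skew-symmetric matrix of quantile differences (K = -K.T),
# as arises from pairwise Thurstone quantiles q_ij = -q_ji.
K = np.array([[ 0.0,  0.5,  0.8],
              [-0.5,  0.0,  0.3],
              [-0.8, -0.3,  0.0]])

U, s, Vt = np.linalg.svd(K)
# Singular values of a skew-symmetric matrix come in equal pairs (an odd
# dimension leaves one zero); the leading pair of singular vectors spans
# the dominant priority/anti-priority plane.
print(s)
```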

Analytic Hierarchy Process (AHP) elicited data of a pairwise comparison matrix, transformed to a matrix of preference shares, is considered via the Chapman–Kolmogorov system of equations for discrete states. It yields a general dynamic solution and the steady-state probabilities. The priority vector can be interpreted as the eventual probabilities of belonging to the discrete states corresponding to the compared items. The results of stochastic modeling correspond to robust estimations of priority vectors.
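A small sketch of the share transformation and steady-state priorities for a hypothetical Saaty matrix (power iteration stands in here for the general dynamic solution):

```python
import numpy as np

# Hypothetical 3x3 Saaty pairwise-comparison matrix (reciprocal: a_ji = 1/a_ij).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Shares of preference s_ij = a_ij / (a_ij + a_ji), so s_ij + s_ji = 1,
# then a column-stochastic transition matrix over the discrete states (items).
S = A / (A + A.T)
T = S / S.sum(axis=0, keepdims=True)

# Power iteration converges to the steady-state probabilities of the
# Chapman-Kolmogorov equations; they serve as the priority vector.
pi = np.full(3, 1.0 / 3.0)
for _ in range(500):
    pi = T @ pi
pi = pi / pi.sum()
print(pi)
```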

A league with an equal number of teams in each of two divisions is considered. Each team plays every other one in its division g1 times and every team in the other division g2 times. Using the numbers of wins and losses in games between each pair of teams, several methods of ranking all of the teams are discussed and compared. A nonparametric ranking method based on the numbers of iterated wins and losses of each team is shown to have suitable theoretical properties. Analysis of National League baseball data (1973–1988) suggests that this method performs well in relation to ranking based on the classic Bradley–Terry parametric procedure. The nonparametric ranking method has the advantage of ease of computation and simplicity of explanation.

When paired comparisons are made sequentially over time, as for example in chess competitions, it is natural to assume that the underlying abilities change with time. Previous approaches are based on fixed updating schemes where the increments and decrements are fixed functions of the underlying abilities. The parameters that determine the functions have to be specified a priori and are based on rational reasoning. We suggest an alternative scheme for keeping track of the underlying abilities. Our approach is based on two components: a response model that specifies the connection between the observations and the underlying abilities and a transition model that specifies the variation of abilities over time. The response model is a very general paired comparison model allowing for ties and ordered responses. The transition model incorporates random walk models and local linear trend models. Taken together, these two components form a non-Gaussian state-space model. Based on recent results, recursive posterior mode estimation algorithms are given and the relation to previous approaches is worked out. The performance of the method is illustrated by simulation results and an application to soccer data of the German Bundesliga.

The approach of Grizzle, Starmer, and Koch (1969) to analyzing categorical data using linear and loglinear transformations is applied to the analysis of univariate paired and triple comparison experiments. This method of analysis produces noniterative weighted least-squares estimates of the preference ratings corresponding to the treatments under test and allows for testing hypotheses relative to their values within the framework of the general linear hypothesis. Further, the technique produces estimates of the preference ratings which are linear on the log scale within which Bradley (1965) defined his model. Examples with and without ties or order effects for paired comparison experiments and an example of a triple comparison experiment are presented.

Brief mention is made of several models in each of two categories: (I) Each possible ranking of items is assumed to have a "utility" which depends on the expected scores of the items in paired comparisons. In particular, the "worth" of an item may be defined in terms of its expected scores in comparisons with others. (II) Each item is assumed to have an intrinsic worth; these intrinsic worths determine the expected scores. A concept, "regularity", is introduced. Under (I), general linear utilities are discussed, and a necessary and sufficient condition is given in order that a linear utility may be regular. Under (II), a "minimum assumption" model is introduced. Let e(u, v) denote the expected score of an item of worth u when compared with one of worth v. The assumption is: e(u, v) is non-decreasing in u, non-increasing in v. The problem of estimating expected scores in this model is discussed.

A parametric distribution on permutations of k objects is derived from gamma random variables. The probability of a permutation is set equal to the probability that k independent gamma random variables with common shape parameter and different scale parameters are ranked according to that permutation. This distribution is motivated by considering a competition in which k players, scoring points according to independent Poisson processes, are ranked according to the time until r points are scored. The distributions obtained in this way include the popular Luce-Plackett and Thurstone-Mosteller-Daniels ranking models. These gamma-based distributions can serve as alternatives to the null ranking model in which all permutations are equally likely. Here, the gamma models are used to estimate the probability distribution of the order of finish in a horse race when only the probability of finishing first is given for each horse. Gamma models with shape parameters larger than 1 are found to be superior to the most commonly applied model (shape parameter 1). Examples are limited to small values of k because of the complicated calculations required to compute the distribution. Approximations that are easier to calculate are required before more extensive applications can be undertaken.

For latent class analysis, a widely known statistical method for the unmixing of an observed frequency table into several unobservable ones, a flexible model is presented in order to restrain the unknown class sizes (mixing weights) and the unknown latent response probabilities. Two systems of basic equations are stated such that they simultaneously allow parameter fixations, the equality of certain parameters, as well as linear logistic constraints on each of the original parameters. The maximum likelihood equations for the parameters of this "linear logistic latent class analysis" are given, and their estimation by means of the EM algorithm is described. Further, the criteria for their local identifiability and statistical tests (Pearson and likelihood-ratio χ²) for goodness of fit are outlined. The practical applicability of linear logistic latent class analysis is demonstrated by three examples: mixed logistic regression, a mixed Bradley-Terry model for paired comparisons with ties, and a local dependence latent class model in which the departure from stochastic independence is covered by a single additional parameter per class.

Suppose that in a three-player tournament the strongest player, A1, meets the winner of the match between A2 and the weakest player, A3. It is shown that, for A1 and A2 of fixed strength, A1's chances of winning may be increased if A3 is replaced by a stronger player. The paradox is studied with special emphasis on probability models important in the method of paired comparisons.

Thurstone scaling is a widely used tool in marketing research, as well as in areas of applied psychology. The positions of the compared items, or stimuli on a Thurstone scale are estimated by averaging the quantiles corresponding to frequencies of each stimulus’s preference over the other stimuli. We consider maximum likelihood estimation for Thurstone scaling that utilizes paired comparison data. From this perspective we obtain a binary response regression with a probit or logit link. In addition to the levels on a psychological scale, the suggested approach produces standard errors, t-statistics, and other characteristics of regression quality. This approach can help in both the theoretical interpretation and the practical application of Thurstone modeling.

The Bradley-Terry model for a paired-comparison experiment with t treatments postulates a set of t 'true' treatment ratings π1, π2, · · ·, πt such that πi ≥ 0, ∑ πi = 1 and the probability of preferring treatment i to treatment j is πi/(πi + πj). Thus, according to this model, every comparison of two treatments results in a definite preference for one of the two. This is an unrealistic restriction since when there is no difference between the responses due to two treatments, any method of expressing preference for one over the other is somewhat arbitrary. This paper considers a modification of the Bradley-Terry model by introducing an additional parameter, called the threshold parameter, into the model. This permits 'ties' in the model. The problem of estimation and tests of hypotheses for the parameters of the modified model is also dealt with in the paper.

The problem is to define $P(x) := \sum_{k=1}^{\infty} \arctan \frac{x-1}{(k+x+1)\sqrt{k+1} + (k+2)\sqrt{k+x}}$. (1) (a) Find explicit, finite-expression evaluations of P(n) for all integers n ≥ 0. (b) Show that $\tau := \lim_{x \to -1^{+}} P(x)$ exists, and find an explicit evaluation of τ. (c) Are there more general closed forms for P, say at half-integers? Solution: with the abbreviations $r := \sqrt{k+1}$, $s := \sqrt{k+x}$, the argument of the arctan in (1) becomes $\frac{s^2 - r^2}{(s^2+1)r + (r^2+1)s} = \frac{s-r}{rs+1} = \frac{1/r - 1/s}{1 + (1/r)(1/s)}$, so each term equals arctan(1/r) − arctan(1/s) by the tangent subtraction formula.

Ranking data is commonly used in marketing and advertising research for priority estimation among compared items by Thurstone scaling. Rating data is also often used in TURF, or total unduplicated reach and frequency analysis, to find the best items. Both rank and rating data sets can be elicited and utilized simultaneously to obtain a combined preference estimation. This work develops several techniques of priority evaluation. It considers maximum likelihood of the order statistics for the ranking data with the probit, logit, and multinomial links for the Thurstone scale. Non-linear optimization with a least squares or maximum likelihood objective is introduced for TURF modeling. Combined estimation by both rank and rating data is suggested in singular value decomposition and Geary-Khamis equation approaches. The proposed methods produce priorities among the compared items and probabilities of their choice.
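A toy TURF computation on hypothetical reach data, finding the pair of items with maximal unduplicated reach:

```python
from itertools import combinations

# Hypothetical reach data: the set of items (out of A-D) each respondent accepts.
respondents = [{"A"}, {"A"}, {"A", "B"}, {"C"}, {"D"}, {"D"}, {"A", "D"}]

def reach(items):
    # Fraction of respondents reached by at least one item in the bundle.
    return sum(1 for r in respondents if r & set(items)) / len(respondents)

# Exhaustive TURF search for the best pair (fine for small problems; larger
# ones need the greedy or optimization-based search discussed above).
best_pair = max(combinations("ABCD", 2), key=reach)
print(best_pair, reach(best_pair))
```

Note that items strong in a frequency sense can be redundant in reach: here item A's companions matter only through the respondents A does not already cover.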

A pairwise comparison matrix of the Analytic Hierarchy Process (AHP) is considered as a contingency table that helps to identify unusual or false data elicited from a judge. Special techniques are suggested for robust estimation of priority vectors. They include transformation of a Saaty matrix to a matrix of shares of preferences and solving an eigenproblem designed for the transformed matrices. We also introduce an optimizing objective that produces robust priority estimation. Numerical results are compared using the AHP with these differing approaches. The comparison demonstrates that robust estimations yield priority vectors that are not prone to the influence of possible errors among the elements of a pairwise comparison matrix.
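One plausible reading of the share-of-preference transform is sketched below: each Saaty ratio a_ij is mapped to a share a_ij/(a_ij + a_ji), and priorities are taken from the principal eigenvector of the transformed matrix. The matrix is hypothetical, and the paper's actual robust objective may differ in detail:

```python
# Share-of-preference sketch (hypothetical Saaty matrix, stdlib only).
A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]
n = len(A)

# Transform ratios to shares: p_ij = a_ij / (a_ij + a_ji), so p_ij + p_ji = 1.
P = [[A[i][j] / (A[i][j] + A[j][i]) for j in range(n)] for i in range(n)]

# Principal eigenvector of the share matrix via power iteration.
w = [1.0 / n] * n
for _ in range(200):
    v = [sum(P[i][j] * w[j] for j in range(n)) for i in range(n)]
    s = sum(v)
    w = [x / s for x in v]          # normalize so priorities sum to 1
```

Because the shares are bounded in (0, 1), a single wildly inconsistent ratio distorts the transformed matrix far less than it distorts the raw Saaty matrix, which is the intuition behind the robustness claim.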

( This reprinted article originally appeared in Psychological Review, 1927, Vol 34, 273–286. The following is a modified version of the original abstract which appeared in PA, Vol 2:527. ) Presents a new psychological law, the law of comparative judgment, along with some of its special applications in the measurement of psychological values. This law is applicable not only to the comparison of physical stimulus intensities but also to qualitative judgments, such as those of excellence of specimens in an educational scale. The law is basic for work on Weber's and Fechner's laws, applies to the judgments of a single observer who compares a series of stimuli by the method of paired comparisons when no "equal" judgments are allowed, and is a rational equation for the method of constant stimuli.
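A minimal numeric sketch of this kind of scaling, using the classical Case V recipe (probit-transform the observed preference proportions and average them per stimulus), with hypothetical data and only the standard library:

```python
# Thurstone Case V sketch (hypothetical preference proportions).
# prop[i][j] = observed proportion of judges preferring stimulus i over j.
from statistics import NormalDist

prop = [[0.50, 0.80, 0.90],
        [0.20, 0.50, 0.70],
        [0.10, 0.30, 0.50]]

inv = NormalDist().inv_cdf          # probit transform (normal quantiles)
t = len(prop)

# Scale value of stimulus i: mean of the z-scores of its preference proportions.
scale = [sum(inv(prop[i][j]) for j in range(t)) / t for i in range(t)]

# Center at zero: Case V fixes only differences on the continuum, not an origin.
mean = sum(scale) / t
scale = [v - mean for v in scale]
```

The resulting values place the stimuli on the psychological continuum; only their differences are meaningful, which is why the origin is set by convention.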

This study is concerned with the extension of the Bradley-Terry model for paired comparisons to situations which allow an expression of no preference. A new model is developed and its performance compared with a model proposed by Rao and Kupper. The maximum likelihood estimates of the parameters are found using an iterative procedure which, under a weak assumption, converges monotonically to the solution of the likelihood equations. It is noted that for a balanced paired comparison experiment the ranking obtained from the maximum likelihood estimates agrees with that obtained from a scoring system which allots two points for a win, one for a tie and zero for a loss. The likelihood ratio test of the hypothesis of equal preferences is shown to have the same asymptotic efficiency as that for the Rao-Kupper model. Two examples are presented, one of which introduces a set of data for an unbalanced paired comparison experiment. Initial applications of the test of goodness of fit suggest that the proposed model yields a reasonable representation of actual experimentation.
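The win/tie/loss scoring system noted above is easy to illustrate; the win and tie counts below are hypothetical:

```python
# Scoring-system ranking for paired comparisons with ties (hypothetical data):
# two points per win, one per tie, zero per loss.
W = [[0, 2, 4],     # W[i][j] = wins of item i over item j
     [1, 0, 3],
     [0, 1, 0]]
T = [[0, 1, 0],     # T[i][j] = T[j][i] = ties between items i and j
     [1, 0, 2],
     [0, 2, 0]]
t = len(W)

score = [2 * sum(W[i]) + sum(T[i]) for i in range(t)]
ranking = sorted(range(t), key=lambda i: -score[i])
```

For a balanced experiment, the abstract notes that this simple ranking coincides with the ordering of the maximum likelihood estimates of the model.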

Ranking is a process of prioritization. Priorities, as measurement rather than pure guessing, can be derived from paired comparison judgments that generalize on ratios of actual measurements. Paired comparisons involve selecting the smaller of the two objects being compared as the unit and estimating how many multiples of that unit the larger object represents with respect to an attribute they share. In this paper, it is shown how priorities are derived as the principal right eigenvector of a pairwise comparison matrix, and several examples are given to illustrate how the process works.
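The eigenvector derivation can be sketched with plain power iteration on a hypothetical reciprocal comparison matrix (Saaty's consistency index is included as it follows directly from the principal eigenvalue):

```python
# Priorities as the principal right eigenvector of a (hypothetical) reciprocal
# pairwise comparison matrix, via power iteration (standard library only).
A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]
n = len(A)

w = [1.0 / n] * n
for _ in range(100):
    v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]  # A @ w
    s = sum(v)
    w = [x / s for x in v]          # normalize so priorities sum to 1

# Principal eigenvalue estimate and Saaty's consistency index (lam >= n,
# with equality iff the matrix is perfectly consistent).
lam = sum(sum(A[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
ci = (lam - n) / (n - 1)
```

Power iteration converges here because a positive matrix has a simple dominant eigenvalue with a positive eigenvector (Perron's theorem), which is exactly the priority vector sought.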

In this paper, different types of saddle pairs of vector-valued functions are investigated, and the main properties of these pairs are examined. The structure of the images of saddle-pair sets is found, and these images are constructed explicitly by means of cone extreme points. The results obtained make it possible to construct dual problems in general cases of multiple-objective problems and to investigate how to solve them using the saddle-pair approach.

Thurstone scaling is widely used for presenting the priorities among the compared items. The mean values of the quantiles corresponding to the frequencies of each stimulus’ preference over the other stimuli define the items’ locations on the psychological continuum of the Thurstone scale. This paper considers an extension of the scale levels to the aggregates of the independent covariates. In a sense, it is similar to a multiple regression extension of the mean value of the dependent variable to its conditional mean expressed by the linear aggregate of the independent variables. A maximum likelihood objective constructed by the probabilities of the order statistics applied to the ranked or paired comparison data is suggested. Probit, logit, and multinomial links are tried to obtain the Thurstonian scale exposition by the covariates, and to estimate the probabilities of the items’ choice. This approach is very convenient and can substantially enrich both theoretical interpretation and practical application of Thurstone modelling, particularly in marketing and advertising research.

Although verbal and numerical scales are commonly used in the Analytic Hierarchy Process for pairwise comparisons, new experiments with computer-based visual tools confirm that ratio preferences can be effectively and efficiently elicited using adjustable visual tools and simultaneous comparisons, as well. Such an improved approach shows promise in the design of a new class of multi-criteria decision support system both for individual and group decisions.

This paper is concerned with the problems of the development of a subjective metric for the study of values. Measurement areas discussed are, social attitudes, propaganda, moral values, experimental semantics, market research, economic research, and aesthetics. (PsycINFO Database Record (c) 2012 APA, all rights reserved)

This article introduces novel statistical models for the sequence analysis of events. The models are formulated to analyze occurrence, association, and sequencing among events as an extension of log-linear models. A set of parameters characterizes marginal odds and odds ratios of frequencies summed across sequence patterns for each combination of the occurrence/nonoccurrence of events. These parameters are used for the analysis of the occurrence and association of events. Another set of parameters characterizes conditional odds and odds ratios among sequence patterns within each combination of the occurrence/nonoccurrence of events. These parameters are used for the analysis of sequencing of events. The models permit a decomposition of the likelihood function into a marginal likelihood component that includes only parameters for occurrence and association among events and a conditional likelihood component that includes only parameters for sequencing among events. The models are then extended further for regressions with covariates. An application analyzes gender and racial/ethnic differences in patterns of drug use progression. Sequential patterns of initiations and association among initiations are analyzed for three groups of drugs: alcoholic beverages, cigarettes, and marijuana. Findings that cross-validate previous findings based on different datasets and findings that are novel are reported.
