## No full-text available

To read the full-text of this research, you can request a copy directly from the authors.

Let λ = (λ1,…,λn), λ1 ⩽ … ⩽ λn, and x = (x1,…,xn). A function ƒ(λ, x) is said to be arrangement increasing (AI) if (i) ƒ is permutation invariant in both arguments λ and x, and (ii) ƒ(λ, x) ⩾ ƒ(λ, x′) whenever x and x′ differ in two coordinates only, say i and j, (xi − xj)(i − j) ⩾ 0, and xi′ = xj, xj′ = xi. This paper reviews the concept and many of the basic properties of AI functions, including their preservation properties under mixtures, compositions and integral transformations. The AI class of functions includes as special cases other well-known classes of functions such as Schur functions, totally positive functions of order two and positive set functions. We present a number of applications of AI functions to problems in probability, statistics, reliability theory and mathematics. A multivariate extension of the arrangement ordering is also reviewed.
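As an illustration (ours, not the paper's), the transposition condition (ii) can be checked by brute force for a small candidate function; the helper below is a hypothetical sketch, assuming λ is given in increasing order and the components of x are tested over all permutations.

```python
import itertools

def is_arrangement_increasing(f, lam, values):
    """Brute-force check of the AI transposition condition:
    with lam sorted, swapping a pair x_i <= x_j (i < j) that is
    already arranged like lam must not increase f."""
    lam = sorted(lam)
    n = len(lam)
    for x in itertools.permutations(values):
        for i in range(n):
            for j in range(i + 1, n):
                if x[i] <= x[j]:  # pair arranged similarly to lam
                    x_prime = list(x)
                    x_prime[i], x_prime[j] = x_prime[j], x_prime[i]
                    if f(lam, list(x)) < f(lam, x_prime):
                        return False
    return True

# The inner product sum(lam_i * x_i) is a classical AI function.
dot = lambda lam, x: sum(a * b for a, b in zip(lam, x))
print(is_arrangement_increasing(dot, [1, 2, 3], [3, 1, 2]))   # True
print(is_arrangement_increasing(lambda l, x: -dot(l, x),
                                [1, 2, 3], [3, 1, 2]))        # False
```

The inner product passes because swapping a similarly arranged pair changes the sum by (λj − λi)(xj − xi) ⩾ 0; its negation fails the same check.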


... Arrangement monotone functions have been receiving more and more attention from researchers in risk management and operations research. Interested readers may refer to [33] for a comprehensive treatment of arrangement monotone functions. ...

It has become a common understanding that financial risk can spread rapidly from one institution to another, and the stressful status of one institution may finally result in a systemic crisis. One popular method to assess and quantify the risk of contagion is employing the co-risk measures and risk contribution measures. It is interesting and important to understand how the underlying dependence structure and magnitude of random risks jointly affect systemic risk measures. In this paper, we mainly focus on the conditional value-at-risk, conditional expected shortfall, the delta conditional value-at-risk, and the delta conditional expected shortfall. Existing studies mainly focus on the situation with two random risks, and this paper makes some contributions by considering the scenario with possibly more than two random risks. By employing the tools of stochastic order, positive dependence concepts and arrangement monotonicity, several results concerning the usual stochastic order, increasing convex order, dispersive order and excess wealth order are presented. Concisely speaking, it is found that for a large enough stress level, a larger random risk tends to lead to a more severe systemic risk. We also perform some Monte Carlo experiments to illustrate the theoretical findings.

This paper analyses the total tardiness minimization in a flowshop with multiple processors at each stage. While there is considerable research on minimizing the makespan, very little work is reported on minimizing the total tardiness for this problem. This research focuses on heuristic methods that consider this environment as a series of parallel machine problems. New dispatching rules are introduced. One of the proposed rules is able to deal with jobs that will arrive afterwards and not only the jobs available at the decision time. Dispatching rules are also associated with classical (forward and backward) and new list scheduling algorithms. A special scheduling algorithm able to deal with idle times is proposed. Computational experiments on a set of 4,320 literature instances show that the developed heuristics are competitive and outperform their classical counterparts.

We consider a problem of scheduling n independent jobs on m unrelated parallel machines with the objective of minimizing total tardiness. In unrelated parallel-machine scheduling problems, the processing times of a job may differ across machines. We develop several dominance properties and lower bounds for the problem, and suggest a branch and bound algorithm using them. Results of computational experiments show that the suggested algorithm gives optimal solutions for problems with up to five machines and 20 jobs in a reasonable amount of CPU time.

This paper focuses on a production-scheduling problem in a printed circuit board (PCB) manufacturing system that produces multiple product types with different due dates and different manufacturing processes. In the PCB manufacturing system, there are a number of serial workstations, and there are multiple parallel machines at each workstation. Also, setup operations are required at certain workstations or machines, and some product types have re-entrant flows. We develop new dispatching rules for scheduling at each workstation, considering the special features of PCB manufacturing. With the dispatching rules, we determine not only the start time of each lot at a machine but also the batch size of each product at each machine. Simulation experiments are carried out to test the performance of the production-scheduling method and dispatching rules devised in this study. Results show that the production-scheduling method suggested in this study performs better than methods with well-known dispatching rules and heuristic algorithms for lot sizing in terms of the total tardiness of orders.

The resource-constrained project scheduling problem (RCPSP) can be given as follows. A single project consists of a set J = {0, 1,…, n, n+1} of activities which have to be processed. Fictitious activities 0 and n+1 correspond to the "project start" and the "project end", respectively. The activities are interrelated by two kinds of constraints. First, precedence constraints force activity j not to be started before all its immediate predecessor activities, comprised in the set P_j, have been finished. Second, performing the activities requires resources with limited capacities. We have K resource types, given by the set K = {1,…, K}. While being processed, activity j requires r_{j,k} units of resource type k ∈ K during every period of its non-preemptable duration p_j. Resource type k has a limited capacity of R_k at any point in time. The parameters p_j, r_{j,k}, and R_k are assumed to be deterministic; for the project start and end activities we have p_j = 0 and r_{j,k} = 0 for all k ∈ K. The objective of the RCPSP is to find precedence- and resource-feasible completion times for all activities such that the makespan of the project is minimized. Figure 7:1 gives an example of a project comprising n = 6 activities which have to be scheduled subject to K = 1 renewable resource type with a capacity of 4 units. A feasible schedule with an optimal makespan of 13 periods is represented in Figure 7:2.
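A minimal sketch (our illustration, not from the chapter) of how the two RCPSP constraint families can be verified for a given vector of start times; the single-resource case is assumed, and all names are hypothetical.

```python
def is_feasible(starts, durations, demands, preds, capacity):
    """Check precedence and (single renewable) resource feasibility.

    starts[j], durations[j]: start time and duration p_j of activity j
    demands[j]: per-period requirement r_j of the single resource
    preds[j]: set of immediate predecessors P_j of activity j
    capacity: resource availability R at any point in time
    """
    n = len(starts)
    # Precedence: j may not start before every predecessor has finished.
    for j in range(n):
        for i in preds[j]:
            if starts[j] < starts[i] + durations[i]:
                return False
    # Resource: total demand in every period must not exceed capacity.
    horizon = max(starts[j] + durations[j] for j in range(n))
    for t in range(horizon):
        used = sum(demands[j] for j in range(n)
                   if starts[j] <= t < starts[j] + durations[j])
        if used > capacity:
            return False
    return True

# Toy instance: activity 0 precedes 1 and 2; capacity 4.
print(is_feasible([0, 2, 5], [2, 3, 2], [2, 3, 2],
                  [set(), {0}, {0}], 4))  # True
print(is_feasible([0, 2, 2], [2, 3, 2], [2, 3, 2],
                  [set(), {0}, {0}], 4))  # False: period 2 needs 5 > 4
```

A branch-and-bound or list-scheduling solver would search over start-time vectors that pass exactly these two checks while minimizing the makespan.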

This paper focuses on a scheduling problem in a manufacturing system composed of multiple parallel assembly lines. There are multiple orders to be processed in this system, and each order is specified by the product type, the number of products to be processed, and the due date. Each product is composed of two types of subassemblies: one unit of an external subassembly and one or more units of an internal subassembly. In the system, the parallel assembly lines are not identical, and certain lines are designated for certain product types. We present heuristic algorithms for the scheduling problem with the objective of minimizing the total tardiness of orders. To evaluate the performance of the suggested algorithms, computational experiments are performed on a number of problem instances, and the results show that the suggested algorithms work better than the method used in a real manufacturing system.

Keywords: Scheduling; Heuristics; Uniform parallel lines; Job splitting; Tardiness

This paper focuses on the scheduling problem of minimizing makespan for a given set of jobs in a two-stage hybrid flowshop subject to a product-mix ratio constraint. There are identical parallel machines at the first stage of the hybrid flowshop, while there is a single batch-processing machine at the second stage. Ready times of the jobs (at the first stage) may be different, and a given product-mix ratio of job types should be kept in each batch at the second stage. We present three types of heuristic algorithms: forward scheduling algorithms, backward scheduling algorithms, and iterative algorithms. To evaluate performance of the suggested algorithms, a series of computational experiments are performed on randomly generated test problems and results are reported.

Thesis (Ph. D. in Industrial Engineering and Operations Research)--University of California, Berkeley, Dec. 1992. Includes bibliographical references (leaves 87-95).

The invisible hand theorem says nothing about the attributes of the optimal allocation vector. In this paper, we identify a convex cone of functions such that order on vectors of exogenous heterogeneity parameters induces component-wise order on allocation vectors for firms in an efficient market. By use of functional analysis, we then replace the vectors of heterogeneities with asymmetries in function attributes such that the induced component-wise order on efficient allocations still pertains. We do so through integration over a kernel in which the requisite asymmetries are embedded. Likelihood ratio order on the measures of integration is both necessary and sufficient to ensure component-wise order on efficient factor allocations across firms. Upon specializing to supermodular functions, familiar stochastic dominance orders on normalized measures of integration provide necessary and sufficient conditions for this component-wise order on efficient allocation. The analysis engaged in throughout the paper is ordinal in the sense that all conclusions drawn are robust to monotone transformations of the arguments in production.

Producers are subject to similar production risks, and so their outputs are likely correlated. Using the entire data-set rather than summary statistics, we study an ordinal definition of systematic risk. For risk-neutral producers in perfect competition, we trace the effects of an increase in systematic risk through to impacts on welfare measures and production decisions. Expected welfare falls under more systematic risk, but either of expected producer surplus or expected consumer surplus may rise. Our definition of systematic risk also has relevance for the incentive to incur R&D expenditures, the benefits of diversity and the gains from risk-sharing. Copyright The London School of Economics and Political Science 2003.

We introduce in this paper a new measure of component importance, called redundancy importance, in coherent systems. It is a measure of importance for the situation in which an active redundancy is to be made in a coherent system. This measure of component importance is compared with both the (Birnbaum) reliability importance and the structural importance of a component in a coherent system. Various models of component redundancy are studied, with particular reference to k-out-of-n systems, parallel-series systems, and series-parallel systems.

A real-valued function $g$ of two vector arguments $\mathbf{x}$ and $\mathbf{y} \in R^n$ is said to be arrangement increasing if it increases in value as the arrangement of components in $\mathbf{x}$ becomes increasingly similar to the arrangement of components in $\mathbf{y}$. Hollander, Proschan and Sethuraman (1977) show that the convolution of arrangement increasing functions is arrangement increasing. This result is used to generate some interesting probability inequalities of a geometric nature for exchangeable random vectors. Other geometric inequalities for families of arrangement increasing multivariate densities are also given, and some moment inequalities are obtained.

Let $\mathbf{\lambda} = (\lambda_1, \cdots, \lambda_n), \lambda_1 \leqq \cdots \leqq \lambda_n$, and $\mathbf{x} = (x_1, \cdots, x_n)$. A function $g(\mathbf{\lambda, x})$ is said to be decreasing in transposition (DT) if (i) $g$ is unchanged when the same permutation is applied to $\mathbf{\lambda}$ and to $\mathbf{x}$, and (ii) $g(\mathbf{\lambda, x}) \geqq g(\mathbf{\lambda, x}')$ whenever $\mathbf{x}'$ and $\mathbf{x}$ differ in two coordinates only, say $i$ and $j, (x_i - x_j) \cdot (i - j) \geqq 0$, and $x_i' = x_j, x_j' = x_i$. The DT class of functions includes as special cases other well-known classes of functions such as Schur functions, totally positive functions of order two, and positive set functions, all of which are useful in many areas including stochastic comparisons. Many well-known multivariate densities have the DT property. This paper develops many of the basic properties of DT functions, derives their preservation properties under mixtures, compositions, integral transformations, etc. A number of applications are then made to problems involving rank statistics.

This is Part II of a two-part paper. The main purpose of this two-part paper is (a) to develop new concepts and techniques in the theory of majorization and Schur functions, and (b) to obtain fruitful applications in probability and statistics. In Part II we introduce a stochastic version of majorization, develop its properties, and obtain multivariate applications of both the preservation theorem of Part I and the new notion of stochastic majorization. This leads to a definition of Schur families of multivariate distributions. Generalizations are obtained of earlier results of Olkin and of Wong and Yue; in addition, new results are obtained for the multinomial, multivariate negative binomial, multivariate hypergeometric, Dirichlet, negative multivariate hypergeometric, and multivariate logarithmic series distributions.

This is Part I of a two-part paper; the purpose of this two-part paper is (a) to develop new concepts and techniques in the theory of majorization and Schur functions, and (b) to obtain fruitful applications in probability and statistics. The main theorem of Part I states that if $f(x_1, \cdots, x_n)$ is Schur-concave, and if $\phi(\lambda, x)$ is totally positive of order 2 and satisfies the semigroup property for $\lambda_1 > 0, \lambda_2 > 0: \phi(\lambda_1 + \lambda_2, y) = \int \phi(\lambda_1, x)\phi(\lambda_2, y - x) d\mu(x)$, where $\mu$ is Lebesgue measure on $\lbrack 0, \infty)$ or counting measure on $\{0, 1, 2, \cdots\}$, then $h(\lambda_1, \cdots, \lambda_n) \equiv \int \cdots \int \Pi^n_1 \phi(\lambda_i, x_i)f(x_1, \cdots, x_n) d\mu(x_1) \cdots d\mu(x_n)$ is also Schur-concave. This theorem is then applied to obtain renewal theory results, moment inequalities, and shock model properties.

This paper is devoted to a model which has applications in the study of reliability, extinction of species, inventory depletion, and urn sampling, among others. A series-parallel system consists of (k+1) subsystems C0,C1,⋯ ,Ck, also called cut sets. Cut set Ci contains ni components arranged in parallel, i=0,1,⋯ ,k. No two cut sets have a component in common. It is assumed that after t components have failed, each of the remaining components is equally likely to fail, t=0,1,⋯ . It is also assumed that the components fail one at a time. In Part 1 of this paper we study the probability that the system fails because a specified cut set C0, say, fails first. We obtain several alternative expressions and recurrence relations for this probability. Some of these formulae are useful for computation, while others permit us to derive qualitative features such as monotonicity, Schur-concavity, asymptotic limits, etc. These results are extended to cover the case in which some cut set first drops to level a, where a is a specified positive integer. In Part 2, we compute the probability distribution, frequency function, and failure rate of the lifelengths of series-parallel systems. We also obtain corresponding recurrence relations and finite and asymptotic properties.

A computationally convenient way to find average tau is pointed out, and the possibility is raised of simple chi-square tests of significance for average tau as well as Kendall's coefficient of agreement μ. A method for obtaining the rank order of "best fit" to a group of rankings in terms of average tau is also discussed. Finally, an "analysis of agreement" is proposed based upon a partition of average tau into agreement within and between experimental groups.
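As a sketch of the quantity being averaged (our illustration, not Hays's computational shortcut), average tau over a group of rankings is just the mean of Kendall's tau over all pairs of rankings; the function names are hypothetical.

```python
from itertools import combinations

def kendall_tau(r1, r2):
    """Kendall's tau between two rankings without ties:
    (concordant pairs - discordant pairs) / (n choose 2)."""
    n = len(r1)
    score = sum(1 if (r1[i] - r1[j]) * (r2[i] - r2[j]) > 0 else -1
                for i, j in combinations(range(n), 2))
    return score / (n * (n - 1) / 2)

def average_tau(rankings):
    """Average tau over all pairs of rankings in the group."""
    pairs = list(combinations(rankings, 2))
    return sum(kendall_tau(a, b) for a, b in pairs) / len(pairs)

# Two judges agree perfectly, one reverses the order: taus are 1, -1, -1.
print(average_tau([[1, 2, 3], [1, 2, 3], [3, 2, 1]]))  # -1/3
```

The "best fit" rank order mentioned in the abstract would be the ranking maximizing its average tau against all members of the group.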

It is shown that if $a^{(t)} = (a_1^{(t)}, a_2^{(t)}, \ldots, a_n^{(t)})$, $t = 1, \ldots, m$, are nonnegative $n$-tuples, then the maxima of $\sum_{i=1}^n a_i^{(1)} a_i^{(2)} \cdots a_i^{(m)}$, of $\prod_{i=1}^n \min_t(a_i^{(t)})$ and of $\sum_{i=1}^n \min_t(a_i^{(t)})$, and the minima of $\prod_{i=1}^n (a_i^{(1)} + a_i^{(2)} + \cdots + a_i^{(m)})$, of $\prod_{i=1}^n \max_t(a_i^{(t)})$ and of $\sum_{i=1}^n \max_t(a_i^{(t)})$ are attained when the $n$-tuples $a^{(1)}, a^{(2)}, \ldots, a^{(m)}$ are similarly ordered. Necessary and sufficient conditions for equality are obtained in each case. An application to bounds for permanents of $(0,1)$-matrices is given.
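For the $m = 2$ case, the first claim can be checked by brute force: over all rearrangements, the sum of products is largest when the two tuples are similarly ordered. This is an illustrative sketch of ours, not the paper's method.

```python
from itertools import permutations

def max_sum_of_products(a, b):
    """Brute-force maximum of sum_i a[i] * b[sigma(i)]
    over all rearrangements sigma of b."""
    return max(sum(x * y for x, y in zip(a, p)) for p in permutations(b))

a, b = [1, 4, 2], [5, 0, 3]
# The maximum is attained at the similarly ordered arrangement,
# i.e. pairing both tuples in increasing order: 1*0 + 2*3 + 4*5 = 26.
similarly_ordered = sum(x * y for x, y in zip(sorted(a), sorted(b)))
print(max_sum_of_products(a, b) == similarly_ordered)  # True
```

This is the classical rearrangement inequality of Hardy, Littlewood and Pólya, which the abstract of the reliability paper below also builds on.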

Let there be k exponential populations. The problem of subset selection for these k populations is formulated in order to accommodate randomly censored observations. A selection procedure is proposed, based on the maximum likelihood estimators of θi, i = 1, 2,…, k. It is shown that the procedure is independent of the mean life ξ of the censoring distribution and that the specification of the P*-value depends on k. This dependency is one of the distinct features inherent to the selection procedure under random censoring. Some desirable properties, and the condition under which Gupta's (1963) multiplicative constants can be used in the proposed selection procedure, are discussed. In the context of reliability, one can employ the same procedure to select the best population with respect to the hazard rate or system reliability.

We are concerned with the following reliability problem: A system has k different types of components. Associated with each component is a numerical value. Let {aj}, (j = 1,…, k), denote the set of numerical values of the k components. Let R(a1,…, ak) denote the probability that the system will perform satisfactorily (i.e., R(a1,…, ak) is the reliability of the system) and assume R(a1,…, ak) has the properties of a joint cumulative distribution function.
Now suppose aj1 ≤ … ≤ ajn are n components of type j (j = 1,…, k). Then n systems can be assembled from these components. Let N denote the number of systems that perform satisfactorily. N is a random variable whose distribution will depend on the way the n systems are assembled. Of all different ways in which the n systems can be assembled, the paper shows that EN is maximized if these n systems have reliability R(a1i,…, aki) (i = 1,…, n). The method used here is an extension of a well known result of Hardy, Littlewood, and Polya on sums of products. Furthermore, under certain conditions, the same assembly that maximizes EN minimizes the variance of N.
Finally, for a similar problem in reliability, it is shown that for a series system a construction can be found that not only maximizes the expected number of functioning modules, but also possesses the stronger property of maximizing the probability that the number of functioning modules is at least r, for each 0 ≤ r ≤ n.

A real valued function of s vector arguments in Rn is said to be arrangement increasing if the function increases in value as the components of the vector arguments become more similarly arranged. Various examples of arrangement increasing functions are given including many joint multivariate densities, measures of concordance between judges and the permanent of a matrix with nonnegative components. Preservation properties of the class of arrangement increasing functions are examined, and applications are given including useful probabilistic inequalities for linear combinations of exchangeable random vectors.

This paper deals with some multiple decision (ranking and selection) problems. Some relevant distribution theory is given and the associated confidence bounds are derived for the differences (ratios) between the parameters. The selection procedures select a non-empty, small, best subset such that the probability is at least equal to a specified value P* that the best population is selected in the subset. General results are given both for the unknown location and scale parameters of the k populations. Some desirable properties of these procedures are studied and proved. Selection of a subset to contain all populations better than a standard is also discussed. Performance characteristics of some procedures for the normal means problem are studied and tables are given for the probabilities of selecting the ith ranked population and for the expected proportion and the expected average rank in the selected subset. A brief review of work by other authors in the problems of selection and ranking and in other related problems is given.

This paper is concerned with a single-sample multiple decision procedure for ranking means of normal populations with known variances. Problems which conventionally are handled by the analysis of variance (Model I) which tests the hypothesis that $k$ means are equal are reformulated as multiple decision procedures involving rankings. It is shown how to design experiments so that useful statements can be made concerning these rankings on the basis of a predetermined number of independent observations taken from each population. The number of observations required is determined by the desired probability of a correct ranking when certain differences between population means are specified.

A procedure is given for selecting a subset such that the probability that all the populations better than the standard are included in the subset is equal to or greater than a predetermined number $P^{\ast}$. Section 3 deals with the problem of the location parameter for the normal distribution with known and unknown variance. Section 4 deals with the scale parameter problem for the normal distribution with known and unknown mean as well as the chi-square distribution. Section 5 deals with binomial distributions where the parameter of interest is the probability of failure on a single trial. In each of the above cases the case of known standard and unknown standard are treated separately. Tables are available for some problems; in other problems transformations are used such that the given tables are again appropriate.

There are given a populations $\Pi_1, \cdots, \Pi_a$, of which we wish to select a subset. The quality of the $i$th population is characterized by a real-valued parameter $\theta_i$, and a population is said to be \begin{align*}\tag{1} positive \quad \text{(or} \quad good) \quad \text{if} \quad \theta_i &\geqq \theta_0 + \Delta, \\ \tag{2} negative \quad {(or} \quad bad) \quad \text{if} \quad \theta_i &\leqq \theta_0,\end{align*} where $\Delta$ is a given positive constant and $\theta_0$ is either a given number or a parameter that may be estimated. A number of optimum properties of selection procedures are defined (Section 3) and it is shown that for some of these, the optimum procedure selects $\Pi_i$ when \begin{equation*}\tag{3}T_i \leqq C_i,\end{equation*} where $T_i$ is a suitable statistic, the distribution of which depends only on $\theta_i$, and where $C$ is a suitable constant. (Sections 4 and 6.) Applications are given to distributions with monotone likelihood ratio in the case that $\theta_0$ is known (Sections 5 and 6), and to normal distributions when instead observations on $\theta_0$ are included in the experiment (Sections 10 and 11).

Problems involving dependent pairs of variables $(X, Y)$ have been studied most intensively in the case of bivariate normal distributions and of $2 \times 2$ tables. This is due primarily to the importance of these cases but perhaps partly also to the fact that they exhibit only a particularly simple form of dependence. (See Examples 9(i) and 10 in Section 7.) Studies involving the general case center mainly around two problems: (i) tests of independence; (ii) definition and estimation of measures of association. In most treatments of these problems, there occurs implicitly a concept which is of importance also in other contexts (for example, the evaluation of the performance of certain multiple decision procedures), the concept of positive (or negative) dependence or association. Tests of independence, for example those based on rank correlation, Kendall's $t$-statistic, or normal scores, are usually not omnibus tests (for a discussion of such tests see [4], [15] and [17]), but designed to detect rather specific types of alternatives, namely those for which large values of $Y$ tend to be associated with large values of $X$ and small values of $Y$ with small values of $X$ (positive dependence) or the opposite case of negative dependence in which large values of one variable tend to be associated with small values of the other. Similarly, measures of association are typically designed to measure the degree of this kind of association. The purpose of the present paper is to give three successively stronger definitions of positive dependence, to investigate their consequences, explore the strength of each definition through a number of examples, and to give some statistical applications.

Let $\mathbf{X} = (X_1,\cdots, X_n)$ have a density $g(\mathbf{x}, \lambda)$ which is decreasing in transposition, where $\lambda = (\lambda_1,\cdots, \lambda_n)$. Suppose one wishes to select a subset of $\{1,\cdots, n\}$ containing the subscripts associated with the largest values of the $\lambda_i$'s. Let $S$ be a permutation invariant selection rule which is more likely to select a subset associated with the largest values of the $X_i$'s. Let $A = \{i(1),\cdots, i(k)\} \subset \{1,\cdots, n\}$ and $B = \{j(1),\cdots, j(k)\} \subset \{1,\cdots, n\}$ be such that $\lambda_{i(s)} \geq \lambda_{j(s)}, s = 1,\cdots, k$. The following five inequalities are proved for nonrandomized selection rules. $(|D|$ denotes the number of elements in $D$. $D^c$ denotes the complement of $D$.) $P_\lambda(|S \cap A| \geq (>)m) \geq P_\lambda(|S \cap B| \geq (>)m)$ for every $m \in R^1, P_\lambda(|S \cap A^c| \leq (<)m) \geq P_\lambda(|S \cap B^c| \leq (<)m)$ for every $m \in R^1$, and $P_\lambda(S = A) \geq P_\lambda(S = B)$. Inequalities for randomized selection rules are also obtained. These generalized monotonicity properties are derived using a unified approach. The results apply to selection rules proposed under several formulations of the selection problem.

Typescript. Thesis (Ph. D.)--Florida State University, 1989. Includes bibliographical references.

We develop a unified theory for obtaining stochastic rearrangement inequalities and show how this theory may be applied in statistical contexts such as ranking problems, hypothesis testing, contamination models, and optimal assembly of systems.

Proschan, M. A. (1989). Contributions to the theory of arrangement increasing functions. Ph.D. Dissertation, Florida State Univ.

Proschan, F. and Sethuraman, J. (1977). Schur functions in statistics. I: The preservation theorem. Ann. Statist. 5, 256-262.

Hardy, G. H., Littlewood, J. E. and Pólya, G. (1952). Inequalities, 2nd edn. Cambridge Univ. Press, Cambridge.

Hays, W. L. (1960). A note on average tau as a measure of concordance. J. Am. Statist. Assoc. 55, 331-341.