## No full-text available

To read the full-text of this research, you can request a copy directly from the author.

A new family of pseudo-random number generators, the ACORN (additive congruential random number) generators, is proposed. The resulting numbers are distributed uniformly in the interval [0, 1). The ACORN generators are defined recursively, and the (k + 1)th order generator is easily derived from the kth order generator. Some theorems concerning the period length are presented and compared with existing results for linear congruential generators. A range of statistical tests are applied to the ACORN generators, and their performance is compared with that of the linear congruential generators and the Chebyshev generators. The tests show the ACORN generators to be statistically superior to the Chebyshev generators, while being statistically similar to the linear congruential generators. However, the ACORN generators execute faster than linear congruential generators for the same statistical faithfulness. The main advantages of the ACORN generator are speed of execution, long period length, and simplicity of coding.
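The recurrence summarised above is simple enough to sketch directly. The following is a minimal illustration of a kth-order ACORN generator in real arithmetic modulo 1, not the author's reference implementation; the seed value and the choice to initialise every working value from the seed are ours.

```python
# Sketch of a kth-order ACORN generator in real arithmetic modulo 1.
# x[0] holds the fixed seed in (0, 1); x[1..k] are the working values.
# Each step sets x[m] = (x[m-1] + x[m]) mod 1 and emits x[k].
# Initialising all entries from the seed is an illustrative choice.

def acorn_real(seed, k, n):
    """Yield n ACORN numbers of order k from a real seed in (0, 1)."""
    x = [seed] * (k + 1)          # x[0] stays constant; x[1..k] evolve
    out = []
    for _ in range(n):
        for m in range(1, k + 1):
            x[m] = (x[m - 1] + x[m]) % 1.0
        out.append(x[k])
    return out

values = acorn_real(0.30103, 10, 5)
```

The (k + 1)th order generator is obtained from the kth simply by extending the state vector by one entry, which is the recursive structure the abstract refers to.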


... The ACORN generators were first proposed in [8] in 1989. Subsequent papers over an extended period [9,5,10] have suggested that the ACORN approach compares favourably with some other commonly used approaches, in particular the linear congruential generators. ...

... The kth-order Additive Congruential Random Number (ACORN) generator is defined in [8,9] from an integer modulus M, an integer seed Y_0^0 satisfying 0 < Y_0^0 < M and an arbitrary set of k integer initial values Y_0^m, m = 1, ..., k, each satisfying 0 ≤ Y_0^m < M, through the equations ...

... The original implementation proposed in [8] used real arithmetic modulo 1, calculating the X_n^m directly. Owing to rounding errors in real arithmetic, the sequences were not reproducible on different machines or with different compilers, although they still exhibited similar statistical behaviour. Consequently, although period lengths were large, they could not be predicted or determined with any certainty, and it was not possible to make a clear and unambiguous statement of how best to initialise the generator. ...

Additive Congruential Random Number (ACORN) generators represent an approach to generating uniformly distributed pseudo-random numbers that is straightforward to implement efficiently for arbitrarily large order and modulus; if it is implemented using integer arithmetic, it becomes possible to generate identical sequences on any machine. This paper briefly reviews existing results concerning ACORN generators and relevant theory concerning sequences that are well distributed mod 1 in k dimensions. It then demonstrates some new theoretical results for ACORN generators implemented in integer arithmetic with modulus M = 2^μ, showing that they are a family of generators that converge (in a sense that is defined in the paper) to being well distributed mod 1 in k dimensions as μ = log_2 M tends to infinity. By increasing k, it is possible to increase without limit the number of dimensions in which the resulting sequences approximate to well distributed. The paper concludes by applying the standard TestU01 test suite to ACORN generators for selected values of the modulus (between 2^60 and 2^150), the order (between 4 and 30) and various odd seed values. On the basis of these and earlier results, it is recommended that an order of at least 9 be used together with an odd seed and a modulus equal to 2^(30p) for a small integer value of p. While a choice of p = 2 should be adequate for most typical applications, increasing p to 3 or 4 gives a sequence that will consistently pass all the tests in the TestU01 test suite, giving additional confidence in more demanding applications. The results demonstrate that the ACORN generators are a reliable source of uniformly distributed pseudo-random numbers, and that in practice (as suggested by the theoretical convergence results) the quality of the ACORN sequences increases with increasing modulus and order.
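The integer-arithmetic formulation with the recommended parameter choices (order at least 9, odd seed, modulus 2^(30p)) can be sketched as follows. This is an illustration, not the paper's code; the particular seed and the choice to initialise every state entry from it are ours. Python integers are arbitrary precision, so M = 2^60 (p = 2) needs no special handling.

```python
# Integer-arithmetic ACORN with the recommended parameter choices:
# order k >= 9, odd seed, modulus M = 2**(30*p) with p = 2.
# Initialising every state entry from the seed is an illustrative choice.

M = 2 ** 60          # modulus 2**(30*p), p = 2
K = 10               # order (recommendation: at least 9)
SEED = 123456789     # must be odd

def acorn_int(seed, k, m_mod, n):
    """Return n uniform variates in [0, 1) from a kth-order integer ACORN."""
    y = [seed % m_mod] * (k + 1)      # y[0] is the fixed (odd) seed
    out = []
    for _ in range(n):
        for m in range(1, k + 1):
            y[m] = (y[m - 1] + y[m]) % m_mod
        out.append(y[k] / m_mod)      # scale the kth entry to [0, 1)
    return out

a = acorn_int(SEED, K, M, 1000)
b = acorn_int(SEED, K, M, 1000)
assert a == b   # integer arithmetic: identical sequences on any machine
```

Because only integer additions modulo a power of two are involved, the sequence is bit-for-bit reproducible across machines and compilers, which is the property the abstract highlights.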

... Additive generators calculate each number as some additive combination of the previous n numbers in the sequence. R. S. Wikramaratna [175,176,177] proposed the kth order ACORN (additive congruential random number) generator X_j^k, a more general recursive method than the linear congruential, which combines the previous number in the sequence with a corresponding number from the (k − 1)th order sequence. X_j^k is defined recursively from a seed X_0^0 (0 < X_0^0 < 1) and a set of k initial values X_0^m, m = 1, ..., k, each satisfying 0 ≤ X_0^m ≤ 1, by: ...

... Some other generators, such as the additive method presented by Green, Smith and Klem [73], allow some theoretical analysis as well. Wikramaratna [175] shows some theoretical results for his additive congruential generator. The interested reader can check those references for further explanations of the tests. ...

... Additive Congruential Method (acorn): This is the generator proposed by Wikramaratna [175] in real arithmetic. The seeds used must be real values between 0 and 1. ...

A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Mining Engineering, Department of Civil and Environmental Engineering. Thesis (Ph.D.)--University of Alberta, 2003. Includes bibliographical references.

... then f belongs to a rather large class of functions provided P is 1. This general form is identical, for instance, to that presented by Wikramaratna (1989), in which f provides an additive congruential relationship. Pickover (1995) asserts that almost any function can be used to yield uniformly distributed random numbers in the interval ]0,1[, and offers the following specific algorithm as one example (the "Cliff" RNG): ...

... A consideration is the quality of random numbers obtained based on some function, such as the Cliff or cosine generators, in comparison to other published algorithms. Generators chosen for comparison are Schrage (1979); Wikramaratna (1989), as implemented in Deutsch and Journel (1992); and Marsaglia (1972), as implemented in Deutsch and Journel (1992). Scatterplots (Fig. 2) and quantiles (Table 1) are similar for all algorithms. ...

A wide variety of random number generators is discussed based on truncating functional outcomes and considering the fractional remainders as random digits in the interval ]0,1[. These generators do not require seeding in the traditional sense and, moreover, offer an infinite number of outcomes, apparently without periodicity. These generators are trivial in their software implementation. Those that are based on logarithms perform best in tests of randomness. When applied for spatial simulation, though, the quality of the random number generator seems unimportant to the outcome.
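The "Cliff" RNG mentioned in the excerpt above is one such truncating generator: each output is the fractional part of |100 ln x|, so the seed merely picks a starting point on the orbit rather than seeding a congruence. A minimal sketch (the zero-guard is our defensive addition):

```python
import math

# Sketch of Pickover's "Cliff" generator: the next value is the
# fractional remainder of |100 * ln(x)|.  The constant 100 follows the
# published formula; the guard against an exact zero is ours.

def cliff(seed, n):
    """Return n values in [0, 1) from the Cliff map x -> |100 ln x| mod 1."""
    x = seed
    out = []
    for _ in range(n):
        x = abs(100.0 * math.log(x)) % 1.0
        if x == 0.0:              # log(0) would fail next step; nudge
            x = 0.5
        out.append(x)
    return out

sample = cliff(0.1, 100)
```

The trivial implementation is exactly what the abstract claims: a one-line map, no modulus, no state beyond the current value.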

... The fact that centro-invertible matrices arise naturally in a real application makes them worthy of study. The ACORN pseudo-random number generator was first proposed by Wikramaratna [1] in 1989 as a method for generating uniformly-distributed pseudo-random numbers. Subsequent theoretical analysis led to a demonstration (Wikramaratna [2]) some 20 years later that the ACORN random number generator is a special case of a matrix generator, implemented in modular arithmetic modulo M, for which the matrix turns out to be centro-invertible. ...

... There is one particular example of a real application in which centro-invertible matrices occur naturally; this arises in the analysis of the ACORN algorithm (which is used for generating pseudo-random numbers that are uniformly distributed in the unit interval). The kth order Additive Congruential Random Number (ACORN) generator is defined by Wikramaratna [1,5] from an integer modulus M (which should be a large integer, typically equal to 2^i for some integer i; values of the form p^i where p is any prime number can also be considered), an integer seed Y_0 ...

This paper defines a new type of matrix (which will be called a centro-invertible matrix) with the property that the inverse can be found by simply rotating all the elements of the matrix through 180 degrees about the mid-point of the matrix. Centro-invertible matrices have been demonstrated in a previous paper to arise in the analysis of a particular algorithm used for the generation of uniformly-distributed pseudo-random numbers. An involutory matrix is one for which the square of the matrix is equal to the identity. It is shown that there is a one-to-one correspondence between the centro-invertible matrices and the involutory matrices. When working in modular arithmetic this result allows all possible k by k centro-invertible matrices with integer entries modulo M to be enumerated by drawing on existing theoretical results for involutory matrices. Consider the k by k matrices over the integers modulo M. If M takes any specified finite integer value greater than or equal to two then there are only a finite number of such matrices and it is valid to consider the likelihood of such a matrix arising by chance. It is possible to derive both exact expressions and order-of-magnitude estimates for the number of k by k centro-invertible matrices that exist over the integers modulo M. It is shown that order √N of the N = M^(k²) different k by k matrices modulo M are centro-invertible, so that the proportion of these matrices that are centro-invertible is of order 1/√N.
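The one-to-one correspondence can be checked by brute force on a small case. Writing J for the exchange (row-reversal) matrix, rotating A through 180 degrees gives JAJ, so A·rot180(A) = (AJ)², and A is centro-invertible exactly when AJ is involutory. The sketch below verifies this over the 2×2 matrices modulo 3 (our choice of size and modulus for illustration):

```python
from itertools import product

# Brute-force check of the centro-invertible <-> involutory bijection
# over the 2x2 matrices modulo 3: rot180(A) = J A J, so
# A * rot180(A) = (A J)^2, and A -> A J maps the centro-invertible
# matrices one-to-one onto the involutory matrices.

M = 3
J = ((0, 1), (1, 0))                      # 2x2 exchange (reversal) matrix
I = ((1, 0), (0, 1))

def matmul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) % M
                       for j in range(2)) for i in range(2))

def rot180(a):
    return ((a[1][1], a[1][0]), (a[0][1], a[0][0]))

all_mats = [((p, q), (r, s)) for p, q, r, s in product(range(M), repeat=4)]
centro = [m for m in all_mats if matmul(m, rot180(m)) == I]
invol = [m for m in all_mats if matmul(m, m) == I]

# The bijection A -> A J carries one set exactly onto the other.
assert sorted(matmul(m, J) for m in centro) == sorted(invol)
```

For this case N = 3^4 = 81 and the brute-force count of centro-invertible matrices is 14, of the same order as √N = 9, consistent with the order-of-magnitude estimate quoted above.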

... The main classes are congruential and recursive generators. Common congruential generators include linear, quadratic, inversive, additive and parallel linear congruential generators [14,45]. Recursive generators include multiplicative recursive, lagged Fibonacci, multiply-with-carry, add-with-carry and subtract-with-borrow generators [14]. ...

... The seed y_0 is a large prescribed integer. An additive congruential generator (of kth order) [45], to be described below, requires k + 1 seeds 0 ≤ y_j^0 < M, j = 0, ..., k, and the parallel linear generator includes three separate linear generators denoted by y_i, ȳ_i and ŷ_i. ...

Genetic algorithms are commonly used metaheuristics for global optimization, but there has been very little research done on the generation of their initial population. In this paper, we look for an answer to the question whether the initial population plays a role in the performance of genetic algorithms and, if so, how it should be generated. We show with a simple example that initial populations may have an effect on the best objective function value found for several generations. Traditionally, initial populations are generated using pseudo random numbers, but there are many alternative ways. We study the properties of different point generators using four main criteria: the uniform coverage and the genetic diversity of the points as well as the speed and the usability of the generator. We use the point generators to generate initial populations for a genetic algorithm and study what effects the uniform coverage and the genetic diversity have on the convergence and on the final objective function values. For our tests, we have selected one pseudo and one quasi random sequence generator and two spatial point processes: simple sequential inhibition process and nonaligned systematic sampling. In numerical experiments, we solve a set of 52 continuous test functions from 16 different function families, and analyze and discuss the results.
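One of the two spatial point processes named above, simple sequential inhibition, can be sketched briefly: candidate points are drawn uniformly and accepted only if they keep a minimum distance to every point accepted so far. The inhibition radius, population size and retry cap below are our illustrative choices, not the paper's settings.

```python
import random

# Simple sequential inhibition (SSI) in the unit square: accept a
# uniform candidate only if it lies at least r_min from every point
# accepted so far.  This spreads an initial population more evenly
# than independent uniform draws.

def ssi_points(n, r_min, rng, max_tries=10000):
    """Return up to n points in [0,1)^2 with pairwise distance >= r_min."""
    pts = []
    tries = 0
    while len(pts) < n and tries < max_tries:
        tries += 1
        cand = (rng.random(), rng.random())
        if all((cand[0] - p[0]) ** 2 + (cand[1] - p[1]) ** 2 >= r_min ** 2
               for p in pts):
            pts.append(cand)
    return pts

population = ssi_points(20, 0.1, random.Random(42))
```

The retry cap guards against the process stalling when the requested count is near the packing limit for the chosen radius.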

... Additive Congruential Random Number (ACORN) generator, introduced by R. S. Wikramaratna [35], was originally designed for use in geostatistical and geophysical Monte Carlo simulations, and later extended for use on parallel computers. [36] We define [36] the kth order ACORN generator X_j^k recursively from a seed X_0^0 (where 0 < X_0^0 < M and M = 1, 2, . . . ) and a set of ...

Most random numbers used in computer programs are pseudorandom, which means they are generated in a predictable fashion using a mathematical formula. This is acceptable for many purposes, sometimes even desirable. In this paper we will take a look at a few popular generators producing pseudorandom integers from the continuous uniform distribution. Then we will use such a generator to implement a generator producing numbers from the interval ]0, 1[, and, on its basis, generators of numbers from the Bernoulli, binomial, Poisson, exponential and normal distributions.
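The derivation of non-uniform variates from a uniform source typically uses the inverse-transform method. A sketch for two of the distributions named above (the fixed `uniforms` list stands in for any uniform generator from the paper):

```python
import math

# Inverse-transform sketches: given uniform variates in ]0, 1[,
# derive Bernoulli and exponential variates.

def bernoulli(u, p):
    """Bernoulli(p) by thresholding a uniform variate."""
    return 1 if u < p else 0

def exponential(u, lam):
    """Exponential(lam) by inverting the CDF: F^-1(u) = -ln(1-u)/lam."""
    return -math.log(1.0 - u) / lam

uniforms = [0.1, 0.5, 0.9]              # stand-in uniform source
bits = [bernoulli(u, 0.5) for u in uniforms]
exps = [exponential(u, 2.0) for u in uniforms]
```

The normal case usually needs a different route (e.g. Box–Muller), since its CDF has no closed-form inverse; binomial and Poisson variates can be built by summing Bernoulli trials or by sequential search over the CDF.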

... The two main optimization parameters are the number of random restarts to perform, and the number of locations to randomly reset during each restart. The random number seed initializes an acorni (Wikramaratna, 1989) random number generator that controls the random paths and random restart locations. ...

The mining industry has become increasingly concerned with the effects of uncertainty and risk in resource modeling. Some companies are moving away from deterministic geologic modeling techniques to approaches that quantify uncertainty. Stochastic modeling techniques produce multiple realizations of the geologic model to quantify uncertainty, but integrating these results into pit optimization is non-trivial.
Conventional pit optimization calculates optimal pit limits from a block model of economic values and precedence rules for pit slopes. There are well established algorithms for this including Lerchs-Grossmann, push-relabel and pseudo-flow; however, these conventional optimizers have limited options for handling stochastic block models. The conventional optimizers could be modified to incorporate a block-by-block penalty based on uncertainty, but not uncertainty in the resource within the entire pit.
There is a need for a new pit limit optimizing algorithm that would consider multiple block model realizations. To address risk management principles in the pit shell optimization stage, a novel approach is presented for optimizing pit shells over all realizations. The inclusion of multiple realizations provides access to summary statistics across the realizations such as the risk or uncertainty in the pit value. This permits an active risk management approach.
A heuristic pit optimization algorithm is proposed to target the joint uncertainty between multiple input models. A practical framework is presented for actively managing the risk by adapting Harry Markowitz’s “Efficient Frontier” approach to pit shell optimization. Choosing the acceptable level of risk along the frontier can be subjective. A risk-rating modification is proposed to minimize some of the subjectivity in choosing the acceptable level of risk. The practical application of the framework using the heuristic pit optimization algorithm is demonstrated through multiple case studies.

... The second is a local definition of an array, denoted ixv, which is stored in a common block used by the GSLIB utility routine acorni. This utility calculates pseudo-random numbers using a seed as an input, storing it in the first element of the array ixv at the beginning of the execution (Wikramaratna, 1989). In the single-thread version, these optimizations deliver a speedup of 1.53×/1.32×/1.38×, matching the numerical results of the original sgsim application. ...

The Geostatistical Software Library (GSLIB) has been used in the geostatistical community for more than thirty years. It was designed as a bundle of sequential Fortran codes, and today it is still in use by many practitioners and researchers. Despite its widespread use, few attempts have been reported to bring this package to the multi-core era. Using all CPU resources, GSLIB algorithms can handle large datasets and grids, where tasks are compute- and memory-intensive applications. In this work, a methodology is presented to accelerate GSLIB applications using code optimization and hybrid parallel processing, specifically for compute-intensive applications. Minimal code modifications are added, decreasing as much as possible the elapsed execution time of the studied routines. If multi-core processing is available, the user can activate OpenMP directives to speed up the execution using all resources of the CPU. If multi-node processing is available, the execution is enhanced using MPI messages between the compute nodes. Four case studies are presented: experimental variogram calculation, kriging estimation, and sequential Gaussian and indicator simulation. For each application, three scenarios (small, large and extra large) are tested using a desktop environment with 4 CPU-cores and a multi-node server with 128 CPU-nodes. Elapsed times, speedup and efficiency results are shown.

... We present detailed simulation results for one particular value of the seed, but simulations with a different initial seed gave very similar results, as quantified by the error indicators on the diagnostics in Table III, as did simulations with the Mersenne Twister replaced by the ACORN random number generator. 72 In both cases we executed the generator on the host microprocessor, and copied the resulting sequence of random numbers to the GPU. A simple linear congruential generator running directly on the GPU produced noticeably different results from these two much more sophisticated generators. ...

Conventional shallow water theory successfully reproduces many key features of the Jovian atmosphere: a mixture of coherent vortices and stable, large-scale, zonal jets whose amplitude decreases with distance from the equator. However, both freely decaying and forced-dissipative simulations of the shallow water equations in Jovian parameter regimes invariably yield retrograde equatorial jets, while Jupiter itself has a strong prograde equatorial jet. Simulations by Scott and Polvani ["Equatorial superrotation in shallow atmospheres," Geophys. Res. Lett. 35, L24202 (2008)] have produced prograde equatorial jets through the addition of a model for radiative relaxation in the shallow water height equation. However, their model does not conserve mass or momentum in the active layer, and produces mid-latitude jets much weaker than the equatorial jet. We present the thermal shallow water equations as an alternative model for Jovian atmospheres. These equations permit horizontal variations in the thermodynamic properties of the fluid within the active layer. We incorporate a radiative relaxation term in the separate temperature equation, leaving the mass and momentum conservation equations untouched. Simulations of this model in the Jovian regime yield a strong prograde equatorial jet, and larger amplitude mid-latitude jets than the Scott and Polvani model. For both models, the slope of the non-zonal energy spectra is consistent with the classic Kolmogorov scaling, and the slope of the zonal energy spectra is consistent with the much steeper spectrum observed for Jupiter. We also perform simulations of the thermal shallow water equations for Neptunian parameter values, with a radiative relaxation time scale calculated for the same 25 mbar pressure level we used for Jupiter. These Neptunian simulations reproduce the broad, retrograde equatorial jet and prograde mid-latitude jets seen in observations. The much longer radiative time scale for the colder planet Neptune explains the transition from a prograde to a retrograde equatorial jet, while the broader jets are due to the deformation radius being a larger fraction of the planetary radius. © 2014 AIP Publishing LLC. [http://dx.doi.org/10.1063/1.4861123]

... Additive generators calculate each pseudo-random number as some additive combination of the previous numbers in the sequence [2]. R. S. Wikramaratna [3,4,5] proposed the acorn generator and its integer version acorni. The latter version is used in GSLIB [1]. ...

The use of acorni in scripts is common. Initializing the random number generator with seeds that are incremented by a constant value facilitates the generation of multiple realizations. However, if the seed numbers are shifted by a constant, there is a constant difference between the random numbers generated. This can introduce artifact correlation in the results of multiple variables. Some examples are provided in this note and a simple solution is suggested to avoid this problem.
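The artifact follows from the linearity of the ACORN recurrence: two runs whose seeds differ by a constant d produce state differences that depend only on d, never on the seeds themselves. A sketch with an acorni-style state (seed stored in the first slot, remaining entries zero, as in GSLIB); the modulus, order and seed values are illustrative:

```python
# Two ACORN streams whose seeds differ by a constant d: because the
# state update is linear, the difference between the streams depends
# only on d, not on the seeds, so every seed pair shifted by the same
# constant shows the same correlated pattern.

M, K = 2 ** 30, 12

def stream(seed, n):
    y = [seed % M] + [0] * K          # seed in the first element, as in acorni
    out = []
    for _ in range(n):
        for m in range(1, K + 1):
            y[m] = (y[m - 1] + y[m]) % M
        out.append(y[K])
    return out

d = 1000
diff_a = [(x - z) % M for x, z in zip(stream(69069, 50), stream(69069 + d, 50))]
diff_b = [(x - z) % M for x, z in zip(stream(424242, 50), stream(424242 + d, 50))]
assert diff_a == diff_b   # difference pattern depends only on the shift d
```

This is why incrementing seeds by a constant across realizations is risky: the "independent" streams share a deterministic difference structure.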

... Pseudorandom numbers are deterministic, but they try to imitate an independent sequence of genuine random numbers. Common pseudorandom number generators include, among others, linear congruential, quadratic congruential, inversive congruential, parallel linear congruential, additive congruential, lagged Fibonacci, and feedback shift register generators (see, for example, [6,13,14]). In addition, there exist numerous modifications and combinations of the basic generators [13]. ...

The selection of the initial population in a population-based heuristic optimizationmethod is important, since it affects the search for several iterations and often has an influence on the final solution. If no a priori information about the optima is available, the initial population is often selected randomly using pseudorandom numbers. Usually, however, it is more important that the points are as evenly distributed as possible than that they imitate random points. In this paper, we study the use of quasi-random sequences in the initial population of a genetic algorithm. Sample points in a quasi-random sequence are designed to have good distribution properties. Here a modified genetic algorithm using quasi-random sequences in the initial population is tested by solving a large number of continuous benchmark problems from the literature. The numerical results of two implementations of genetic algorithms using different quasi-random sequences are compared to those of a traditional implementation using pseudorandom numbers. The results obtained are promising.

... "unif01.h": unif01_Gen * uvaria_CreateACORN (int k, double S[]); — Initializes an ACORN (Additive COngruential Random Number) generator [173] of order k whose initial state is given by the vector S[0..(k-1)]. ...

This document describes the software library TestU01, implemented in the ANSI C language and offering a collection of utilities for the (empirical) statistical testing of uniform random number generators (RNGs). The library implements several types of generators in generic form, as well as many specific generators proposed in the literature or found in widely-used software. It provides general implementations of the classical statistical tests for random number generators, as well as several others proposed in the literature, and some original ones. These tests can be applied to the generators predefined in the library and to user-defined generators. Specific test suites for either sequences of uniform random numbers in [0, 1] or bit sequences are also available. Basic tools for plotting vectors of points produced by generators are provided as well. Additional software permits one to perform systematic studies of the interaction between a specific test and the structure of the point sets produced by a given family of RNGs: that is, for a given kind of test and a given class of RNGs, to determine how large the sample size of the test should be, as a function of the generator's period length, before the generator starts to fail the test systematically.

... For the specific parameters used, equals −1. Table 1 summarizes the data sets that have been simulated and the estimators that have been used for each case. Random numbers were generated using the ACORN random number generator (Wikramaratna, 1989). These distributions have also been studied in Beirlant et al. (1996a), Beirlant et al. (1996b) and Caers et al. (1998). Table 1 also shows for each case the optimal number of data k_opt that should be retained for the tail estimation and the MSE, the bias and the variance of the estimator for that value of k. ...

Extreme value theory has led to the development of various statistical methods for nonparametric estimation of distribution tails. A common problem in all of these estimators is the choice of the number of extreme data that should be used in the estimation and the construction of confidence intervals on the estimator. In this paper, we outline a method that uses the nonparametric bootstrap for both problems. The bootstrap is twofold: (1) the first bootstrap is used to estimate the optimal number of extremes – in the mean square error sense – to be used for the tail index estimation as has been earlier suggested by Hall (1990, J. Multivariate Anal. 32 (1990) 177–203), and (2) the second bootstrap is used to obtain confidence intervals. The method has been applied to data generated by Monte Carlo simulation for a variety of distributions and on this basis the performance of the method will be assessed.
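The second bootstrap stage described above can be illustrated in isolation: a percentile-bootstrap confidence interval for the Hill tail-index estimator. The data (a Pareto sample), the choice of k and the number of resamples are our illustrative choices, not those of the paper:

```python
import math
import random

# Percentile-bootstrap confidence interval for the Hill tail-index
# estimator, illustrating the second bootstrap stage described above.

def hill(data, k):
    """Hill estimator from the k largest observations."""
    x = sorted(data, reverse=True)
    return sum(math.log(x[i] / x[k]) for i in range(k)) / k

rng = random.Random(7)
data = [(1.0 - rng.random()) ** -1.0 for _ in range(500)]  # Pareto(1) sample

k = 50
est = hill(data, k)                     # point estimate of the tail index
boot = sorted(hill([rng.choice(data) for _ in data], k) for _ in range(200))
lo, hi = boot[4], boot[194]             # ~95% percentile interval
```

The first stage of the method (choosing the optimal k in the mean-square-error sense) would wrap a similar resampling loop around a grid of candidate k values.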

... A random path is defined for visiting sequentially, once and only once, each DEM node. Each node is first assigned an index ranging from 1 to N, and then a node is considered (visited) at random by drawing a random number uniformly distributed in [1, N] (Wikramaratna 1989). 2. At any simulation node u_i along the random path: ...
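The random-path construction in the excerpt above amounts to producing a uniform random permutation of the node indices from uniform draws, which can be sketched as a Fisher–Yates shuffle (our formulation, with 0-based indices):

```python
import random

# Random path over N nodes: each node index is visited exactly once,
# in an order built from uniform draws (a Fisher-Yates shuffle).

def random_path(n_nodes, rng):
    """Return a permutation of 0..n_nodes-1 defining the visiting order."""
    path = list(range(n_nodes))
    for i in range(n_nodes - 1, 0, -1):
        j = rng.randrange(i + 1)      # uniform draw in [0, i]
        path[i], path[j] = path[j], path[i]
    return path

order = random_path(10, random.Random(1))
```

Each simulation node is then processed in the order given by `order`, guaranteeing the once-and-only-once property.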

A geostatistical methodology is proposed for integrating elevation estimates derived from digital elevation models (DEMs) and elevation measurements of higher accuracy, e.g., elevation spot heights. The sparse elevation measurements (hard data) and the abundant DEM-reported elevations (soft data) are employed for modeling the unknown higher accuracy (reference) elevation surface in a way that properly reflects the relative reliability of the two sources of information. Stochastic conditional simulation is performed for generating alternative, equiprobable images (numerical models) of the unknown reference elevation surface using both hard and soft data. These numerical models reproduce the hard elevation data at their measurement locations, and a set of auto- and cross-covariance models quantifying spatial correlation between data of the two sources of information at various spatial scales. From this set of alternative representations of the reference elevation, the probability that the unknown reference value is greater than that reported at each node in the DEM is determined. Joint uncertainty associated with spatial features observed in the DEM, e.g. the probability that an entire ridge exists, is also modeled from this set of alternative images. A case study illustrating the proposed conflation procedure is presented for a portion of a USGS one-degree DEM. It is suggested that maps of local probabilities for over or underestimation of the unknown reference elevation values from those reported in the DEM, and joint probability values attached to different spatial features, be provided to DEM users in addition to traditionally reported summary statistics used to quantify DEM accuracy. Such a metadata element would be a valuable tool for subsequent decision-making processes that are based on the DEM elevation surface, or for targeting areas where more accurate elevation measurements are required.

Genetic algorithms are suitable for optimization problems that possess multiple local optima. They can provide excellent approximate solutions to a wide range of problems that would either take an infinite time to solve through an exhaustive search or require a huge amount of computational time. In this article, we describe the implementation of genetic algorithms on the multi-dimensional knapsack problem (MKP), which is an NP-hard optimization problem. Most research papers have dealt only with the simplest version of the knapsack problem, the 0/1 knapsack problem, and have provided algorithms that can find near-optimal solutions. However, there are not enough studies that deal with the multi-dimensional knapsack problem (MKP) with multiple constraints, which is a strongly NP-hard combinatorial optimization problem occurring in many different applications. The goal of this article is to present an approach to solving the multi-dimensional knapsack problem (MKP) with a sophisticated genetic algorithm.

Conventional geostatistical simulation methods are implemented in a way that is inherently gridded and sequence dependent. A variant of spectral simulation method is revisited from a linear model of regionalization standpoint to simulate realizations of coregionalized variables that are expressed as a function of the coordinates of the simulation locations, data values, and imposed spatial structure. The resulting grid-free simulation (GFS) methodology expresses a realization at any set of regularly or irregularly distributed nodes. GFS consists of two main steps: unconditional grid-free simulation and dual cokriging-based conditioning. The unconditional multivariate simulation is represented by a linear model of coregionalization, where weights are derived from the covariance function of the modeled system, and random factors are computed as a sum of equally weighted line processes within a turning bands paradigm. These stochastic line processes are expressed as a linear model of regionalization, weights are from the Fourier series decomposition of line covariance functions, and random factors have a cosine function form requiring coordinates of simulation locations and the random phases. The resulting conditionally simulated values are uniquely tied to simulation locations by an analytical form. Newly assimilated data change current realizations only locally within the correlation range. The GFS parameters are carefully chosen from a series of examples, and the associated theory is illustrated with a three-dimensional case study.

The selection of initial points in a population-based heuristic optimization method is important since it affects the search for several iterations and often has an influence on the final solution. If no a priori information about the optimization problem is available, the initial population is often selected randomly using pseudo random numbers. Many times, however, it is more important that the points are as evenly distributed as possible than that they imitate random points. Therefore, we have studied the use of quasi random sequences in the initialization of a genetic algorithm. Sample points in a quasi random sequence are designed to have very good distribution properties. The modified genetic algorithms using quasi random sequences in the initial population have been tested by solving a large number of continuous benchmark problems from the literature. The numerical results of three genetic algorithm implementations using different quasi random sequences have been compared to those of a traditional implementation using pseudo random numbers. The results are promising.

Analyses within the field of GIS are increasingly applying stochastic methods and systems that make use of pseudo-random number generators (PRNGs). Examples include Monte Carlo techniques, dynamic modelling, stochastic simulation, artificial life and simulated data development. PRNGs have inherent biases, and this will in turn bias any analyses using them. Therefore, the validity of stochastic analyses is reliant on the PRNG employed. Despite this, the effect of PRNGs in spatial analyses has never been completely explored, particularly a comparison of different PRNGs. Exacerbating the problem is that GIS articles applying Monte Carlo or other stochastic methods rarely report which PRNG is employed. It thus appears likely that GIS researchers rarely, if ever, check the suitability of the PRNG employed for their analyses or simulations. This paper presents a discussion of some of the characteristics of PRNGs and specific issues from a geospatial standpoint, including a demonstration of the differences in the results of a Monte Carlo analysis obtained using two different PRNGs. It then makes recommendations for the application of PRNGs in spatial analyses, including recommending specific PRNGs that have attributes appropriate for geospatial analysis. The paper concludes with a call for more research into the application of PRNGs to spatial analyses to fully understand the impact of biases, especially before they are routinely used in the wider spatial analysis community.

Sequential Gaussian simulation is a widely used algorithm for the stochastic characterization of properties from various earth science disciplines. Many variants have been developed to deal with the increasing complexity of modeling applications. The program described in this paper is a flexible, tested, and documented implementation. Multiple variables can be cosimulated within different rock types simultaneously. The stepwise transform is integrated into the program as are collocated cokriging, collocated cokriging with the intrinsic model, and cokriging with a linear model of coregionalization for the cosimulation of multiple variables. Multiple secondary data can be incorporated using locally varying means, collocated cokriging, and Bayesian updating. The search options and other parameters are flexible within rock types. Fortran source code and a compiled executable are provided.

Monte-Carlo simulations are common and inherently well suited to parallel processing, thus requiring random numbers that are also generated in parallel. We describe here a splitting approach for parallel random number generation. Various definitions of the Monte-Carlo method have been given. As one example (1): "The Monte-Carlo method is defined as representing the solution of a problem as a parameter of a hypothetical population, and using a random sequence of numbers to construct a sample of the population from which statistical estimates of the parameters can be obtained." The method has been defined more broadly (2) as "any technique making use of random numbers to solve a problem." A more encompassing description (6) is "a numerical method based on random sampling." In this article we take the definition in its most general sense, and it is the random aspect of Monte Carlo that is our focus. Monte-Carlo simulation consists of repeating the same basic calculation a large number of times with different input data and then performing some statistical analysis on the set of results. Input data for the different "trials" are selected using values in prescribed distributions, using a pseudo-random number generator. The basic computation typically involves a significant amount of calculation, so that the pseudo-random number generation itself represents a small fraction of the total computational effort. In theory, the accuracy of the method improves with the number of trials. In practice, this improvement is dependent on the quality of the pseudo-random number generator used. The obvious approach to parallelization—which has led to the description of Monte-Carlo calculations as "inherently parallel" or "naturally parallel"—involves simply assigning each trial to any available processor. Provided that the number of trials is large compared with the number of processors available, this approach can lead to efficient parallel computation.
This is so even when the computational effort varies from one trial to another. When computational speeds also vary significantly among processors, a slightly more sophisticated approach to scheduling may be desirable. However, the basic approach remains the same. The success of any Monte-Carlo application depends crucially on the quality of the pseudo-random number generators used. To achieve the theoretical convergence rates associated with the method, the pseudo-random number generators must have certain properties. Both the quality of the generators and the statistical independence of the results calculated on each processor are important. A recent article in SIAM News (5) distinguished between two alternative approaches to the parallelization of Monte-Carlo applications. With the first approach, parameterization, a family of random number generators is defined by a recursion containing a parameter that can be varied. Each valid value of the parameter leads to a recursion that produces a unique, full-period stream of pseudo-random numbers that can be used on a particular processor. By using a different parameter value on each of P identical processors, it is possible to undertake Monte-Carlo-type calculations in parallel, with a speedup approaching P. The benefits of parallelization can be realized only when the results calculated for each processor are statistically independent, i.e., when the streams of random numbers generated on each processor are independent. As the number of processors increases, additional valid parameters are needed; even if consideration is limited to pairwise independence of the random number generators, it is necessary to consider P(P + 1)/2 such pairs of generators for an implementation on P processors. Three different methods for parallel random number generation were considered in (5): linear congruential generators, shift-register generators, and lagged-Fibonacci generators.
All of these generators are amenable to various forms of parameterization. The second approach, which was not considered in (5), is splitting. In splitting, the output from a single random number generator with a long period is split into a number of substreams. The substreams are then used either on different processors or for different trials in the Monte-Carlo calculation. For the splitting approach to be feasible, the underlying generator must satisfy several conditions:
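The splitting idea itself can be sketched with any long-period generator whose jump-ahead is cheap; the multiplicative Lehmer generator below (modulus 2³¹ − 1, multiplier 16807) is only an illustration, not one of the generators the article analyses:

```python
# A sketch of splitting: one long-period generator is divided into
# substreams by jumping ahead a fixed number of steps per processor.
# The Lehmer parameters here are illustrative only.

M = 2**31 - 1
A = 16807

def lcg_stream(x0, n):
    """n successive states of the generator, starting from state x0."""
    x, out = x0, []
    for _ in range(n):
        x = (A * x) % M
        out.append(x)
    return out

def jump_ahead(x0, J):
    """State after J steps, via modular exponentiation (O(log J))."""
    return (pow(A, J, M) * x0) % M

# Four substreams of length 1000 carved from a single sequence:
streams = [lcg_stream(jump_ahead(12345, p * 1000), 1000) for p in range(4)]
```

Each processor p works on the substream starting J·p steps into the single master sequence, so substream p + 1 continues exactly where substream p ends.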

The iteration of Chebyshev polynomials generates mixing transformations that model canonical features of chaotic systems. These include pseudo-random evolution, ergodicity, fading memory, and the irreversible dispersal of any set of positive measure throughout the mixing region. Mixing processes are also analytically and numerically unstable. Nevertheless, iterative interpolation, or numerical retrodiction, demonstrates that the computer generated trajectories are shadowed within strict error bounds by exact Chebyshev iterates. Pervasive shadowing is, however, not sufficient to ensure a generic correspondence between computer simulations and “true dynamics”. This latitude is illustrated by several basic distinctions between the computer generated orbit structures and the exact analytic orbits of the Chebyshev mixing transformations.

This paper considers an approach to generating uniformly distributed pseudo-random numbers which works well in serial applications but which also appears particularly well-suited for application on parallel processing systems. Additive Congruential Random Number (ACORN) generators are straightforward to implement for arbitrarily large order and modulus; if implemented using integer arithmetic, it becomes possible to generate identical sequences on any machine. Previously published theoretical analysis has demonstrated that a kth order ACORN sequence approximates to being uniformly distributed in up to k dimensions, for any given k. ACORN generators can be constructed to give period lengths exceeding any given number (for example, with period length in excess of 2³⁰ᵖ, for any given p). Results of empirical tests have demonstrated that, if p is greater than or equal to 2, then the ACORN generator can be used successfully for generating double precision uniform random variates. This paper demonstrates that an ACORN generator is a particular case of a multiple recursive generator (and, therefore, also a special case of a matrix generator). Both these latter approaches have been widely studied, and it is to be hoped that the results given in the present paper will lead to greater confidence in using the ACORN generators.
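The integer-arithmetic formulation can be sketched as follows: a fixed seed Y⁰ is held constant, and each higher-order element is accumulated modulo M. The modulus M = 2⁶⁰, the order k = 10 and the seed below are illustrative choices only, not recommendations from the paper:

```python
# A sketch of a kth-order ACORN generator in integer arithmetic.
# M, k and the seed are illustrative choices, not the paper's
# recommended parameters.

def acorn(seed, k, M, n):
    """Return n uniform variates from a kth-order ACORN generator.

    seed -- Y^0, an integer with 0 < seed < M (held fixed)
    k    -- order of the generator (initial values Y^m_0 taken as 0 here)
    """
    y = [seed] + [0] * k          # y[m] holds the current Y^m value
    out = []
    for _ in range(n):
        for m in range(1, k + 1): # additive congruential update
            y[m] = (y[m - 1] + y[m]) % M
        out.append(y[k] / M)      # X^k_n in [0, 1)
    return out

values = acorn(seed=123456789, k=10, M=2**60, n=5)
```

Because the state update uses only integer addition modulo M, the same sequence is reproduced exactly on any machine, which is the portability property the paper emphasises.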

In the mind of the average computer user, the problem of generating uniform variates by computer has been solved long ago. After all, every computer system offers one or more function(s) to do so. Many software products, like compilers, spreadsheets, statistical or numerical packages, etc. also offer their own. These functions supposedly return numbers that could be used, for all practical purposes, as if they were the values taken by independent random variables, with a uniform distribution between 0 and 1. Many people use them with faith and feel happy with the results. So, why bother?
Other (less naive) people do not feel happy with the results and with good reasons. Despite renewed crusades, blatantly bad generators still abound, especially on microcomputers [55, 69, 85, 90, 100]. Other generators widely used on medium-sized computers are perhaps not so spectacularly bad, but still fail some theoretical and/or empirical statistical tests, and/or generate easily detectable regular patterns [56, 65].
Fortunately, many applications appear quite robust to these defects. But with the rapid increase in desktop computing power, increasingly sophisticated simulation studies are being performed that require more and more “random” numbers and whose results are more sensitive to the quality of the underlying generator [28, 40, 65, 90]. Sometimes, using a not-so-good generator can give totally misleading results. Perhaps this happens rarely, but can be disastrous in some cases. For that reason, researchers are still actively investigating ways of building generators. The main goal is to design more robust generators without having to pay too much in terms of portability, flexibility, and efficiency. In the following sections, we give a quick overview of the ongoing research. We focus mainly on efficient and recently proposed techniques for generating uniform pseudorandom numbers. Stochastic simulations typically transform such numbers to generate variates according to more complex distributions [13, 25]. Here, “uniform pseudorandom” means that the numbers behave from the outside as if they were the values of i.i.d. random variables, uniformly distributed over some finite set of symbols. This set of symbols is often a set of integers of the form {0, . . . , m - 1} and the symbols are usually transformed by some function into values between 0 and 1, to approximate the U(0, 1) distribution. Other tutorial-like references on uniform variate generation include [13, 23, 52, 54, 65, 84, 89].

Although the multiplicative congruential method for generating pseudo-random numbers is widely used and has passed a number of tests of randomness [1, 2], attempts have been made to find an additive congruential method since it could be expected to be faster. Tests on a Fibonacci sequence [1] have shown it to be unsatisfactory. The sequence xᵢ₊₁ = (2ᵃ + 1)xᵢ + c (mod 2³⁵) (1) has been tested on the IBM 704. In appendix I it is shown that the sequence generates the full period of 2³⁵ numbers for a ≧ 2 and c odd. Similar results obtain for decimal machines. Since multiplication by a power of the base can be accomplished by shifting, which is comparable in speed to addition, this scheme requires essentially three additions. It takes 14 machine cycles on the IBM 704, compared to 28 for the multiplicative method, so that the saving is 168 μs/random number. The scheme has the further advantage that it does not destroy the multiplier-quotient register.
Some tests have been made on the randomness of this sequence for a = 7 and c = 1, and a summary of the results is given in appendix II, where now the random numbers are considered to lie in the interval (0, 1).
The serial correlation coefficient between one member of this sequence and the next is shown by Coveyou [3] to be approximately 0.8 per cent. By taking a = 9 this correlation coefficient can be reduced to approximately 0.2 per cent without increasing the time. Taking a = 21 would make this correlation very small but would require one more machine cycle on the IBM 704. Another way to reduce the correlation is to choose c such that the numerator in Coveyou's expression for the correlation coefficient is zero. This cannot be done exactly since it requires that c = (.5 ± √3/6)·2ᴾ where P is the number of binary digits (excluding sign) in a machine word. However, a machine representation close to either of these numbers should be satisfactory. Some correlations with c = (.788⁺)·2³⁵ and a = 7 were obtained and did not differ significantly from those given for c = 1 in the first section of appendix II.
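The recurrence (1) can be sketched directly, with the multiplication by (2ᵃ + 1) carried out as a shift plus an addition as the paper describes; a = 7, c = 1 is the tested case, and the starting value below is arbitrary:

```python
# A sketch of the sequence in equation (1): multiplication by (2**a + 1)
# done as a shift plus an addition, modulo 2**35. a = 7, c = 1 is the
# tested case; the starting value is arbitrary.

MASK = (1 << 35) - 1        # reduction modulo 2**35

def next_x(x, a=7, c=1):
    """One step of x -> (2**a + 1)*x + c (mod 2**35)."""
    return ((x << a) + x + c) & MASK

x, seq = 1, []
for _ in range(5):
    x = next_x(x)
    seq.append(x / 2**35)   # view the numbers as lying in (0, 1)
```

On a binary machine the shift costs no more than an addition, which is why the scheme amounts to essentially three additions per number.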
The author wishes to thank R. R. Coveyou for communicating his results in advance of publication and Elizabeth Wetherell for carrying out the calculations.

The iteration of the Čebyšev polynomial x² − 2 generates a mixing transformation on the interval x ∈ [−2, 2]. Extensive computer experiments have demonstrated that this is a convenient method for generating sequences of pseudo-random numbers. Despite the eventual domination of cumulative roundoff errors the asymptotic statistical features of the mixing are preserved. Multiple sequences of stochastically independent variables may be generated by these techniques. In practical computations the Čebyšev mixing eventually terminates in long pseudo-ergodic cycles. These results are linked with the general problem of simulating the stochastic behavior of physical systems by means of functional iteration.Contents. 1. Introduction. 2. Randomness Criteria. 3. Computer Simulation of Čebyšev Mixing—3.1. Single Sequences of Pseudo-random Numbers; 3.2. Computer Simulations: Growth of Round-off Errors; 3.3. Computer Simulations: Free Running and Terminal Cycles; 3.4. Multiple Sequences of Pseudo-random Numbers; 4. Čebyšev Mixing Theorems; Product Transformations; Probabilistic Metrics—Asymptotic Dispersion; Kolmogorov Entropy. 5. Variable Precision Simulations. 6. Terminal Cycles; Simulation of Pre-image Chains; Memory-Dependent Feedback. 7. Other Simulations of Random Processes; Probabilistic Metric for Baker's Transformation; Brolin's Theorem; Šarkovskii's Theorem; Hamiltonian Mechanics; Flow Equation; Non-embeddable Functions.
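The iteration itself is a one-liner; the arccosine change of variable below is a standard way (an assumption here, not taken from the abstract) to fold the iterates onto (0, 1), since the map's invariant density on [−2, 2] is 1/(π√(4 − x²)):

```python
# A sketch of the Cebysev iteration x -> x**2 - 2 on [-2, 2], with the
# iterates folded onto [0, 1] via t = arccos(x/2)/pi. The fold is a
# standard equidistribution device, assumed here for illustration.

import math

def chebyshev_stream(x0, n):
    """n iterates of the mixing map, folded onto [0, 1]."""
    x, out = x0, []
    for _ in range(n):
        x = x * x - 2.0
        x = max(-2.0, min(2.0, x))          # guard against roundoff drift
        out.append(math.acos(x / 2.0) / math.pi)
    return out

u = chebyshev_stream(0.3, 1000)
```

In finite precision the orbit eventually falls into one of the terminal cycles the paper analyses, so such streams are usable only for lengths short relative to the cycle structure.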

Nonrandom features of pseudorandom number generators are usually regarded as defects which may be minimized by improving the algorithms or resorting to larger computers. There are, however, certain elements of order which cannot be avoided even on digital devices of arbitrarily large capacity. For instance, on an N-state machine, pseudorandom number generators will terminate on fixed points or fall into loops after approximately √N steps. Combinatorial arguments then can be used to show that for any given algorithm and any finite device it is highly improbable that there are more than three or four distinct terminal loops. All pseudorandom sequences merging into these loops can be traced backwards to their initial numbers; and the resulting pattern of “ancestor numbers” can be charted in detail for any computer, even for noninvertible algorithms. The conflicting requirements of randomness and finite numerical precision lead to an ordered distribution of the set of initial numbers. In this sense neither the initial nor the final states of a simulation of chaotic behavior can ever be random. The “few loop” constraint could generate patterns of self-organization in nonequilibrium systems. Experimental evidence from the hysteresis of Ewing arrays supports this conjecture.
