
An algorithm and coding technique are presented for quick evaluation of the Lehmer pseudo-random number generator modulo 2^31 − 1, a Mersenne prime, which produces 2^31 − 2 numbers, on a p-bit (p > 31) computer. The computation method extends to limited problems in modular arithmetic. A prime factorization of 2^61 − 2 and a primitive root of 2^61 − 1, the next largest Mersenne prime, are given for the possible construction of a pseudo-random number generator of increased cycle length.
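As a concrete illustration (not taken verbatim from the paper), the multiplicative Lehmer scheme it describes can be sketched in a few lines. The multiplier 16807 = 7^5 is an assumption here, though it is the primitive root classically associated with this generator:

```python
M = 2**31 - 1   # prime Mersenne modulus
A = 16807       # 7**5, a primitive root mod M (assumed classic choice)

def lehmer(seed, n):
    """Return n values of the Lehmer recurrence x_{k+1} = A * x_k mod M."""
    x = seed
    out = []
    for _ in range(n):
        x = (A * x) % M
        out.append(x)
    return out
```

Starting from seed 1, the sequence visits every value in {1, ..., M − 1} before repeating, giving the cycle length 2^31 − 2 stated in the abstract.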


... We will assume α = 32. Then, it uses a PRNG technique such as the one defined in [26] to build N coefficients c_1, ..., c_N. ...

... The server will send through each RSU a random sequence of such blocks. Note that by using a simple PRNG such as a Lehmer [26], there is no need of any kind of synchronisation between the server and the receivers. ...

This paper introduces a new approach for infrastructure based content distribution in a vehicular network. It is built on broadcasting and pseudo random network coding. Its main strength is that, being broadcast based, it does not need any feedback channel and thus uses less data rate. Data is transmitted using network coding: multiple linear combinations of data are sent. A vehicle needs to receive a defined number of independent linear combinations to decode the data. The server will send a larger number of different linear combinations. The unreliability of broadcast is thus neutralized through a useful redundancy rather than through re-transmission. Finally, computation of the linear combination coefficients is done so that the overhead is the same as it would be without network coding. Depending on the infrastructure deployed, this technique can be a content distribution solution per se or the first step in a more general solution, the second step being based on collaborative download. The high diversity in transmissions will then be a key feature for the performance of such an application.


... It would be better if this kind of generator provided a longer cycle length to meet the current needs of large-scale simulations. Payne et al. [1969] predicted that, due to increases in computer speed and the next Mersenne prime after 2^31 − 1 being 2^61 − 1, multiplicative congruential generators with modulus 2^61 − 1 would be needed. The time has now arrived. ...

... The division operation (mod m) in multiplicative congruential generators with (prime) modulus m = 2^p − 1 can be performed by shifting and addition [Payne et al. 1969]. To further replace multiplication with shifting and addition, the multiplier a must take the form of simple expressions in 2^k. ...

The demand for random numbers in scientific applications is increasing. However, the most widely used multiplicative congruential random-number generators, with modulus 2^31 − 1, have a cycle length of about 2.1 × 10^9. Moreover, developing portable and efficient generators with a larger modulus such as 2^61 − 1 is more difficult than with modulus 2^31 − 1. This article presents the development of multiplicative congruential generators with modulus m = 2^p − 1 and four forms of multipliers: 2^k1 − 2^k2, 2^k1 + 2^k2, m − 2^k1 + 2^k2, and m − 2^k1 − 2^k2, with k1 > k2. The multipliers for moduli 2^31 − 1 and 2^61 − 1 are evaluated by spectral tests, and the best ones are presented. The generators with these multipliers are portable and very fast. They have also passed several empirical tests, including the frequency test, the run test, and the maximum-of-t test.
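The shift-and-add reduction mentioned above exploits the identity 2^p ≡ 1 (mod 2^p − 1): the high bits of a product can simply be folded back onto the low bits. A minimal sketch for p = 31 (illustrative, not the paper's code):

```python
P = 31
M = (1 << P) - 1  # Mersenne modulus 2^31 - 1

def mod_mersenne(x):
    """Reduce nonnegative x mod 2^P - 1 using only shifts and adds.

    Each pass replaces x by (x & M) + (x >> P), which preserves the
    value mod M because 2^P is congruent to 1 modulo 2^P - 1.
    """
    while x > M:
        x = (x & M) + (x >> P)
    return 0 if x == M else x
```

This avoids the hardware division a general `%` would require, which is the point of restricting the modulus to a Mersenne prime.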

... The generation of normally distributed random numbers is required for many numerical applications. By contrast, most pseudo-random number generators used for computations produce uniformly distributed numbers via bit operations [107,108]. Fortunately, there are methods to convert these uniformly distributed numbers into whichever distribution we might be interested in. Most modern programming languages offer these algorithms implicitly, but I thought it would be useful to describe one of them, the Box-Muller algorithm. ...
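The Box-Muller transform referred to above maps two independent uniform samples to two independent standard normal samples. A self-contained sketch:

```python
import math

def box_muller(u1, u2):
    """Map two independent U(0,1) draws (u1 > 0) to two independent
    N(0,1) samples via the Box-Muller transform."""
    r = math.sqrt(-2.0 * math.log(u1))   # radius from the first uniform
    theta = 2.0 * math.pi * u2           # angle from the second uniform
    return r * math.cos(theta), r * math.sin(theta)
```

For example, u1 = 0.5 and u2 = 0.25 give the pair (0, sqrt(2 ln 2)), since cos(π/2) = 0 and sin(π/2) = 1.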

Understanding chromatin organization and its role in gene regulation is of major importance; however, its underlying dynamics was overlooked until recent years. Here I present results regarding the dynamical properties of chromatin in diverse stages of the cell cycle and a possible connection between gene activity and local diffusion properties. I develop a new computational framework based on Gaussian processes and fractional Brownian motion called GP-FBM. This method infers apparent diffusion and anomalous coefficients more accurately than other popular methods and corrects for confounding background movement. I further introduce a new biopolymer model using a mean-field approach in which Hi-C maps are used to model chromatin long-range interactions. Further, ChIP-seq data are used to calibrate local properties of the nuclear environment. This model was able to recapitulate experimental results for specific loci of the HoxA domain in mouse cells.

... • Mother RNG, available on Marsaglia's website (MOT, Marsaglia, 1994); • Multiply-with-carry RNG (MWC, Marsaglia, 1994); • Combo RNG (COM, Marsaglia, 1994); • Lehmer RNG (LEH, Payne et al., 1969); • Fractional Brownian motion (fBm) and fractional Gaussian noise (fGn); refer to Bardet et al. (2003); • Coloured noise with power spectrum f^−k with k ≥ 0 (Larrondo, 2012); • Linear congruential generator (LCG, Knuth, 1997). ...

This article serves two purposes. Firstly, it surveys the Bandt and Pompe methodology for the statistical community, stressing topics that are open for research. Secondly, it contributes towards a better understanding of the statistical properties of that approach for time series analysis. The Bandt and Pompe methodology consists of computing information theory descriptors from the histogram of ordinal patterns. Such descriptors lie in a 2D manifold: the entropy–complexity plane. This article provides the first proposal of a test in the entropy–complexity plane for the white noise hypothesis. Our test is based on true white noise sequences obtained from physical devices. The proposed methodology provides consistent results: It assesses sequences of true random samples as random (adequate test size), rejects correlated and contaminated sequences (sound test power) and captures the randomness of generators previously analysed in the literature.

... We assume that the assignment operator, all arithmetic operations, and all comparison operations take one time unit. Let the rnd function be calculated by the Lehmer pseudo-random number generator [33] using the following equation: ...

This paper presents a two-dimensional mathematical model of compound eye vision. Such a model is useful for solving navigation issues for autonomous mobile robots on the ground plane. The model is inspired by the insect compound eye that consists of ommatidia, which are tiny independent photoreception units, each of which combines a cornea, lens, and rhabdom. The model describes the planar binocular compound eye vision, focusing on measuring distance and azimuth to a circular feature with an arbitrary size. The model provides a necessary and sufficient condition for the visibility of a circular feature by each ommatidium. On this basis, an algorithm is built for generating a training data set to create two deep neural networks (DNN): the first detects the distance, and the second detects the azimuth to a circular feature. The hyperparameter tuning and the configurations of both networks are described. Experimental results showed that the proposed method could effectively and accurately detect the distance and azimuth to objects.
Published in the "Mathematics" journal.


... We will assume α = 32. It then uses a PRNG technique such as the one defined in [18] to build N coefficients c_1, ..., c_N. ...

This paper introduces PRAVDA, a new approach for infrastructure based content distribution in a vehicular network. PRAVDA is built on broadcasting and pseudo random network coding. Its first strength is that, being broadcast based, it does not need any feedback channel and thus consumes very little throughput. Data is transmitted through network coding: multiple linear combinations of data are sent. A vehicle needs to receive a defined number of independent linear combinations to decode the data. The server will send a larger number of different linear combinations. The unreliability of broadcast is thus fought through a useful redundancy rather than through re-transmission. Finally, computation of the linear combination coefficients is done so that the overhead is the same as it would be without network coding.

... The above conditions can be satisfied if the modulus m is a full repetend prime, or proper prime, in base a [16]. A number m is said to be a full repetend prime if the remainder in (2) repeats with a period of m − 1. ...
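A prime m is a full repetend prime in base a exactly when a is a primitive root modulo m, i.e. when the multiplicative order of a mod m is m − 1. A brute-force order check makes this concrete (illustrative only; it is O(m) and impractical for large moduli):

```python
import math

def is_full_repetend(m, a):
    """Return True if prime m is a full repetend prime in base a,
    i.e. the multiplicative order of a modulo m equals m - 1."""
    if math.gcd(a, m) != 1:
        return False
    k, r = 1, a % m
    while r != 1:          # walk powers of a until we return to 1
        r = (r * a) % m
        k += 1
    return k == m - 1
```

For example, 7 and 17 are full repetend primes in base 10 (1/7 and 1/17 have decimal periods 6 and 16), while 11 is not (1/11 has period 2).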

Selected mapping (SLM) is one of the promising techniques used for peak-to-average power ratio (PAPR) reduction in orthogonal frequency division multiplexing (OFDM) systems. One of the major drawbacks of this technique is that the transmitter is forced to transmit a large number of side information (SI) bits in order to recover the original data at the receiver, which leads to data rate loss and inefficient transmission. In this paper, a new phase sequence generation method using the Lehmer Random Number Generator (LRNG), called the Lehmer sequence, is proposed for the SLM technique. Using the periodicity property of this sequence, the SI bits are embedded within the transmitted data block for 16-PSK modulation, which ensures that the SI bits are not explicitly sent. The simulation results show that the proposed SLM (PSLM) provides a slight improvement in PAPR reduction without compromising the bit error rate (BER) for higher values of the expansion factor when compared to conventional SLM (CSLM).

... Ag and Au atoms in the alloy nanoparticles were generated randomly using the built-in (pseudo-)random number generator of LAMMPS. 46,48 The calculated physical quantities of the alloy NPs, including potential energies and the number of atoms ejected, were also averaged over five replicates with different randomly generated structures. ...

Systematic controls of heat transfer in the surface-assisted laser desorption/ionization (SALDI) process and thus enhancing the analytical performance of SALDI-MS remains a challenging task. In the current study, by tuning...

... The random number generators (RNGs) are kept thread-private and are initiated with independent seeds, which are provided by a different type of RNG [e.g., 16 807 RNG (Ref. 30)] in our implementation. ...

Purpose:
The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on penelope and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field.

... In this way, the smart card successfully checks the identity and password of user U_i. The smart card generates a random number r using a pseudorandom number generator function [43]. For example, r_{j+1} = (a·r_j + b) mod n, where a and b are user-defined parameters, 1 ≤ a ≤ n − 1 and 0 ≤ b ≤ n − 1; r_0 is the seed value, which is also defined by the user. ...
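The recurrence quoted above is a standard linear congruential generator. A short sketch, with purely illustrative parameter values (the cited scheme leaves a, b, n, and the seed to the user):

```python
def lcg_stream(r0, a, b, n, count):
    """Generate `count` values of r_{j+1} = (a * r_j + b) mod n,
    the LCG form used in the cited smart-card scheme."""
    r = r0
    seq = []
    for _ in range(count):
        r = (a * r + b) % n
        seq.append(r)
    return seq
```

For instance, with r_0 = 1, a = 5, b = 3, n = 16 the first three outputs are 8, 11, 10.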

Authentication prevents any illegal access to system resources. An entity authentication scheme is a mechanism to solve the problem of authenticity in a wired or wireless network environment. A remote user authentication scheme proposed by Kim et al. (IEICE Trans Fundam Electron Commun Comput Sci 94(6):1426–1433, 2011) claims to be secure against the offline password guessing attack, unlimited online password guessing attack, server impersonation, user impersonation, and replay attacks. Tai et al. (2012 26th international conference on advanced information networking and applications workshops (WAINA), pp 160–164, 2012) report some fatal security flaws in the password change phase of the Kim et al. scheme. Though these two schemes use the Rabin cryptosystem and claim their suitability for implementation, neither describes the process of selecting one root out of the four plaintexts recovered from a single ciphertext. In this paper, we use the Blum–Blum–Shub pseudo-random bit generator algorithm to select the original one among the four plaintexts. We also present the security analysis of our scheme. Our scheme is more secure and suitable for practical implementation.


... We suggest a uniform distribution algorithm such as the multiplicative congruential algorithm [45,46], which is the basis for many of the random number generators in use today. Lehmer's generators [47] involve three integer parameters, r, s, and m, and an initial value, x_0, called the seed. A sequence is generated by the following modified formula: ...

Wireless sensor network (WSN) consists of many hosts called sensors. These sensors can sense a phenomenon (motion, temperature, humidity, average, max, min, etc.) and represent what they sense in a form of data. There are many applications for WSNs including object tracking and monitoring where in most of the cases these objects need protection. In these applications, data privacy itself might not be as important as the privacy of source location. In addition to the source location privacy, sink location privacy should also be provided. Providing an efficient end-to-end privacy solution would be a challenging task to achieve due to the open nature of the WSN. The key schemes needed for end-to-end location privacy are anonymity, observability, capture likelihood, and safety period. We extend this work to allow for countermeasures against multi-local and global adversaries. We present a network model protected against a sophisticated threat model: passive /active and local/multi-local/global attacks. This work provides a solution for end-to-end anonymity and location privacy as well. We will introduce a framework called fortified anonymous communication (FAC) protocol for WSN.

... The Mersenne twister [28], for example, developed in 1997, generates very high-quality pseudorandom numbers. Others exist such as the Lehmer random number generator [29], a variant of the linear congruential generator. ...

Variation in transistors is increasing as process technology transistor dimensions shrink. Compounded with lowering supply voltage, this increased variation presents new challenges for the circuit designer. However, this variation also brings many new opportunities for the circuit designer to leverage as well.
We present a time-to-digital converter embedded inside a 64-bit processor core, for direct monitoring of on-chip critical paths. This path monitoring allows the processor to monitor process variation and run-time variations. By adjusting to both static and dynamic operating conditions the impact of variations can be reduced. The time-to-digital converter achieves high-resolution measurement in the picosecond range, due to self-calibration via a self-feedback mode. This system is implemented in 45nm silicon and measured silicon results are shown. We also examine techniques for enhanced variation-tolerance in subthreshold digital circuits, applying these to a high fan-in, self-timed transition detection circuit that, due to its self-timing, is able to fully compensate for the large variation in subthreshold.
In addition to mitigating variations we also leverage them for random number generation. We demonstrate that the randomness inherent in the oxide breakdown process can be extracted and applied for the specific applications of on-chip ID generation and on-chip true random number generation. By using dynamic automated self-calibrating algorithms that tune and control the on-chip circuitry, we are able to achieve extremely high-quality results. The two systems are implemented in 65 nm silicon. Measured results for the on-chip ID system, called OxID, show a high-degree of randomness and read-stability in the generated IDs, both primary prerequisites of a high-quality on-chip ID system. Measured results for the true random number generator, called OxiGen, show an exceptionally high degree of randomness, passing all fifteen NIST 800-22 tests for randomness with statistical significance and without the aid of a post-processor.

... The random distribution we use is uniform over the interval [a, b]. The main generators of pseudo-random numbers used today are called linear congruential generators, introduced by Lehmer in 1951 [3][4][5]. A congruential method starts with an initial value (seed) x_0, and successive values x_n, n ≥ 1, are obtained recursively using the following formula: ...

Numerical results are presented on the behavior of diffraction gratings across micro-holographic spatially localized areas, which consist of micro-coded areas with sinusoidal-profile gratings. The random distribution of the micro-areas introduces a random modulation into the diffracted orders, and we observed a characteristic randomness profile. This is a study of the behavior of the random distribution as a function of the form of the micro-areas where the gratings are generated.

Randomness is an important issue for Internet of Things (IoT). The need to generate suitable random numbers for IoT devices with resource and size limitations has emerged due to the cryptographic protocols. Although random number generation approaches have been proposed considering IoT device constraints, commonly used software and hardware-based solutions have not been discussed in detail. The main contribution of this paper is the detailed examination of the problems encountered in random number generation in the IoT ecosystem and the proposed solution approaches. In this context, a classification has been proposed for hardware containing random number generator (RNG), which has different usage areas in IoT environments. Based on the presented classification, the characteristics of the devices in terms of resource constraints are examined. This classification serves as a guide for selecting suitable hardware in applications with or without random number needs. Also, basic RNGs and test suites are discussed. Some challenges are summarized by explaining the random numbers usage in the IoT environment. In addition, proposed random number generation scenarios for IoT devices are determined. Success rate analysis is carried out based on these techniques in terms of randomness tests. Software and hardware-based solution methods are detailed to meet the need for random numbers in real-world IoT applications. RNG algorithms actively used in IoT applications, basic working principles, usage areas, and hardware features are summarized. Finally, the problem that arises in generating random numbers in system-on-chip (SoC) systems, one of the proposed classification components, are summarized, and some precautions that can be taken are expressed.

The Metropolis algorithm is widely used in Monte Carlo (MC) simulations in diverse areas of science and technology, especially for problems formulated in terms of lattice models. A common situation is the necessity to perform long sequential processing, e.g., when looking for equilibrium states of distinct physical systems. Hence, even marginal increases in the efficiency of the algorithm's individual steps can lead to significant reductions in absolute execution runtimes. Usual speedup procedures include hardware updates, parallelization (when possible) and sampling methods. Here we follow a different direction in trying to decrease the full execution times of MC approaches: algorithmic optimization. We show that the algorithms can be improved by implementing relatively few and simple changes in their organization and structure. First, we discuss some refinements for the pseudo-random number generator, addressing the broadly employed Mersenne-Twister algorithm (MT19937-64). Second, we develop a protocol to precalculate the Boltzmann factor, thereby avoiding the high cost of repeated calls to this exponential function (indeed, a very recurrent step in the standard Metropolis method). To benchmark our proposals we choose the Ising model, since it is one of the best known and most extensively studied problems in statistical physics. We consider the mentioned optimizations and different computational elements (like compilers and Hamiltonian variables), testing the efficiency to obtain the system solutions.
Our results suggest that the present set of improvement schemes (namely: decreasing the processing time both to generate a random number and to implement the one-flip Metropolis step; systematically enforcing optimization for the maximum number of algorithm structures accessing random numbers in a code; and considerably reducing the number of required computations of the MC probabilistic actualization term) might constitute a relevant addition to the existing collection of expediting techniques in MC computational routines.
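The Boltzmann-factor precalculation described above works because, for the 2D Ising model with J = 1, a single-spin flip can only change the energy by dE in {−8, −4, 0, 4, 8}, so the exponential needs to be evaluated just twice per temperature. A minimal sketch of one Metropolis sweep (an illustrative reconstruction, not the paper's code):

```python
import math
import random

def metropolis_sweep(spins, L, beta, rng=random.random):
    """One Metropolis sweep of the 2D Ising model (J = 1, periodic
    boundaries). Boltzmann factors exp(-beta*dE) are precomputed for
    the only positive flip energies, dE = 4 and dE = 8, instead of
    calling exp() on every attempted flip."""
    boltz = {dE: math.exp(-beta * dE) for dE in (4, 8)}
    for i in range(L):
        for j in range(L):
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2 * spins[i][j] * nb      # energy cost of flipping (i, j)
            if dE <= 0 or rng() < boltz[dE]:
                spins[i][j] = -spins[i][j]
    return spins
```

At low temperature (large beta) a fully aligned lattice stays aligned, since every flip costs dE = 8 and exp(−8β) is negligible.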

In this chapter, we review several of the approaches for generating pseudorandom numbers (PRNs) on the unit interval. These are numbers that exhibit many of the properties of actual random numbers but are generated using deterministic algorithms. Also discussed are many of the desired features of PRN generators, such as uniformity, portability, large periods, and efficiency. In particular, we consider linear and nonlinear congruential generators, linear feedback shift register generators, and generators based on cellular automata. Some specific PRN generators such as Park and Miller's “minimal standard congruential generator”, the Wichmann–Hill generator, the L'Ecuyer generator, the Tausworthe bit-level generator, and the Mersenne Twister generator are presented. In addition, some of the many tests that prove useful in evaluating PRN generators are considered. A brief summary of PRN generator development from 1991 to 2020 is presented in order to stress the intense interest in and diversity among PRN generators. This chapter is provided for completeness and in order to stress the necessity to use “good” generators but it is not essential for users to understand all of the contents in order to perform useful MC calculations.

In recent years, the Green Internet of Things (G-IoT) has gained a lot of attention for developing energy-efficient communication systems. It consists of electronic devices integrated with numerous tightly constrained sensors for observing the real world and providing communication services to end-users. However, optimal data collection and its management among heterogeneous G-IoT objects is one of the main challenges. Many researchers are still proposing different solutions to cope with such problems and offering IoT-cloud paradigms for processing, storage, and scalability services. However, the data of smart cities is forwarded among connected users using open-source IoT platforms, and sensitive information may be compromised. Therefore, this research aims to propose a model of security measures using the Green Internet of Things with Cloud Integrated Data Management (M-SMDM) for Smart Cities. Firstly, it forms long-run, energy-efficient connectivity using self-balancing trees and distributes load factors uniformly in green communication systems. Secondly, it addresses the secret key distribution problem between peer nodes and attains trust for both partial and direct communication. In the end, it secures the transmission system from mobile gateways to cloud infrastructure against threats with improved data latency. The security analysis of the proposed M-SMDM model is carried out along with simulation-based experiments. The attained results disclose the importance of the proposed model in terms of network parameters compared to existing work.

Most random numbers used in computer programs are pseudorandom, which means they are generated in a predictable fashion using a mathematical formula. This is acceptable for many purposes, sometimes even desirable. In this paper we take a look at a few popular generators producing uniformly distributed pseudorandom integers. We then use such a generator to implement a generator producing numbers from the interval ]0, 1[, and, on its basis, generators of numbers from the Bernoulli, binomial, Poisson, exponential, and normal distributions.
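Deriving other distributions from a uniform generator, as the abstract describes, is commonly done by inverse-transform sampling. A sketch for two of the named distributions (the paper's own constructions may differ):

```python
import math
import random

def exponential(lam, u=None):
    """Inverse-transform sample from Exp(lam): solve F(x) = u for x,
    where F(x) = 1 - exp(-lam * x)."""
    u = random.random() if u is None else u
    return -math.log(1.0 - u) / lam

def bernoulli(p, u=None):
    """Bernoulli(p) sample: 1 iff the uniform draw falls below p."""
    u = random.random() if u is None else u
    return 1 if u < p else 0
```

Passing an explicit u makes the mapping deterministic, which is convenient for testing; omitting it draws from Python's own uniform generator.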

The Arabic word for dice is not only the etymological origin of the French word for chance (hasard), but also directly the origin of probability calculus. The ideas behind gambling strategies highlight an intuition for the notion of mathematical expectation, a quantity which, in a game of chance, is expressed as the product of the winnings and their probability of occurrence, summed over all possible winnings. Chance is neither objective, because it is simulated and therefore imperfect, nor effective, in the sense of biologists. It is productive in the sense that it is an irreplaceable computation or optimization tool. It is even prospective, as it allows us to tackle problems which, either by their complexity or their high dimensionality, are inaccessible to any other method known to date.

Pseudo-random number generators can be based on chaotic maps; they are still deterministic functions, so it is possible to predict future results. On the other hand, we have true random number generators that comply with the desired characteristics; this is possible at the cost of high latency and slowness, owing to the use of physical processes. In this article, the use of network traffic in the generation of random sequences is tested. An equation is used to improve the statistical properties of the method. It is verified that network traffic tends to be more chaotic in spaces with a larger number of users. Results show that the method generates very different sequences, but with unequal bit generation. We present a method for the generation of pseudo-random numbers based on network traffic, minimizing the repetition of generated sequences.

Study of diffraction gratings through holographic cells, which correspond to micro circular zones, encoded with amplitude sinusoidal gratings. The random distribution of cells on the surface of hologram and its orientation of gratings per cell, produce in the diffracted orders a random distribution. We made a study of the behavior of the random modulation of diffracted orders, as a function of the orientation of code grating per cell.

Every Monte Carlo experiment relies on the availability of a procedure that supplies sequences of numbers from which arbitrarily selected nonoverlapping subsequences appear to behave like statistically independent sequences, and where the variation in an arbitrarily chosen subsequence of length k (≥ 1) resembles that of a sample drawn from the uniform distribution on the k-dimensional unit hypercube \({\mathcal{I}^k}\). The words “appear to behave” and “resemble” alert the reader to yet another potential source of error that arises in Monte Carlo sampling. In practice, many procedures exist for generating these sequences. In addition to this error of approximation, the relative desirability of each depends on its computing time, on its ease of use, and on its portability. By portability, we mean the ease of implementing a procedure or algorithm on a variety of computers, each with its own hardware peculiarities.

Chapter 8 reveals that every algorithm that generates a sequence of i.i.d. random samples from a probability distribution as output requires a sequence of i.i.d. random samples from u(0, 1) as input. To meet this need, every discrete-event simulation programming language provides a pseudorandom number generator that produces a sequence of nonnegative integers Z_1, Z_2, ... with an integer upper bound M > Z_i for all i, and then uses U_1, U_2, ..., where U_i := Z_i / M, as an approximation to an i.i.d. sequence from u(0, 1).
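The normalization U_i := Z_i / M described above is a one-line mapping from integer PRNG outputs to the unit interval; shown here with the 2^31 − 1 modulus used elsewhere on this page (an illustrative choice, not mandated by the chapter):

```python
M = 2**31 - 1  # upper bound on the integer outputs Z_i (assumed Mersenne modulus)

def to_unit_interval(z_values, m=M):
    """Map integer PRNG outputs Z_i (0 < Z_i < m) to U_i := Z_i / m,
    approximating i.i.d. draws from u(0, 1)."""
    return [z / m for z in z_values]
```

Because 0 < Z_i < M for a multiplicative generator, every U_i lies strictly inside (0, 1), which avoids the endpoints that many downstream transforms (e.g. logarithms) cannot accept.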

RFID is one of the most promising identification schemes in the field of ubiquitous computing. Its non-line-of-sight capability makes RFID systems more prominent than alternative systems. RFID systems use a wireless medium, so they face associated security threats from malicious adversaries. To make such systems more reliable and secure, numerous researchers have proposed lightweight authentication protocols that incorporate pseudorandom number generators in their designs. Pseudorandom numbers not only introduce randomness into the system but also enhance the diffusion properties of the protocols. In this paper, we propose a novel lightweight random number generator, “RL-PRNG”, for low-cost pervasive systems. A hardware performance comparison with LFSR, LCG, AKARI-1, and AKARI-2 shows that the proposed generator is much more lightweight. The statistical properties of the generator have been evaluated with the well-established NIST, DIEHARD, and ENT test suites.
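Among the hardware baselines mentioned above, the LFSR is the simplest to sketch. The fragment below is an illustrative 16-bit Galois LFSR using the standard tap mask 0xB400 (taps at bits 16, 14, 13, 11), which is known to give the maximal period of 2^16 − 1; it is a generic textbook example, not the RL-PRNG design:

```python
# Minimal 16-bit Galois LFSR; the state must never be zero.
def lfsr16(state):
    """Advance the register one step: shift right, XOR in taps if the LSB was 1."""
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= 0xB400  # maximal-period tap mask for 16 bits
    return state

# A maximal LFSR visits every nonzero state, so it returns to the
# seed after exactly 2**16 - 1 steps.
seed = 0xACE1
s, period = seed, 0
while True:
    s = lfsr16(s)
    period += 1
    if s == seed:
        break
```

LFSRs are attractive in low-cost hardware precisely because each step is one shift and one conditional XOR, but on their own they fail many of the statistical tests cited above.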

We present a theoretical study of amplitude diffraction gratings using computer simulation, which consists of randomly sampling points of the grating image to determine which points to plot and which to remove, simulating amplitude erosion of the grating. We show the behavior of the diffraction patterns and the noise induced by limiting the number of points representing the image of the eroded gratings and their symmetry.

Use of empirical studies based on computer-generated random numbers has become a common practice in the development of statistical methods, particularly when the analytical study of a statistical procedure becomes intractable. The quality of any simulation study depends heavily on the quality of the random number generators. Classical uniform random number generators have some major defects, such as a relatively short period length and a lack of higher-dimensional uniformity. Two recent uniform pseudo-random number generators (MRG and MCG) are reviewed. They are compared with the classical generator LCG. It is shown that MRG/MCG are much better random number generators than the popular LCG. Special forms of MRG/MCG are introduced and recommended as random number generators for the new century. A step-by-step procedure for constructing such random number generators is also provided.
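The MRG idea reviewed above generalizes the single-term LCG recursion to several lagged terms, x_n = (a_1·x_{n−1} + … + a_k·x_{n−k}) mod m, which can attain period m^k − 1 rather than m − 1. A toy order-2 sketch follows; the coefficients here are illustrative placeholders, not the recommended parameters from the paper:

```python
# Order-k multiple recursive generator (MRG): each output mixes the
# previous k states, x_n = (a1*x_{n-1} + ... + ak*x_{n-k}) mod m.
def mrg(m, coeffs, state, n):
    """Return n values of the recursion; coeffs[0] multiplies the most recent lag."""
    xs = list(state)  # seed: the k initial states x_{-k+1}, ..., x_0
    k = len(coeffs)
    for _ in range(n):
        x = sum(a * v for a, v in zip(coeffs, reversed(xs[-k:]))) % m
        xs.append(x)
    return xs[len(state):]

# Illustrative coefficients only -- a real MRG uses spectrally tested ones.
out = mrg(m=2**31 - 1, coeffs=[1071064, 2113664], state=[12345, 67890], n=5)
```

With k = 2 and m = 2^31 − 1 the maximal period is already m² − 1 ≈ 4.6 × 10^18, which illustrates why MRGs address the short-period defect of the classical LCG.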

This paper focuses on devising a general and efficient way of generating random numbers for the multiple recursive generator with both an unrestricted multiplier and a non-Mersenne prime modulus. We propose a new algorithm that embeds the technique of approximate factoring into the simulated division method. The proposed algorithm improves the decomposition method in terms of both suitability for various computer word sizes and efficiency characteristics, such as the number of arithmetic operations required and the computational time. Empirical simulations are conducted to compare and evaluate the computational time of this algorithm against the decomposition method on various computers.

Essentials of Monte Carlo Simulation focuses on the fundamentals of Monte Carlo methods using basic computer simulation techniques. The theories presented in this text deal with systems that are too complex to solve analytically. As a result, readers are given a system of interest and construct, using computer code and algorithmic models, an emulation of how the system works internally. After the models are run several times, in a random-sampling fashion, the data for each output variable of interest are analyzed by ordinary statistical methods. This book features 11 comprehensive chapters and discusses such key topics as random number generators, multivariate random variates, and continuous random variates. Over 100 numerical examples are presented as part of the appendix to illustrate useful real-world applications. The text also offers an easy-to-read presentation with minimal use of difficult mathematical concepts. Very little has been published in the area of computer Monte Carlo simulation methods, and this book will appeal to students and researchers in the fields of mathematics and statistics.

Sensors require frequent over-the-air reprogramming to patch software errors, replace code, change sensor configuration, etc. Given their limited computational capability, one of the few workable techniques to secure code update in legacy sensors would be to execute Proofs of Secure Erasure (PoSE) which ensure that the sensor’s memory is purged before sending the updated code. By doing so, the updated code can be loaded onto the sensor with the assurance that no other malicious code is being stored. Although current PoSE proposals rely on relatively simple cryptographic constructs, they still result in considerable energy and time overhead in existing legacy sensors.
In this paper, we propose a secure code update protocol which considerably reduces the overhead of existing proposals. Our proposal naturally combines PoSE with All or Nothing Transforms (AONT); we analyze the security of our scheme and evaluate its performance by means of implementation on MicaZ motes. Our prototype implementation only consumes 371 bytes of RAM in TinyOS2, and improves the time and energy overhead of existing proposals based on PoSE by almost 75 %.

The integrity of computer simulation models is only as good as the reliability of the random number generator that produces the stream of random numbers one after the other. The chapter describes the difficult task of developing an algorithm to generate random numbers that are statistically valid and have a large cycle length. The linear congruential method is currently the common way to generate random numbers on a computer. The parameters of this method include the multiplier and the seed. Only a few multipliers are statistically recommended, and two popular ones used on 32-bit word-length computers are presented. The seed allows the analyst to alter the sequence of random numbers with each run or, when necessary, to reuse the same sequence of random numbers from one run to another.
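The roles of the two parameters can be shown in a few lines. This sketch assumes the widely cited multiplier 16807 with modulus 2^31 − 1 (630360016 is another multiplier often recommended for 32-bit machines); whether these are the two the chapter presents is our assumption:

```python
# Linear congruential generator: the multiplier fixes the cycle,
# the seed fixes where in the cycle a run starts.
def lcg(seed, a=16807, m=2**31 - 1, n=3):
    out, z = [], seed
    for _ in range(n):
        z = (a * z) % m
        out.append(z)
    return out

run1 = lcg(seed=42)   # reusing a seed reproduces the run exactly
run2 = lcg(seed=42)
run3 = lcg(seed=43)   # changing the seed yields a different stream
```

Reproducibility via a fixed seed is what makes simulation experiments repeatable, while varying the seed gives independent-looking replications.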

Key-to-address conversion algorithms which have been used for a large, direct access file are compared with respect to record density and access time. Cumulative distribution functions are plotted to demonstrate the distribution of addresses generated by each method. The long-standing practice of counting address collisions is shown to be less valuable in judging algorithm effectiveness than considering the maximum number of contiguously occupied file locations.

Considerable attention has recently been directed at developing simpler and faster algorithms for generating gamma random variates (with general, not necessarily integral, shape parameter α) on digital computers. This paper surveys the current state of the art, which includes fifteen gamma algorithms applicable for α≥1 and six that are applicable for α<1. These algorithms are compared according to the criteria of speed and simplicity. General random variate generation techniques are explained with reference to these gamma algorithms. Computer simulation experiments on DEC and CDC computers are reported. Guidelines for some specific applications are given.
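The rejection-based family surveyed above can be illustrated with a compact gamma sampler. The squeeze method below is due to Marsaglia and Tsang, postdates this survey, and is shown only as a representative example of the genre for shape α ≥ 1:

```python
import math
import random

# Marsaglia-Tsang squeeze method for Gamma(alpha, 1), alpha >= 1:
# transform a normal draw, then accept with a cheap squeeze test
# before falling back to the exact logarithmic test.
def gamma_variate(alpha, rng=random.Random(0)):
    d = alpha - 1.0 / 3.0
    c = 1.0 / math.sqrt(9.0 * d)
    while True:
        x = rng.gauss(0.0, 1.0)
        v = (1.0 + c * x) ** 3
        if v <= 0.0:
            continue                      # outside the support; reject
        u = rng.random()
        if u < 1.0 - 0.0331 * x**4:       # fast squeeze: accept without logs
            return d * v
        if math.log(u) < 0.5 * x * x + d * (1.0 - v + math.log(v)):
            return d * v                  # exact acceptance test

samples = [gamma_variate(2.5) for _ in range(2000)]
```

The squeeze step is the kind of speed/simplicity trade-off the survey's comparison criteria are about: it avoids the expensive logarithms on the vast majority of draws.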

Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendixes are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle problem" provides a unifying theme, as it is repeatedly used to illustrate many features of Monte Carlo methods. This book provides the basic detail necessary to learn how to apply Monte Carlo methods and thus should be useful as a textbook for undergraduate or graduate courses in numerical methods. It is written so that interested readers with only an understanding of calculus and differential equations can learn Monte Carlo on their own. Coverage of topics such as variance reduction, pseudo-random number generation, Markov chain Monte Carlo, inverse Monte Carlo, and linear operator equations will make the book useful even to experienced Monte Carlo practitioners. It provides a concise treatment of generic Monte Carlo methods, with proofs for each chapter; the appendixes cover certain mathematical functions, including Bose–Einstein functions, Fermi–Dirac functions, and Watson functions.
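The book's unifying example is easy to reproduce. A needle of length L dropped on parallel lines spaced D ≥ L apart crosses a line with probability 2L/(πD), so the observed crossing frequency yields an estimate of π; this is a minimal sketch of that classic experiment:

```python
import math
import random

# Buffon's needle: drop n needles, count line crossings, invert
# P(cross) = 2L / (pi * D) to estimate pi.
def estimate_pi(n, L=1.0, D=1.0, rng=random.Random(1)):
    hits = 0
    for _ in range(n):
        y = rng.uniform(0.0, D / 2.0)          # needle center to nearest line
        theta = rng.uniform(0.0, math.pi / 2)  # needle angle to the lines
        if y <= (L / 2.0) * math.sin(theta):
            hits += 1
    return 2.0 * L * n / (D * hits)

pi_hat = estimate_pi(200_000)
```

The slow O(1/√n) convergence of this estimate is exactly what motivates the variance-reduction techniques the book goes on to cover.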

One of the general methods for implementing a multiple recursive generator is the recursive reduction method (RRM). This paper provides an analysis of its number of operations. We also propose a new method that modifies the RRM by considering the stop conditions and the order of multiplier and multiplicand. Empirical comparisons reveal that the new algorithm is more efficient, reducing the number of iterations by 13.1416%–20.5118% depending on the moduli and multipliers being used.

Linear congruential random number generators must have large moduli to attain maximum periods, but this creates integer overflow during calculations. Several methods have been suggested to remedy this problem while obtaining portability. Approximate factoring is the most common method in portable implementations, but there is no systematic technique for finding appropriate multipliers, and an exhaustive search is prohibitively expensive. We offer a very efficient method for finding all portable multipliers of any given modulus value. Letting M = AB + C, the multiplier A gives a portable result if B − C is positive. If it is negative, the portable multiplier can be defined as A = ⌊M/B⌋. We also suggest a method for discovering the most fertile search region for spectral top-quality multipliers in a two-dimensional space. The method is extremely promising for best-generator searches in very large moduli: 64-bit sizes and above. As an application to an important and challenging problem, we examined the prime modulus 2^63 − 25, suitable for a 64-bit register size, and determined 12 high-quality portable generators successfully passing stringent spectral and empirical tests.
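Approximate factoring, the technique this paper builds on, is usually credited to Schrage: writing m = a·q + r with q = m // a and r = m % a, the product a·z mod m can be computed without any intermediate exceeding roughly m, provided r < q. A sketch (in Schrage's notation rather than the paper's M = AB + C):

```python
# Schrage's approximate factoring: compute (a * z) % m without the
# full-width product a * z, valid whenever r = m % a is less than q = m // a.
def schrage_mul(a, z, m):
    q, r = divmod(m, a)
    t = a * (z % q) - r * (z // q)   # both terms stay below m in magnitude
    return t if t >= 0 else t + m

# Sanity check against the Park-Miller generator: iterating
# z -> 16807*z mod (2**31 - 1) from z = 1 is a standard validation run.
M, A = 2**31 - 1, 16807
z = 1
for _ in range(10_000):
    z = schrage_mul(A, z, M)
```

Python integers never overflow, so the trick is redundant here; the point is that every intermediate fits in a machine word on a 32-bit (or, for 2^63 − 25, a 64-bit) computer, which is what makes the generator portable.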

This paper considers a Gaussian first-order autoregressive process with unknown intercept where the initial value of the variable is a known constant. Monte Carlo simulations are used to investigate the sampling distribution of the t statistic for the autoregressive parameter when its value is in the neighborhood of unity. A small sigma asymptotic result is exploited in the construction of exact non-similar tests. The powers of non-similar tests of the random walk and other hypotheses are estimated for sample sizes typical in economic applications.

The present method generates machine-independent uniform random sequences of real numbers in the interval (0., 1.), excluding 1. It uses a generalization of multiplicative linear congruential generators working with prime moduli, whose values have been fixed according to the positive-integer arithmetic storage available from the system, and one of their corresponding primitive elements as multipliers, each completing its full cycle independently. The periodicity can be considered as infinite: O(10^92) for a 16-bit machine and O(10^174) for a 32-bit machine with their respective integer arithmetic; the periodicity can be adjusted if required by the user in the normal version, or statistically reaches the maximum in the enhanced 'stagger' version. An implementation of the method is available in the form of structured Fortran 77 functions and gives better results in terms of speed and periodicity than the other transportable functions compared, with good quality of randomness.

Two simple and easily implemented algorithms are presented for obtaining random variables from the exponential power distribution with parameter α. Both algorithms are based on a generalization of Von Neumann's rejection technique. In the first algorithm, the first-stage sampling is from the double-exponential distribution, while the second algorithm uses the normal distribution. These two algorithms are applicable for all values of α, α ≥ 1 and α ≥ 2, respectively.
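The first algorithm's idea, rejection from a double-exponential first stage, can be sketched from the generic von Neumann recipe. The envelope constant below is derived here rather than taken from the paper, so treat this as an illustration for the density f(x) ∝ exp(−|x|^α), α ≥ 1:

```python
import math
import random

# Rejection sampling for the exponential power density f(x) ~ exp(-|x|**alpha),
# alpha >= 1, using an exponential (double-exponential after the sign flip)
# first stage. Accept with probability exp(x - x**alpha - c), where
# c = max_{t>=0} (t - t**alpha) makes the acceptance probability <= 1.
def exp_power_variate(alpha, rng):
    if alpha == 1.0:
        c = 0.0
    else:
        t = (1.0 / alpha) ** (1.0 / (alpha - 1.0))  # maximizer of t - t**alpha
        c = t - t ** alpha
    while True:
        x = rng.expovariate(1.0)                    # first-stage |x| draw
        if rng.random() < math.exp(x - x ** alpha - c):
            return x if rng.random() < 0.5 else -x  # random sign by symmetry

rng = random.Random(7)
draws = [exp_power_variate(2.0, rng) for _ in range(4000)]
```

For α = 2 the target is a normal density with variance 1/2, which gives a quick check that the sampler behaves as intended.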

For three symmetric distributions and six sample sizes, this article presents estimates of the small-sample variance of Pitman's location-invariant and location-scale-invariant estimators of location. It then compares these two estimators and investigates the closeness of the Cramér-Rao bound when estimators are required to be invariant. Expressions of the form c/(n − d) prove quite effective in fitting variances of Pitman estimators, and d can be interpreted as the amount of Fisher information “lost.” In terms of relative efficiency, maximum-likelihood estimators and linear combinations of order statistics offer computationally attractive alternatives to the Pitman estimators.

The effects of different monetary rules on the rates of inflation and unemployment are studied by stochastic simulation of the Federal Reserve Board-MIT-Pennsylvania (FMP) Econometric Model and the St. Louis “Monetarist” Model. A number of heuristic and more formal statistical methods are used in evaluating the results. It is shown that simple feedback control rules—involving proportional and derivative controls—reduce the variability of the target variables relative to the rule in which the money supply is increased at a constant rate. The improvement is considerably greater in the St. Louis model than in the FMP model.

The order of convergence of a difference method depends on the differentiability properties of the solution of the differential equation. For parabolic initial-boundary value problems, these properties are determined by the initial function. The behavior of the order of convergence as a function of the initial function is investigated.

Pseudo-random number generators of the power residue (sometimes called congruential or multiplicative) type are discussed and results of statistical tests performed on specific examples of this type are presented. Tests were patterned after the methods of MacLaren and Marsaglia (M&M).
The main result presented is the discovery of several power residue generators which performed well in these tests. This is important because, of all the generators using standard methods (including power residue) that were tested by M&M, none gave satisfactory results.
The overall results here provide further evidence for their conclusion that the types of tests usually encountered in the literature do not provide an adequate index of the behavior of n-tuples of consecutively generated numbers. In any Monte Carlo or simulation problem where n supposedly independent random numbers are required at each step, this behavior is likely to be important.
Finally, since the tests presented here differ in certain details from those of M&M, some of their generators were retested as a check. A cross-check shows that results are compatible; in particular, if a generator failed one of their tests badly, it also failed the present author's corresponding test badly.

Random number generation on the BRL high-speed computing machines, by M

- Lehmer, D. H.