Article

Humans cannot consciously generate random numbers sequences: Polemic study


Abstract

It is widely believed that randomness exists in Nature. In fact, such an assumption underlies many scientific theories and is embedded in the foundations of quantum mechanics. Assuming that this hypothesis is valid, one can use natural phenomena, like radioactive decay, to generate random numbers. Today, computers are capable of generating so-called pseudorandom numbers. Such series of numbers are only seemingly random (bias in the quality of their randomness can be observed). The question of whether people can produce random numbers has been investigated by many scientists in recent years. The paper "Humans can consciously generate random numbers sequences..." published recently in Medical Hypotheses made claims that were in many ways contrary to the state of the art; it also stated far-reaching hypotheses. We therefore decided to repeat the reported experiments, taking special care to follow proper laboratory procedures. Here, we present the results and discuss possible implications in computer and other sciences.
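The kind of bias mentioned in the abstract, in pseudorandom or human-generated digit sequences, can be quantified with a standard chi-square uniformity test. A minimal Python sketch (the function name, test data, and cutoff values are illustrative, not taken from the paper):

```python
import random
from collections import Counter

def chi_square_digit_test(digits):
    """Chi-square statistic for uniformity of digits 0-9.

    With 10 categories there are 9 degrees of freedom; a statistic far
    above ~16.9 (the 5% critical value) suggests non-uniformity.
    """
    n = len(digits)
    expected = n / 10
    counts = Counter(digits)
    return sum((counts.get(d, 0) - expected) ** 2 / expected for d in range(10))

# A pseudorandom sequence should usually yield a small statistic ...
rng = random.Random(42)
prng_stat = chi_square_digit_test([rng.randrange(10) for _ in range(10000)])

# ... while a heavily biased sequence (a caricature of human output,
# over-producing one favourite digit) yields a huge one.
biased_stat = chi_square_digit_test([7] * 3000 + [rng.randrange(10) for _ in range(7000)])
```

Note that passing such a test is only a necessary symptom of randomness, not proof of it; this is exactly why seemingly random pseudorandom sequences can still carry detectable bias.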

No full-text available

Request Full-text Paper PDF

To read the full-text of this research,
you can request a copy directly from the authors.

... Persaud's protocol, of asking people to call out numbers, is well-known, and has been tried many times before in various versions [3], including the test by Figurska et al. [5] who specifically set out to replicate Persaud's experiment, and found that the numbers generated by this process were nonrandom in several respects. As [5] comment, this attribute can be used to distinguish between man and machine, but only because humans are, in this regard, substantially inferior to machines; it is therefore rather easy for a machine to mimic a man in its incompetence as an RNG. ...
... Hypothesis (1) is the view of most researchers in the field (see, for example, [5,6]). The reason that Persaud's subjects could do this unusually well may be related to their environment: a waiting room where they had been waiting for some time for (unrelated) tasks, in which the boredom of waiting focussed their minds on a task that otherwise would have commanded little mental attention. ...
Article
A previous paper suggested that humans can generate genuinely random numbers. I tested this hypothesis by repeating the experiment with a larger number of highly numerate subjects, asking them to call out a sequence of digits selected from 0 through 9. The resulting sequences were substantially non-random, with an excess of sequential pairs of numbers and a deficit of repeats of the same number, in line with previous literature. However, the previous literature suggests that humans generate random numbers with substantial conscious effort, and distractions which reduce that effort reduce the randomness of the numbers. I reduced my subjects' concentration by asking them to call out in another language, and with alcohol; neither affected the randomness of their responses. This suggests that the ability to generate random numbers is a 'basic' function of the human mind, even if those numbers are not mathematically 'random'. I hypothesise that there is a 'creativity' mechanism that, while not truly random, provides novelty as part of the mind's defence against closed programming loops, and that testing for the effects seen here in people more or less familiar with numbers or with spontaneous creativity could identify more features of this process. It is possible that training to perform better at simple random generation tasks could help to increase creativity, through training people to reduce the conscious mind's suppression of the 'spontaneous', creative response to new questions.
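The two deviations this abstract reports, an excess of sequential pairs and a deficit of repeats, are straightforward to measure on a called-out digit sequence. A small sketch, under the assumption that digits wrap around (9 followed by 0 counts as ascending) and with made-up illustrative data:

```python
def repeat_and_sequential_rates(digits):
    """Fraction of adjacent pairs that are repeats (d, d) and fraction
    that are ascending neighbours (d, d+1 mod 10). For uniform random
    digits, each rate is expected to be 0.10."""
    pairs = list(zip(digits, digits[1:]))
    repeats = sum(1 for a, b in pairs if a == b)
    ascending = sum(1 for a, b in pairs if b == (a + 1) % 10)
    return repeats / len(pairs), ascending / len(pairs)

# A caricature of human output: counting runs, almost no repeats.
human_like = [1, 2, 3, 4, 7, 8, 9, 0, 2, 3, 4, 5, 6, 9, 0, 1]
rep, seq = repeat_and_sequential_rates(human_like)
```

Here `rep` lands far below 0.10 and `seq` far above it, the signature described in the abstract and in Figurska et al.'s replication.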
... Following this trend, we try to solve the problem of reliable random number generation in a crowd-like way. Many studies proved that human beings do not perform well at this task, as for instance, they tend to alternate odd numbers with even numbers, big numbers with small numbers and so on [17,47,53]. Whereas a single human being shows a high degree of determinism, they are part of social systems, which turn to be chaotic [4,19,22,45]. ...
... radioactive decays to generate random data at a rate of 100 bytes per second. Intel DRNG, 17 Araneus Alea 18 and Protego 19 generate random data at a high-rate from the thermal noise within the hardware. Intel DRNG exploits the silicon of the CPU, while Araneus Alea and Protego use external hardware to connect through USB or serial port. ...
Article
Full-text available
Random data generators play an important role in computer science and engineering since they aim at simulating reality in IT systems. Software random data generators cannot be reliable enough for critical applications due to their intrinsic determinism, while hardware random data generators are difficult to integrate within applications and are not always affordable in all circumstances. We present an approach that makes use of entropic data sources to compute the random data generation task. In particular, our approach exploits the chaotic phenomena happening in the crowd. We extract these phenomena from social networks since they reflect the behavior of the crowd. We have implemented the approach in a database system, RandomDB, to show its efficiency and its flexibility over the competitor approaches. We used RandomDB by taking data from Twitter, Facebook and Flickr. The experiments show that these social networks are sources to generate reliable randomness and RandomDB a system that can be used for the task. Hopefully, our experience will drive the development of a series of applications that reuse the same data in several and different scenarios.
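The core idea above, condensing chaotic crowd behaviour into seed material, is commonly implemented by hashing the collected data. A minimal sketch of that step (the function name is illustrative and not RandomDB's actual API; hashing cannot add entropy, so the output is only as unpredictable as the posts fed in, which is exactly why the source's reliability matters):

```python
import hashlib

def bits_from_posts(posts, n_bytes=16):
    """Condense crowd-generated text (e.g. social-network posts) into
    seed material by hashing it all together. The hash only mixes the
    entropy already present in the posts; it cannot create any."""
    h = hashlib.sha256()
    for post in posts:
        h.update(post.encode("utf-8"))
    return h.digest()[:n_bytes]

seed = bits_from_posts(["first post", "second post", "third post"])
```

In a real system such a seed would feed a cryptographic PRG rather than be used directly, and the posts would be drawn live from sources like Twitter, Facebook or Flickr as in the paper.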
... It has been generally accepted that sequences and numbers generated by humans are far from being truly random. Common biases of randomness recognition such as the "Hot Hand" (tendency to believe that a winning streak will usually continue), the inverse "Gambler's Fallacy" (tendency to believe that after a losing streak, the next attempt is more likely to be a success) and the related "Flip Bias" (tendency to believe 0 is likely to be followed by 1 and vice versa) have been thoroughly studied (and shown, statistically, to be fallacies) [21][22][23]. "Flip Bias" was shown to exist in randomness generation as well [24]. ...
... The result extends to digit generation as well. Figurska et al. showed, in a 2008 study, that humans tend to choose successive identical digits with probability 7.58% instead of 10% [25]. Therefore it was not surprising that humans assessed human-generated sequences (created by other humans asked to generate sequences that they would expect from multiple coin tosses) as more random than truly random sequences actually generated by coin tosses [26]. ...
Conference Paper
Randomness is a necessary ingredient in various computational tasks and especially in Cryptography, yet many existing mechanisms for obtaining randomness suffer from numerous problems. We suggest utilizing the behavior of humans while playing competitive games as an entropy source, in order to enhance the quality of the randomness in the system. This idea has two motivations: (i) results in experimental psychology indicate that humans are able to behave quite randomly when engaged in competitive games in which a mixed strategy is optimal, and (ii) people have an affection for games, and this leads to longer play yielding more entropy overall. While the resulting strings are not perfectly random, we show how to integrate such a game into a robust pseudo-random generator that enjoys backward and forward security. We construct a game suitable for randomness extraction, and test users' playing patterns. The results show that in less than two minutes a human can generate 128 bits that are 2^-64-close to random, even on a limited computer such as a PDA that might have no other entropy source. As proof of concept, we supply a complete working software for a robust PRG. It generates random sequences based solely on human game play, and thus does not depend on the Operating System or any external factor.
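The distillation step implied here, turning imperfectly random human inputs into better bits before feeding a PRG, can be illustrated with the simplest classical extractor, the von Neumann debiaser (the paper's actual construction is a robust PRG, not this; the input bits below are a made-up example):

```python
def von_neumann_extract(bits):
    """Von Neumann debiasing: read input bits in pairs; emit the first
    bit for an unequal pair (0,1)->0, (1,0)->1, and drop equal pairs.
    The output is unbiased whenever the input bits are independent with
    a fixed (unknown) bias, at the cost of discarding many bits."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

biased = [1, 1, 0, 1, 1, 0, 0, 0, 1, 0]  # e.g. bits derived from keypresses
extracted = von_neumann_extract(biased)
```

Human game play violates the independence assumption, which is why practical designs like the one above hash or extract over large batches and then run a cryptographic PRG rather than trusting pairwise debiasing alone.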
... Similarly, they would respond "agree" or "strongly agree" to a negated regular item ("I am not tall") as well as its negated polar opposite (negated reversed; "I am not short"). The notion of random response patterns has been widely questioned because there are authors who have pointed out that individuals are not naturally capable of generating random numbers (Figurska et al., 2008; Neuringer, 1986), and careless responders tend to exhibit different levels of systematicity, even if they are directly instructed to respond randomly (see Huang et al., 2012). Others describe a random pattern as a tendency to use all response categories without paying attention to the content of the items (DeSimone & Harms, 2018). ...
Article
Full-text available
This article explores the analysis and interpretation of wording effects associated with using direct and reverse items in psychological assessment. Previous research using bifactor models has suggested a substantive nature of this effect. The present study uses mixture modeling to systematically examine an alternative hypothesis and surpass recognized limitations in the bifactor modeling approach. In preliminary supplemental Studies S1 and S2, we examined the presence of participants who exhibited wording effects and evaluated their impact on the dimensionality of Rosenberg's Self-Esteem and the Revised Life Orientation Test, confirming the ubiquity of wording effects in scales containing direct and reverse items. Then, after analyzing the data for both scales (n = 5,953), we found that, despite a significant association between wording factors (Study 1), a low proportion of participants simultaneously exhibited asymmetric responses in both scales (Study 2). Similarly, despite finding both longitudinal invariance and temporal stability of this effect in three waves (n = 3,712, Study 3), a small proportion of participants was identified with asymmetric responses over time (Study 4), reflected in lower transition parameters compared to the other patterns of profiles examined. In both cases, we illustrate how bifactor models capitalize on the responses of individuals who do not even exhibit wording effects, yielding spurious correlations suggesting a substantive nature of the wording effect. These findings support the notion of an ephemeral nature underlying wording effects. The discussion focuses on alternative hypotheses to understand these findings and emphasizes the utility of including reverse items in psychological assessment. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
... Redundancy in this task correlates with neuropsychological problems that attenuate these aforementioned executive functions. Conversely, the creation of a good random number arrangement is indicative of good mental health [21,22]. ...
Article
Full-text available
Research in telemedicine has made it possible to capture regulatory measures to find biomarkers of human behavior during smartphone use called digital phenotypes. The identification and evaluation of these biomarkers for health diagnosis provide gains for an area related to telemedicine, precision medicine. It was developed a mobile application called Neuropesquisa, which has features to find these biomarkers while users complete psychological scales for mental health. The aim was to correlate mindfulness, anxiety and reaction time, and track possible digital phenotypes of users. It was carried out an observational study, with a correlational, cross-sectional and remote design with 364 adults, through Neuropesquisa. This study found positive and significant correlations between mindfulness and reaction time, and negative and positive correlations between anxiety and reaction time. It was concluded that Neuropesquisa was able to identify digital phenotypes among the considered constructs, of relevant importance for precision medicine and mental health.
... Mimicking a random process is a mentally difficult task, requiring sustained and focused attention [16]. Humans cannot consciously generate random number sequences [21]. ...
Article
Full-text available
A psychology experiment examining decision-making on a continuum of subjectively equivalent alternatives (directions) revealed that subjects follow a common pattern, giving preference to just a few directions over all others. When restricted experimental settings made the common pattern unfeasible, subjects demonstrated no common choice preferences. In the latter case, the observed distribution of choices made by a group of subjects was close to normal. We conclude that the abundance of subjectively equivalent alternatives may reduce the individual variability of choices, and vice versa. Choice overload paradoxically results in behavior patterning and eventually facilitates decision predictability, while restricting the range of available options fosters individual variability of choice, reflected in almost random behavior across the group.
... are incapable of producing true random sequences (Figurska et al., 2008). But the distinction between careless and random responses may point to the difference between intentional and nonintentional response behavior. ...
Article
Full-text available
Careless responding is a bias in survey responses that disregards the actual item content, constituting a threat to the factor structure, reliability, and validity of psychological measurements. Different approaches have been proposed to detect aberrant responses such as probing questions that directly assess test-taking behavior (e.g., bogus items), auxiliary or paradata (e.g., response times), or data-driven statistical techniques (e.g., Mahalanobis distance). In the present study, gradient boosted trees, a state-of-the-art machine learning technique, are introduced to identify careless respondents. The performance of the approach was compared with established techniques previously described in the literature (e.g., statistical outlier methods, consistency analyses, and response pattern functions) using simulated data and empirical data from a web-based study, in which diligent versus careless response behavior was experimentally induced. In the simulation study, gradient boosting machines outperformed traditional detection mechanisms in flagging aberrant responses. However, this advantage did not transfer to the empirical study. In terms of precision, the results of both traditional and the novel detection mechanisms were unsatisfactory, although the latter incorporated response times as additional information. The comparison between the results of the simulation and the online study showed that responses in real-world settings seem to be much more erratic than can be expected from the simulation studies. We critically discuss the generalizability of currently available detection methods and provide an outlook on future research on the detection of aberrant response patterns in survey research.
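One of the traditional techniques this abstract compares against, Mahalanobis-distance screening, fits in a few lines. A sketch on simulated survey data (the threshold, dimensions, and planted outlier are illustrative choices, not the study's settings):

```python
import numpy as np

def mahalanobis_flags(X, threshold):
    """Flag rows of the response matrix X whose squared Mahalanobis
    distance from the sample mean exceeds `threshold`. Under
    multivariate normality the squared distances are approximately
    chi-square with X.shape[1] degrees of freedom, which guides the
    choice of threshold."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    inv = np.linalg.inv(cov)
    diff = X - mu
    d2 = np.einsum("ij,jk,ik->i", diff, inv, diff)  # row-wise x^T S^-1 x
    return d2 > threshold

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))  # 200 diligent responders, 5 items
X[0] = 10.0                    # one wildly aberrant response vector
flags = mahalanobis_flags(X, threshold=20.0)
```

As the abstract notes, such outlier methods flag only responses that are statistically extreme; careless responses that mimic plausible patterns slip through, which motivates the machine-learning comparison.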
... One practical digit test relies on the arguable hypothesis that human beings cannot generate random numbers naturally. Although some researchers, including Persaud [26] (but criticized by Figurska et al. [27]), deny the hypothesis, this assumption plays a practical role in detecting digit manipulation. Mosimann et al. [8], [28] show that people are often only careful when selecting the leftmost digits to fit an intended magnitude but pay less attention to the remaining digits, particularly the rightmost digits, causing the digits to lose their uniform distribution. ...
Article
Full-text available
This paper presents a method for detecting and restoring integer datasets that have been manipulated by operations involving nonintegral real-number multiplication and rounding. As we discuss in the paper, detecting and restoring such manipulated integer datasets is not straightforward, nor are there any known solutions. We introduce the manipulation process, which was motivated by an actual case of fraud, and survey several areas of literature dealing with the possibility that manipulation may have happened or might occur. From our mathematical analysis of the manipulation process, we can prove that the nonintegral real number (α) used in the multiplication exists not as a single real number but as an interval containing infinitely many real numbers, any of which could have been used to produce the same manipulation result. Based on these analytic findings, we provide an algorithm that can detect and restore manipulated integer datasets. To validate our algorithm, we applied it to 40,000 test datasets that were randomly generated using controllable parameters that matched the real fraud case. Our results indicated that the algorithm detected and perfectly restored all datasets for which the value of the nonintegral real number was at least 16 (α ≥ 16) and the number of data entries was at least 40 (n ≥ 40).
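The manipulation process and the paper's key observation, that the multiplier is recoverable only as an interval, can be sketched directly (the specific numbers below are illustrative and assume positive integers, not data from the paper's fraud case):

```python
def manipulate(xs, alpha):
    """The manipulation studied in the paper: multiply each integer by
    a nonintegral real alpha and round the result back to an integer."""
    return [round(alpha * x) for x in xs]

def alpha_interval(xs, ys):
    """Every alpha consistent with y = round(alpha * x) must satisfy
    (y - 0.5)/x <= alpha < (y + 0.5)/x for each pair (x > 0 assumed).
    The set of consistent alphas is therefore the intersection of these
    intervals: not a single real number but infinitely many."""
    lo = max((y - 0.5) / x for x, y in zip(xs, ys))
    hi = min((y + 0.5) / x for x, y in zip(xs, ys))
    return lo, hi

xs = [13, 27, 41, 58, 76]
ys = manipulate(xs, 16.37)
lo, hi = alpha_interval(xs, ys)
```

Any alpha inside `(lo, hi)` reproduces exactly the same manipulated dataset, which is the analytic fact the paper's detection-and-restoration algorithm builds on.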
... Sometimes this has been described as random responding (Beach, 1989;Berry et al., 1992). However, this is misleading because these responses can also follow some non-random pattern (e.g., a recurring sequence of 1-2-3-4-5) and, more importantly, humans are incapable of producing true random sequences (Figurska et al., 2008). But the distinction between careless and random responses may point to the difference between intentional and non-intentional response behavior. ...
Preprint
Full-text available
Careless responding is considered a bias in survey responses without regard to the actual item content, which constitutes a threat to the factor structure, reliability, and validity of psychological measurements. Different approaches have been proposed to detect aberrant responses, such as probing questions that directly assess test-taking behavior (e.g., bogus items), auxiliary or paradata (e.g., response times), or data-driven statistical techniques (e.g., Mahalanobis distance). In the present study, gradient boosted trees, a state-of-the-art machine learning technique, are introduced to identify careless responders. The performance of the approach was compared to established techniques previously described in the literature (e.g., statistical outlier methods, consistency analyses, and response pattern functions) using simulated data and empirical data from a web-based study, in which diligent vs. careless response behavior was induced. The comparison between the results of the simulation and the online study showed that simulations that rely on prototypical patterns of careless responses tend to overestimate the classification accuracy. Gradient boosted trees outperform traditional detection mechanisms in flagging aberrant responses, especially by including response times as paradata, but are not to be misunderstood as a panacea of data cleaning. We critically discuss the results with regard to their generalizability and provide recommendations for the detection of aberrant response patterns in survey research.
... The importance of random numbers has made their generation a major research focus. The supposition that randomness occurs in nature is the basis for many theories in science and constitutes the bedrock of quantum mechanics; as such, natural occurrences like radioactive decay can be used to generate random numbers [8]. The use of computers and software to generate random numbers has been seen in algorithms such as the Mersenne twister [18] and Algorithm AS 183 [31]. ...
... Although this is a different aspect of randomness than that of the process itself, QRNGs should also provide an advantage here: PRNGs are guaranteed to produce computable sequences in stark contrast to the incomputability of QRNGs [9][10][11][12]. Standard tests, however, have focused on intuitive aspects of randomness, such as the frequencies of certain (strings of) bits, but human intuition about randomness is notoriously poor [13,14] and many other symptoms of randomness remain untested. Indeed, the randomness of strings and sequences is an incomputable property and thus cannot be verified completely; moreover, it is characterised by an infinity of properties [15]. ...
Article
Full-text available
The advantages of quantum random number generators (QRNGs) over pseudo-random number generators (PRNGs) are normally attributed to the nature of quantum measurements. This is often seen as implying the superiority of the sequences of bits themselves generated by QRNGs, despite the absence of empirical tests supporting this. Nonetheless, one may expect sequences of bits generated by QRNGs to have properties that pseudo-random sequences do not; indeed, pseudo-random sequences are necessarily computable, a highly nontypical property of sequences. In this paper, we discuss the differences between QRNGs and PRNGs and the challenges involved in certifying the quality of QRNGs theoretically and testing their output experimentally. While QRNGs are often tested with standard suites of statistical tests, such tests are designed for PRNGs and only verify statistical properties of a QRNG, but are insensitive to many supposed advantages of QRNGs. We discuss the ability to test the incomputability and algorithmic complexity of QRNGs. While such properties cannot be directly verified with certainty, we show how one can construct indirect tests that may provide evidence for the incomputability of QRNGs. We use these tests to compare various PRNGs to a QRNG, based on superconducting transmon qutrits and certified by the Kochen-Specker theorem, to see whether such evidence can be found in practice. While our tests fail to observe a strong advantage of the quantum random sequences due to algorithmic properties, the results are nonetheless informative: some of the test results are ambiguous and require further study, while others highlight difficulties that can guide the development of future tests of algorithmic randomness and incomputability.
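A crude, finite stand-in for the algorithmic properties discussed here is compressibility: an algorithmically random sequence should be incompressible, while structured output compresses well. A sketch using zlib (the cutoff values and test data are illustrative; real batteries use far more refined statistics, and incompressibility by one compressor is only weak evidence):

```python
import random
import zlib

def compression_ratio(bits):
    """Compressed size / raw size for a bit sequence, after packing the
    bits into bytes. Ratios near or above 1 mean the compressor found
    no structure; small ratios mean the sequence is highly regular."""
    raw = bytes(
        int("".join(map(str, bits[i:i + 8])).ljust(8, "0"), 2)
        for i in range(0, len(bits), 8)
    )
    return len(zlib.compress(raw, 9)) / len(raw)

rng = random.Random(1)
random_bits = [rng.randrange(2) for _ in range(8000)]
periodic_bits = [0, 1] * 4000  # maximally regular sequence

r_random = compression_ratio(random_bits)
r_periodic = compression_ratio(periodic_bits)
```

Note the irony the paper dwells on: the pseudorandom sequence passes this test despite being computable, which is exactly why finite tests cannot certify the incomputability that supposedly distinguishes QRNG output.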
... RNGs are usually tested by conducting batteries of tests on (finite) sequences they have produced [46,53]. Traditionally, such tests have focused on intuitive aspects of randomness, such as the frequencies of certain (strings of) bits, but human intuition about randomness is notoriously poor [14,32] and many other symptoms of randomness remain untested. Indeed, the randomness of strings and sequences is an incomputable property and thus cannot be verified completely; moreover, it is characterised by an infinity of properties [20]. ...
Preprint
Full-text available
The advantages of quantum random number generators (QRNGs) over pseudo-random number generators (PRNGs) are normally attributed to the nature of quantum measurements. This is often seen as implying the superiority of the sequences of bits themselves generated by QRNGs, despite the absence of empirical tests supporting this. Nonetheless, one may expect sequences of bits generated by QRNGs to have properties that pseudo-random sequences do not; indeed, pseudo-random sequences are necessarily computable, a highly nontypical property of sequences. In this paper, we discuss the differences between QRNGs and PRNGs and the challenges involved in certifying the quality of QRNGs theoretically and testing their output experimentally. While QRNGs are often tested with standard suites of statistical tests, such tests are designed for PRNGs and only verify statistical properties of a QRNG, but are insensitive to many supposed advantages of QRNGs. We discuss the ability to test the incomputability and algorithmic complexity of QRNGs. While such properties cannot be directly verified with certainty, we show how one can construct indirect tests that may provide evidence for the incomputability of QRNGs. We use these tests to compare various PRNGs to a QRNG, based on superconducting transmon qutrits, certified by the Kochen-Specker Theorem. While our tests fail to observe a strong advantage of the quantum random sequences due to algorithmic properties, the results are nonetheless informative: some of the test results are ambiguous and require further study, while others highlight difficulties that can guide the development of future tests of algorithmic randomness and incomputability.
... Similar to a previous study, random individual key presses were not used as we intended to employ a control task that was uncued and thus internally generated and explicitly known, similar to the MSL task. The use of self-generated random sequences was not possible either, as it has been shown that people are not able to reliably produce random sequences of movements (Figurska et al., 2008). Subjects were first instructed to press all four keys simultaneously following the rhythm of an auditory tone (presented monotonically at 3 Hz) as long as a green cross was displayed on the screen. ...
Article
Full-text available
Sleep is necessary for the optimal consolidation of newly acquired procedural memories. However, the mechanisms by which motor memory traces develop during sleep remain controversial in humans, as this process has been mainly investigated indirectly by comparing pre- and post-sleep conditions. Here, we used functional magnetic resonance imaging and electroencephalography during sleep following motor sequence learning to investigate how newly-formed memory traces evolve dynamically over time. We provide direct evidence for transient reactivation followed by downscaling of functional connectivity in a cortically-dominant pattern formed during learning, as well as gradual reorganization of this representation toward a subcortically-dominant consolidated trace during non-rapid eye movement (NREM) sleep. Importantly, the putamen functional connectivity within the consolidated network during NREM sleep was related to overnight behavioral gains. Our results demonstrate that NREM sleep is necessary for two complementary processes: the restoration and reorganization of newly-learned information during sleep, which underlie human motor memory consolidation.
... But it is widely held that humans cannot produce random numbers: psychologically meaningful patterns and associations unavoidably appear. So entrenched are these that even very simple empirical experiments needing random order sequences must resort to computational random number generators, often based on physical entropy sources or atmospheric noise (see, e.g., Figurska, Stańczyk, and Kulesza 2008). ...
... A bank customer receiving a new PIN has other options, in addition to memorising the PIN: first to change it and second to record it (see Fig 1). Changing clearly weakens the mechanism because humans are incapable of randomness [1], and this propensity will extend to PIN choice. Recording the PIN exacerbates the danger posed by theft, even more so if the record is carried with the banking card itself or recorded somewhere obvious. ...
... TRNGs can be designed to work using atmospheric noise, radioactive decay, bio-signals, etc. Persaud [8] concluded that humans can consciously generate random number sequences, but a repetition of the same experiment produced results that contradicted this hypothesis [9]. Studies reveal that subjects with mental disorders have an impaired ability to generate random numbers [10,11,12]. This should not, however, lead to the conclusion that healthy humans are good sources of random numbers. ...
... In this study the authors found that humans can generate only very limited randomness and that they cannot substantially increase the degree of motion randomness through training. In contrast, behavioral studies in psychology have indicated that the randomness of human-generated random number sequences might be dependent on the feedback provided to human subjects (Neuringer, 1986;Persaud, 2005;Figurska et al., 2008). ...
Article
Full-text available
Complexity is a hallmark of intelligent behavior consisting both of regular patterns and random variation. To quantitatively assess the complexity and randomness of human motion, we designed a motor task in which we translated subjects' motion trajectories into strings of symbol sequences. In the first part of the experiment participants were asked to perform self-paced movements to create repetitive patterns, copy pre-specified letter sequences, and generate random movements. To investigate whether the degree of randomness can be manipulated, in the second part of the experiment participants were asked to perform unpredictable movements in the context of a pursuit game, where they received feedback from an online Bayesian predictor guessing their next move. We analyzed symbol sequences representing subjects' motion trajectories with five common complexity measures: predictability, compressibility, approximate entropy, Lempel-Ziv complexity, as well as effective measure complexity. We found that subjects' self-created patterns were the most complex, followed by drawing movements of letters and self-paced random motion. We also found that participants could change the randomness of their behavior depending on context and feedback. Our results suggest that humans can adjust both complexity and regularity in different movement types and contexts and that this can be assessed with information-theoretic measures of the symbolic sequences generated from movement trajectories.
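Of the five measures listed, Lempel-Ziv complexity is the most compact to implement: it counts the distinct phrases produced by scanning the symbol sequence left to right and cutting a new phrase whenever the current substring has not appeared earlier. A sketch of one common variant (definitions of the LZ76 parsing differ slightly across papers; the strings below are standard illustrative examples, not the study's motion data):

```python
def lempel_ziv_complexity(s):
    """Number of phrases in a Lempel-Ziv (1976-style) parsing of the
    symbol string s. Scan left to right; extend the current phrase while
    it already occurs earlier in the string, and cut a new phrase as
    soon as it does not. Higher counts indicate more novelty."""
    phrases, i = 0, 0
    n = len(s)
    while i < n:
        j = i + 1
        # extend the candidate phrase s[i:j] while it occurs in s[:j-1]
        while j <= n and s[i:j] in s[:j - 1]:
            j += 1
        phrases += 1
        i = j
    return phrases
```

For example, the classic string '0001101001000101' parses into the six phrases 0, 001, 10, 100, 1000, 101, whereas a strictly periodic string of the same alphabet yields far fewer phrases, matching the paper's finding that self-created patterns score higher than repetitive motion.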
... Second, computations in their protocol are randomized (e.g., the human occasionally flips his answer), while the computations in our protocol are deterministic. This is significant because humans are not good at consciously generating random numbers [54,30,44] (e.g., noisy parity could be easy to learn when humans are the source of noise). It also means that their protocol would need to be modified in our setting so that the untrusted third party could validate a noisy response using only a cryptographic hash of the answer; invoking error correcting codes would increase the number of rounds needed to provide an acceptable level of security. ...
Conference Paper
Full-text available
Secure cryptographic protocols to authenticate humans typically assume that the human will receive assistance from trusted hardware or software. One interesting challenge for the cryptographic community is to build authentication protocols that are so simple that a human can execute them without relying on assistance from a trusted computer. We propose several candidate human authentication protocols in a setting in which the user can only receive assistance from a semi-trusted computer --- a computer that can be trusted to store information and perform computations correctly, but cannot be trusted to ensure privacy. In our schemes, a semi-trusted computer is used to store and display public challenges $C_i\in[n]^k$. The user memorizes a random secret mapping $\sigma:[n]\rightarrow \{0,\ldots,d-1\}$ and authenticates by computing responses $f(\sigma(C_i))$ to a sequence of public challenges, where $f:\{0,...,d-1\}^k\rightarrow \{0,...,d-1\}$ is a function that is easy for the human to evaluate. We prove that any statistical adversary needs to sample $m=\tilde{\Omega}(n^{s(f)})$ challenge-response pairs to recover $\sigma$ for a security parameter $s(f)$ that depends on two key properties of $f$. Our statistical dimension lower bounds apply to arbitrary functions --- not just functions that are easy for a human to evaluate --- and may be of independent interest. For our particular schemes, we show that forging passwords is equivalent to recovering the secret mapping. We also show that $s(f_1) = 1.5$ for our first scheme and that $s(f_2) = 2$ in our second scheme. Thus, our human computable password schemes can maintain strong security guarantees even after an adversary has observed the user login to many different accounts (e.g., 100). We also issue a public challenge to the cryptography community to crack passwords that were generated using our human computable password schemes.
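The challenge-response structure described in this abstract can be sketched concretely. In the sketch below, f is an illustrative stand-in (sum of the mapped values mod d) chosen only because a human could plausibly compute it mentally; it is an assumption, not one of the two schemes f1, f2 actually analyzed in the paper:

```python
import random

n, k, d = 26, 4, 10  # challenge alphabet size, challenge length, digit base
rng = random.Random(7)

# The user's memorized secret: a random mapping sigma from [n] to {0,...,d-1}.
sigma = [rng.randrange(d) for _ in range(n)]

def respond(challenge):
    """Response f(sigma(C)) to a public challenge C in [n]^k.

    f here is a placeholder (sum mod d), NOT one of the paper's schemes.
    """
    return sum(sigma[c] for c in challenge) % d

challenge = [rng.randrange(n) for _ in range(k)]
print(respond(challenge))  # a single digit in 0..9
```

The point of the construction is that the challenges C_i are public and only the mapping sigma is secret, so security rests entirely on how many challenge-response pairs an adversary needs to recover sigma.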
... Efforts to use humans for purposes of random number generation (RNG) are not as successful as computer pseudo-RNG. In an earlier work, Persaud [2] claimed that by simply asking his subjects to generate and dictate numbers, humans can generate sequences that are uniformly distributed, independent of one another and unpredictable, a claim refuted by Bains [3] and Figurska et al. [4] The sequences they obtained were in fact substantially non-random, with an excess of sequential pairs of numbers and a deficit of repeats of the same number. This failure is expected because the human mind is exceptionally good at recognizing patterns. ...
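The uniformity part of such claims can be checked with a standard Pearson chi-square test. The sketch below shows the statistic itself; the cited studies' actual test batteries were of course broader:

```python
from collections import Counter

def chi_square_uniform(digits, k=10):
    """Pearson chi-square statistic of a digit sequence against uniformity.

    With k = 10 symbols there are 9 degrees of freedom; values well above
    the 5% critical value (about 16.9) suggest a non-uniform distribution.
    """
    n = len(digits)
    expected = n / k
    counts = Counter(digits)
    return sum((counts.get(d, 0) - expected) ** 2 / expected for d in range(k))

print(chi_square_uniform(list(range(10)) * 10))  # perfectly uniform -> 0.0
print(chi_square_uniform([7] * 100))             # maximally biased -> 900.0
```

Note that passing this test is necessary but far from sufficient: a sequence can have perfectly uniform digit frequencies while still showing the pair-ordering biases described above.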
Article
Full-text available
Humans are deemed ineffective in generating a seemingly random number sequence, primarily because of inherent biases and fatigue. Here, we establish statistically that human-generated number sequences produced in the presence of visual cues show a considerably reduced tendency to fixate on a certain group of numbers, allowing the number distribution to be statistically uniform. We also show that a stitching procedure utilizing auditory cues significantly minimizes humans' intrinsic biases towards doublet and sequential ordering of numbers. The article provides extensive experimentation and comprehensive pattern analysis of the sequences formed when humans are tasked to generate a random series using numbers "0" to "9." In the process, we develop a statistical framework for analyzing the apparent randomness of finite discrete sequences via numerical measurements.
... However, even if the user is told to select the character uniformly at random, it is still impossible to make any formal security guarantees without understanding the entropy of a humanly generated random sequence. Humans have difficulty consciously generating a random sequence of numbers even when they are not trying to construct a memorable sequence [71][51][36]. This does not rule out the possibility that a human-generated random sequence could provide a weak source of entropy [42], which could be used to extract a truly random sequence with computer assistance [60, 35]. ...
Conference Paper
Full-text available
We introduce quantitative usability and security models to guide the design of password management schemes --- systematic strategies to help users create and remember multiple passwords. In the same way that security proofs in cryptography are based on complexity-theoretic assumptions (e.g., hardness of factoring and discrete logarithm), we quantify usability by introducing usability assumptions. In particular, password management relies on assumptions about human memory, e.g., that a user who follows a particular rehearsal schedule will successfully maintain the corresponding memory. These assumptions are informed by research in cognitive science and validated through empirical studies. Given rehearsal requirements and a user's visitation schedule for each account, we use the total number of extra rehearsals that the user would have to do to remember all of his passwords as a measure of the usability of the password scheme. Our usability model leads us to a key observation: password reuse benefits users not only by reducing the number of passwords that the user has to memorize, but more importantly by increasing the natural rehearsal rate for each password. We also present a security model which accounts for the complexity of password management with multiple accounts and associated threats, including online, offline, and plaintext password leak attacks. Observing that current password management schemes are either insecure or unusable, we present Shared Cues--- a new scheme in which the underlying secret is strategically shared across accounts to ensure that most rehearsal requirements are satisfied naturally while simultaneously providing strong security. The construction uses the Chinese Remainder Theorem to achieve these competing goals.
... The result extends to digit generation as well. Figurska et al. showed, in a 2008 study, that humans tend to choose successive identical digits with probability 7.58% instead of 10% [23]. Therefore it was not surprising that humans assessed human-generated sequences (created by other humans asked to generate sequences that they would expect from multiple coin tosses) as more random than truly random sequences actually generated by coin tosses [24]. ...
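The 7.58%-versus-10% figure is a simple statistic to compute. A sketch of the repeat-rate measurement, applied here to machine-generated digits as a baseline:

```python
import random

def repeat_rate(digits):
    """Fraction of positions holding the same digit as their predecessor."""
    return sum(a == b for a, b in zip(digits, digits[1:])) / (len(digits) - 1)

# For uniformly random digits 0-9 the expected repeat rate is exactly 1/10;
# human-generated sequences show a deficit (about 7.58% in Figurska et al.).
random.seed(1)
machine = [random.randrange(10) for _ in range(100_000)]
print(round(repeat_rate(machine), 3))  # close to 0.100
```

Applied to a human-dictated digit sequence of comparable length, the same function would be expected to return a value noticeably below 0.10, reflecting repetition avoidance.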
Article
Two computer scientists have created a video game about mice and elephants that can make computer encryption properly secure---as long as you play it randomly.
Article
Introduction The fast, intuitive and autonomous system 1, along with the slow, analytical and more logical system 2, constitute the dual system processing model of decision making. Whether acting independently or influencing each other, both systems would, to an extent, rely on randomness in order to reach a decision. The role of randomness, however, would be more pronounced when arbitrary choices need to be made, typically engaging system 1. The present exploratory study aims to capture the expression of a possible innate randomness mechanism, as proposed by the authors, by trying to isolate system 1 and examine arbitrary decision making in autistic participants with high functioning Autism Spectrum Disorders (ASD). Methods Autistic participants with high functioning ASD and an age- and gender-matched comparison group performed the random number generation task. The task was modified to limit the contribution of working memory and allow any innate randomness mechanisms expressed through system 1 to emerge. Results Utilizing a standard analysis approach, the random number sequences produced by autistic individuals and the comparison group did not differ in their randomness characteristics. No significant differences were identified when the sequences were examined using a moving window approach. When machine learning was used, random sequences' features could discriminate the groups with relatively high accuracy. Conclusions Our findings indicate the possibility that individual patterns during random sequence production could be consistent enough between groups to allow for an accurate discrimination between the autistic and the comparison group. In order to draw firm conclusions around innate randomness and further validate our experiment, our findings need to be replicated in a bigger sample.
Article
How do we ensure the veracity of science? The act of manipulating or fabricating scientific data has led to many high‐profile fraud cases and retractions. Detecting manipulated data, however, is a challenging and time‐consuming endeavor. Automated detection methods are limited due to the diversity of data types and manipulation techniques. Furthermore, patterns automatically flagged as suspicious can have reasonable explanations. Instead, we propose a nuanced approach where experts analyze tabular datasets, e.g., as part of the peer‐review process, using a guided, interactive visualization approach. In this paper, we present an analysis of how manipulated datasets are created and the artifacts these techniques generate. Based on these findings, we propose a suite of visualization methods to surface potential irregularities. We have implemented these methods in Ferret, a visualization tool for data forensics work. Ferret makes potential data issues salient and provides guidance on spotting signs of tampering and differentiating them from truthful data.
Article
Faced with a rapidly evolving virus, inventors must seek to experiment, iterate and deploy both creative and effective solutions. Supported by empirical model-driven analysis, this paper delves into fundamental paradoxes and biases in the context of epidemic research, increasing awareness at every stage of the clinical trial; ranging from hypothesizing to sampling, and analyses to fake data detection. Critically, the paper also presents novel ideas that demonstrate how the paradoxes and biases covered play into technology development and deployment to combat the surging pandemic, COVID-19.
Article
Reminding people to behave honestly or asking them to actively commit to honest behavior is an easily implementable intervention to reduce dishonesty. Earlier research has shown that such truth pledges affect lying behavior on a group level. In this study we are analyzing how a truth pledge changes the distribution of lying types which have been established in the literature, i.e. truth tellers, partial liars and extreme liars, to better understand whether truth pledges can affect the decision to lie or merely the extent of lies. For this purpose, we conduct a 2 × 2 experiment with 484 participants in which we apply a truth pledge in a gain and a loss frame. We introduce a novel “Even-Odd task” for online lying experiments, which is based on the well-established coin-toss design. The Even-Odd task takes into account that unbiased, physical randomization devices are not always available in online settings, which can be a problem for truth-tellers if they are bad mental randomizers. We therefore ask participants to think of privately known numbers (house numbers, phone numbers) and then determine randomly whether even or uneven numbers result in the higher payment. We find that the truth pledge significantly reduces lying but also that this effect is strongest for extreme liars. The uneven shift in the distribution of liars suggests that truth pledges are effective in decreasing the size of lies but not the number of lies told. This result is robust for both frames.
Thesis
Full-text available
Purpose: The purpose of the study is to evaluate block chain technologies for non-monetary use cases, specifically in health care and legal record keeping. The study examines whether the technology can be used beyond censorship-resistant payments using non-monetary transactions. The study compares different block chain types and how they can be applied. Design/methodology/approach: The study uses qualitative research with semi-structured interviews. Ten experts were interviewed from the fields of health care, legal permits and block chain. Findings: Health care is not suited for block chain. For the moment, it is unlikely such technology to be used in health care. Unless an ordered data structure is required or there is an ability to use a third party, there is no need for a block chain. Financial services benefit the most from having an ordered data structure. Block chains are still most useful in creating economic values. Legal record keeping and health care would only work with an off-chain trust and a proper identity solution. All use cases apart from value exchange must rely on a third party trust, because the value is already on the block chain in the form of cryptocurrency. Financial services are best suited for block chain for now. Research limitations/implications: Lack of empirical data because not many use block chains besides payments. Time constraints in expanding the research to DLTs and other block chain based solutions. Originality/value: The study can be applied to other non-monetary use cases. It is not restricted to only health care and legal record keeping.
Conference Paper
PINs have been around for half a century and many insecure PIN-related practices are in use. We attempted to mitigate these by developing two new PIN memorial assistance mechanisms that we tested in an online study. We were not able to show an improvement in memorability, mostly because people did not use the memorial aids. We realised that greater insight into PIN management mental models was needed in order to better formulate mitigation approaches. We proceeded to study PIN-related mental models, and we present our findings in this paper. The insights we gained convinced us that security researchers should not presume that people want, or need, our advice or help in any security context; they might well prefer to continue with their usual trusted practices. Yet advice should indeed still be offered, for those who do want it, and we make some suggestions about what this advice should look like in the PIN context.
Article
The capacity for random movement production is known to be limited in humans (e.g., Newell, Deutsch, & Morrison, 2000). We examined the effects of a brief mindfulness induction on random movement production because there are useful implications for variability in solving movement-related problems. The main task involved randomly clicking the 9 boxes in a 3 × 3 grid presented on a computer screen for five minutes. We characterized the sequence of clicking in terms of degrees of randomness, or periodicity, based on the fit, or probability, of the experimental data with its best fitting Bayesian network (4-click memory nodes) using the Markov chain Monte Carlo (MCMC) approach. Sixty-three participants were randomly assigned to either the experimental or the control condition. Mixed design repeated-measures ANOVA results show that the short mindfulness induction had a positive effect on the randomness of the sequence subsequently produced. This finding suggests that mindfulness may be a suitable strategy for increasing random movement behavior.
Article
Full-text available
Random number generation is one of the abilities of humans. It has been shown that sequences of random numbers generated by people do not meet full randomness criteria. These numbers, produced by brain activity, appear to be completely nonstationary. In this paper, we show that there is a distinction between the random numbers generated by different people, which provides discrimination capability and can be used as a biometric signature. We considered these numbers as a signal and calculated their complexity for various time-frequency sections. Then, with a proper structure of a support vector machine, we classify the features. The error rate obtained in this study shows high discrimination capability for this biometric characteristic.
Article
Full-text available
To test the hypothesis that, during random motor generation, the spatial contingencies inherent to the task would induce additional preferences in normal subjects, shifting their performances farther from randomness. By contrast, perceptual or executive dysfunction could alter these task related biases in patients with brain damage. Two groups of patients, with right and left focal brain lesions, as well as 25 right handed subjects matched for age and handedness were asked to execute a random choice motor task--namely, to generate a random series of 180 button presses from a set of 10 keys placed vertically in front of them. In the control group, as in the left brain lesion group, motor generation was subject to deviations from theoretical expected randomness, similar to those when numbers are generated mentally, as immediate repetitions (successive presses on the same key) are avoided. However, the distribution of button presses was also contingent on the topographic disposition of the keys: the central keys were chosen more often than those placed at extreme positions. Small distances were favoured, particularly with the left hand. These patterns were influenced by implicit strategies and task related contingencies. By contrast, right brain lesion patients with frontal involvement tended to show a more square distribution of key presses--that is, the number of key presses tended to be more equally distributed. The strategies were also altered by brain lesions: the number of immediate repetitions was more frequent when the lesion involved the right frontal areas yielding a random generation nearer to expected theoretical randomness. The frequency of adjacent key presses was increased by right anterior and left posterior cortical as well as by right subcortical lesions, but decreased by left subcortical lesions. 
Depending on the side of the lesion and the degree of cortical-subcortical involvement, the deficits take on a different aspect, and direct repetitions and adjacent key presses show different patterns of alteration. Motor random generation is therefore a complex task which seems to require the participation of numerous cerebral structures, among which those situated in the right frontal, left posterior, and subcortical regions play a predominant role.
Article
Full-text available
Random number generation is an attention-demanding task that engages working memory and executive processes. Random number generation requires holding information 'on line', suppression of habitual counting, internally driven response generation and monitoring of responses. Evidence from PET studies suggests that the dorsolateral prefrontal cortex (DLPFC) is involved in the generation of random responses. We examined the effects of short trains of transcranial magnetic stimulation (TMS) over the left or right DLPFC or medial frontal cortex on random number generation in healthy normal participants. As in previous evidence, in control trials without stimulation participants performed poorly on the random number generation task, showing repetition avoidance and a tendency to count. Brief disruption of processing with TMS over the left DLPFC changed the balance of the individuals' counting bias, increasing the most habitual counting in ones and reducing the lower probability response of counting in twos. This differential effect of TMS over the left DLPFC on the balance of the subject's counting bias was not obtained with TMS over the right DLPFC or the medial frontal cortex. The results suggest that, with disruption of the left DLPFC with TMS, habitual counting in ones that has previously been suppressed is released from inhibition. From these findings a network modulation model of random number generation is proposed, whereby suppression of habitual responses is achieved through the modulatory influence of the left DLPFC over a number-associative network in the superior temporal cortex. To allow emergence of appropriate random responses, the left DLPFC inhibits the superior temporal cortex to prevent spreading activation and habitual counting in ones.
Article
Full-text available
This study investigated the effects of age on a random generation task. In Experiment 1, young and elderly subjects were asked to generate random strings of letters at 1-, 2-, and 4-s rates. The elderly subjects produced more alphabetical stereotype responses than young subjects, even in the slowest rate condition. Furthermore, as faster rates were imposed, elderly subjects could no longer maintain the pace and missed responses. In Experiment 2, subjects were required to generate letters at the same time that they sorted cards into one, two, four, or eight categories. Age-related differences were observed on most of the measures of randomness (stereotypes, zero-order, and first-order measures). In addition, the number of errors increased with the number of sorting alternatives, especially for elderly subjects. These results suggest the existence of a reduction of the central executive resources, along with a reduced inhibition ability, in the elderly subjects. However, the contribution of a perceptual speed factor is also discussed.
Article
The current state of A.D. Baddeley and G.J. Hitch's (1974) multicomponent working memory model is reviewed. The phonological and visuospatial subsystems have been extensively investigated, leading both to challenges over interpretation of individual phenomena and to more detailed attempts to model the processes underlying the subsystems. Analysis of the controlling central executive has proved more challenging, leading to a proposed clarification in which the executive is assumed to be a limited capacity attentional system, aided by a newly postulated fourth system, the episodic buffer. Current interest focuses most strongly on the link between working memory and long-term memory and on the processes allowing the integration of information from the component subsystems. The model has proved valuable in accounting for data from a wide range of participant groups under a rich array of task conditions. Working memory does still appear to be working.
Article
Cryptography is concerned with the conceptualization, definition and construction of computing systems that address security concerns. The design of cryptographic systems must be based on firm foundations. This book presents a rigorous and systematic treatment of the foundational issues: defining cryptographic tasks and solving new cryptographic problems using existing tools. It focuses on the basic mathematical tools: computational difficulty (one-way functions), pseudorandomness and zero-knowledge proofs. The emphasis is on the clarification of fundamental concepts and on demonstrating the feasibility of solving cryptographic problems, rather than on describing ad-hoc approaches. The book is suitable for use in a graduate course on cryptography and as a reference book for experts. The author assumes basic familiarity with the design and analysis of algorithms; some knowledge of complexity theory and probability is also useful.
Article
The central executive component of working memory is a poorly specified and very powerful system that could be criticized as little more than a homunculus. A research strategy is outlined that attempts to specify and analyse its component functions and is illustrated with four lines of research. The first concerns the study of the capacity to coordinate performance on two separate tasks. A second involves the capacity to switch retrieval strategies as reflected in random generation. The capacity to attend selectively to one stimulus and inhibit the disrupting effect of others comprises the third line of research, and the fourth involves the capacity to hold and manipulate information in long-term memory, as reflected in measures of working memory span. It is suggested that this multifaceted approach is a fruitful one that leaves open the question of whether it will ultimately prove more appropriate to regard the executive as a unified system with multiple functions, or simply as an agglomeration of independent though interacting control processes. In the meantime, it seems useful to continue to use the concept of a central executive as a reminder of the crucially important control functions of working memory.
Chapter
I propose to consider the question, “Can machines think?” This should begin with definitions of the meaning of the terms “machine” and “think”. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words “machine” and “think” are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, “Can machines think?” is to be sought in a statistical survey such as a Gallup poll.
Article
Response selection was studied independently of the stimulus by asking subjects to generate random sequences of letters or numbers. Experiment 1 varied rate of letter generation from ½ sec. to 4 sec. per item and showed that the redundancy of the sequence increased linearly with rate. Experiment 2 added random generation of letters as a secondary task to paced card sorting. Information load per card was varied from 1 through 2 to 4 to 8 alternatives, with sorting rate held constant. As predicted, the redundancy of the sequences generated increased linearly with sorting load. Experiment 3 varied number of items to be randomized. Rate of random generation increased systematically from 2 to 4 to 8 alternatives, but levelled out beyond this point, showing no difference between 16 and 26. In general, these results suggest a response-selection mechanism of limited informational capacity.
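The redundancy measure used in these experiments is information-theoretic. A first-order version can be sketched as follows; this is an approximation for illustration, since the original analysis of generated sequences also considered higher-order (digram) statistics:

```python
import math
from collections import Counter

def redundancy(seq, alphabet_size):
    """First-order redundancy 1 - H/H_max of a symbol sequence.

    H is the Shannon entropy of the observed symbol frequencies and
    H_max = log2(alphabet_size); 0 means maximally varied symbol use,
    1 means a fully stereotyped (single-symbol) sequence.
    """
    n = len(seq)
    h = -sum((c / n) * math.log2(c / n) for c in Counter(seq).values())
    return 1 - h / math.log2(alphabet_size)

print(redundancy("ABABABAB", 2))  # both symbols equally frequent -> 0.0
print(redundancy("AAAAAAAA", 2))  # a single repeated symbol -> 1.0
```

Under this measure, the finding that redundancy increases linearly with generation rate means that, as subjects are paced faster, their letter choices become measurably less varied.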
Article
Korsakoff's syndrome often affects "executive" functions [Baddeley, A. Human Memory, Theory and Practice, 1990], which in anatomical terms are associated with the frontal lobes. However, in a previous study, Wiegersma, S. and de Jong, E. [J. clin. exp. Neuropsychol. 13, 847-853, 1991] failed to observe a diminished performance on the random generation task, although this task is thought to be sensitive to "executive" deficits. In the present study, we sought to replicate and clarify these earlier findings of Wiegersma and de Jong with a group of Korsakoff patients in whom frontal lobe dysfunction was indicated by a reduced performance on fluency tasks. Patients and controls were presented with three tasks. Digit span was used as an index of short-term memory capacity; memory search and comparison processes were measured with the missing item scan; and the randomisation task was used to assess the ability to produce non-routine, random sequences. The results showed that the performance of Korsakoff patients declined on the randomisation task while short-term retention and scanning were intact. Analysis of the responses indicated that the Korsakoff patients are able to suppress the dominant response, but have problems in generating and carrying out alternative strategies in novel problem situations.
Article
The quality of attempts at generating a random sequence of the numbers 1-6 was studied in 30 patients with dementia of the Alzheimer type (DAT) and 30 elderly normal control (NC) subjects. Three main findings emerged: (1) DAT patients' subjective random sequences were more stereotyped (contained fewer digit combinations) than those of NC subjects. (2) This difference in response stereotypy was due to patients' enhanced tendency to arrange consecutive numbers in an ascending series ('counting bias'). (3) In the patient group, degree of sequential nonrandomness was positively correlated with overall severity of dementia and with the extent to which performance on neuropsychological tests specifically assessing executive functions (fluency, naming, error monitoring) was impaired. These results illustrate a loss of behavioral complexity in the course of dementia and are interpreted as reflecting a frontal dysexecutive syndrome in DAT.
Article
In producing random numbers, subjects typically deviate systematically from statistical randomness. It is considered that these biases reflect constraints imposed by underlying structures and processes, rather than a deficient concept of randomness. Random number generation (RNG) places considerable demands on executive processes, and provides a possibly useful tool for their investigation. A group of patients with Parkinson's disease (PD) and a group of controls were tested on a RNG task, both alone and with a concurrent attention-demanding task (manual tracking). Both groups showed the biases in RNG described previously, including a strong counting tendency and repetition avoidance. Overall RNG performance did not differ between the groups, although differences were found in the counting biases in the patient and control groups, with the controls showing a bias towards counting in twos, and the patients a bias towards counting in ones. The secondary task reversed the bias shown by controls and exacerbated the bias in the patients. A network modulation model may help explain many of the features of RNG. We suggest that naturally biased output from an associative network must be actively suppressed by an attention-demanding, limited-capacity process. This suppression may be disrupted by the pathophysiology of PD and by concurrent tasks. Convergent evidence from various sources is discussed which supports a role of the dorsolateral prefrontal cortex (DLPFC) in this process.
Article
Evidence from PET studies suggests that the dorsolateral prefrontal cortex (DLPFC) is involved in generation of random responses. We used TMS to examine the specific role of this area in random generation of responses, a task which requires holding information 'on line', suppression of habitual or stereotyped response patterns, intrinsic response generation, monitoring of responses and modification of production strategies. From the results of a previous study of the effects of TMS on random number generation, we proposed a network modulation model, whereby suppression of habitual responses is considered a key process of random response generation and is achieved through the modulatory influence of the left DLPFC over an associative network distributed in the superior temporal cortex. The aim of the present study was to further investigate the generality of this model by examining the effects of short trains of TMS over the left or right DLPFC or medial frontal cortex on random letter generation in healthy participants. TMS over the left DLPFC significantly increased non-randomness relative to control no stimulation trials, which was not obtained with TMS over the right DLPFC or medial frontal cortex. The results suggest the generality of network modulation model of random response generation.
Article
In a random number generation (RNG) task, subjects are instructed to generate the numbers 1-10 in a random fashion. RNG performance is assumed to involve executive functions, as it requires controlled response generation and suppression of habitual responses. To identify the cerebral structures involved in RNG-associated executive functions, we used functional magnetic resonance imaging in eight healthy subjects while they performed an RNG task at two different response rates (1 and 2 Hz). During the 1 Hz condition, activation was detected bilaterally in the dorsolateral prefrontal cortex (BA 9/46), the lateral premotor cortex (BA 6), the anterior cingulate (BA 32), the inferior and superior parietal cortex (BA 7/40) and the cerebellar hemispheres. In the 2 Hz condition, behavioural data showed stronger counting tendencies, reflecting poorer executive control. In parallel, a homogeneous diminution of activity in the involved cortical areas was observed. This finding supports the theory that executive functions rely on a cortical network of distinct brain regions working together rather than on a single fronto-cortical localisation.
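The counting tendency described above can be quantified in several ways. A minimal sketch, assuming a simple adjacency-based measure (the fraction of successive responses that differ by exactly 1; this particular metric is an illustrative choice, not necessarily the score used in the study):

```python
def counting_tendency(seq):
    """Fraction of successive responses that differ by exactly 1
    (counting up or down), a simple proxy for habitual counting."""
    steps = sum(1 for a, b in zip(seq, seq[1:]) if abs(a - b) == 1)
    return steps / (len(seq) - 1)

# A purely counted sequence scores 1.0; a sequence with no
# adjacent steps scores 0.0. Truly random digits fall in between.
print(counting_tendency(list(range(1, 11))))  # 1.0
print(counting_tendency([1, 5, 9, 3, 7]))     # 0.0
```

Comparing a subject's score against the expected value for a uniform random source is one way to detect the drift toward counting reported at the faster response rate.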
Article
The generation of random sequences is considered to tax different executive functions. To explore the involvement of these functions further, brain potentials were recorded in 16 healthy young adults while either engaging in random number generation (RNG) by pressing the number keys on a computer keyboard in a random sequence or in ordered number generation (ONG) necessitating key presses in the canonical order. Key presses were paced by an external auditory stimulus to yield either fast (1 press/800 ms) or slow (1 press/1300 ms) sequences in separate runs. Attentional demands of random and ordered tasks were assessed by the introduction of a secondary task (key-press to a target tone). The P3 amplitude to the target tone of this secondary task was reduced during RNG, reflecting the greater consumption of attentional resources during RNG. Moreover, RNG led to a left frontal negativity peaking 140 ms after the onset of the pacing stimulus, whenever the subjects produced a true random response. This negativity could be attributed to the left dorsolateral prefrontal cortex and was absent when numbers were repeated. This negativity was interpreted as an index for the inhibition of habitual responses. Finally, in response locked ERPs a negative component was apparent peaking about 50 ms after the key-press that was more prominent during RNG. Source localization suggested a medial frontal source. This effect was tentatively interpreted as a reflection of the greater monitoring demands during random sequence generation.
Article
Random number generation (RNG) requires executive control. A novel paradigm using the eight drum pads of an electronic drum set as an input device was used to test 15 healthy subjects who engaged in random or ordered number generation (ONG). Brain potentials time-locked to the drum-beats revealed a more negative response during RNG than during ONG, with a left frontal distribution. Source analysis pointed to Brodmann area 9, which has been reported previously in a PET study and is thought to be engaged in the suppression of habitual responses, such as counting up in steps of one, during RNG. Lateralized readiness potentials, reflecting the difference in activation between the contra- and ipsilateral motor cortex, were smaller during ONG, reflecting the ability to preprogram such canonical sequences.
Article
Computer algorithms can only produce seemingly random, or pseudorandom, numbers, whereas certain natural phenomena, such as the decay of radioactive particles, can be utilized to produce truly random numbers. In this study, the ability of humans to generate random numbers was tested in healthy adults. Subjects were simply asked to generate and dictate random numbers. The generated numbers were tested for uniformity, independence and information density. The results suggest that humans can generate random numbers that are uniformly distributed, independent of one another and unpredictable. If humans can generate sequences of random numbers, then neural networks or forms of artificial intelligence, which are purported to function in ways essentially the same as the human brain, should also be able to generate sequences of random numbers. Elucidating the precise mechanism by which humans generate random number sequences, and the underlying neural substrates, may have implications for the cognitive science of decision-making. It is possible that humans use this random-generating neural machinery to make difficult decisions in which all expected outcomes are similar. It is also possible that certain people, perhaps those with neurological or psychiatric impairments, are less able or unable to generate random numbers. If the random-generating neural machinery is employed in decision making, its impairment would have profound implications in matters of agency and free will.
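The uniformity, independence and information-density tests mentioned above can be sketched with standard statistics. The specific measures below (a chi-square statistic against the uniform distribution, lag-1 autocorrelation, and empirical Shannon entropy) are plausible stand-ins, not necessarily the exact tests used in the study:

```python
import math
from collections import Counter

def uniformity_chi2(seq, k=10):
    """Chi-square statistic against a uniform distribution over k symbols;
    values near 0 indicate counts close to uniform."""
    expected = len(seq) / k
    counts = Counter(seq)
    return sum((counts.get(s, 0) - expected) ** 2 / expected for s in range(k))

def serial_correlation(seq):
    """Lag-1 autocorrelation; near 0 for independent draws."""
    n = len(seq)
    mean = sum(seq) / n
    num = sum((seq[i] - mean) * (seq[i + 1] - mean) for i in range(n - 1))
    den = sum((x - mean) ** 2 for x in seq)
    return num / den

def entropy_bits(seq):
    """Empirical Shannon entropy per symbol, in bits; the maximum for
    decimal digits is log2(10), about 3.32 bits."""
    counts = Counter(seq)
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

A human-dictated digit sequence that passes all three checks (chi-square near its expected value, autocorrelation near 0, entropy near log2(10)) would look random by these criteria; systematic counting or digit preferences show up as deviations.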
Article
We survey the main paradigms, approaches and techniques used to conceptualize, define and provide solutions to natural cryptographic problems. We start by presenting some of the central tools; that is, computational difficulty (in the form of one-way functions), pseudorandomness, and zero-knowledge proofs. Based on these tools, we turn to the treatment of basic applications such as encryption and signature schemes, as well as the design of general secure cryptographic protocols.
Article
The research described in this abstract was initiated by discussions between the author and Giovanni Di Crescenzo in Barcelona in early 2004, during the Advanced Course on Contemporary Cryptology, at which Di Crescenzo gave a course on zero-knowledge protocols (ZKP); see [1]. After that course we started to play with unorthodox ideas for breaking ZKP, especially the one based on graph 3-coloring. It was chosen for investigation because it is considered a "benchmark" ZKP; see [2], [3]. At this point we briefly recall the protocol's description.
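The graph 3-coloring ZKP follows a standard commit-challenge-reveal round: the prover randomly relabels the colors, commits to every vertex's color, and the verifier checks one randomly chosen edge. The sketch below uses a hash-based commitment and a toy triangle graph; both are illustrative assumptions, not the construction analyzed in the paper:

```python
import hashlib
import os
import random

def commit(value):
    """Hash commitment to a small integer; returns (commitment, nonce)."""
    nonce = os.urandom(16)
    return hashlib.sha256(nonce + bytes([value])).hexdigest(), nonce

def verify_opening(commitment, value, nonce):
    """Check that (value, nonce) opens the given commitment."""
    return hashlib.sha256(nonce + bytes([value])).hexdigest() == commitment

def zkp_round(edges, coloring):
    """One round: the prover permutes the three colors and commits to every
    vertex; the verifier challenges one random edge and accepts if both
    openings verify and the two endpoint colors differ."""
    perm = random.sample(range(3), 3)            # random relabeling of colors
    colors = {v: perm[c] for v, c in coloring.items()}
    commits = {v: commit(c) for v, c in colors.items()}
    u, w = random.choice(edges)                  # verifier's challenge
    return (verify_opening(commits[u][0], colors[u], commits[u][1])
            and verify_opening(commits[w][0], colors[w], commits[w][1])
            and colors[u] != colors[w])

# Triangle graph with a valid 3-coloring: every round passes.
edges = [(0, 1), (1, 2), (0, 2)]
coloring = {0: 0, 1: 1, 2: 2}
```

Each accepted round reveals only that two random endpoints have different (freshly permuted) colors, which is why repeating the round many times convinces the verifier without leaking the coloring itself.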
G. Joppich, J. Däuper, R. Dengler, S. Johannes, A. Rodrigues-Fornells, and T.F. Münte, Brain potentials index executive functions during random number generation, Neuroscience Research, 49:157-164, 2004.