Efficiently sampling vectors and coordinates from the n-sphere and n-ball
Aaron R. Voelker, Jan Gosmann, Terrence C. Stewart
Centre for Theoretical Neuroscience – Technical Report
January 4, 2017
Abstract
We provide a short proof that the uniform distribution of points for the n-ball is equivalent to the uniform distribution of points for the (n+1)-sphere projected onto n dimensions. This implies the surprising result that one may uniformly sample the n-ball by instead uniformly sampling the (n+1)-sphere and then arbitrarily discarding two coordinates. Consequently, any procedure for sampling coordinates from the uniform (n+1)-sphere may be used to sample coordinates from the uniform n-ball without any modification. For purposes of the Semantic Pointer Architecture (SPA), these insights yield an efficient and novel procedure for sampling the dot-product of vectors—sampled from the uniform ball—with unit-length encoding vectors.
1 Introduction
The Semantic Pointer Architecture (SPA; Eliasmith, 2013) is a cognitive architecture that has been used to build what remains the world's largest functioning model of the human brain (Eliasmith et al., 2012). Core to the SPA is the notion of a semantic pointer, which is a high-dimensional vector that represents compressed semantic information. Consequently, the current compiler for the SPA (Nengo; Bekolay et al., 2013) makes extensive use of computational procedures for uniformly sampling vectors, either from the surface of the unit n-sphere ($\{s \in \mathbb{R}^{n+1} : \|s\| = 1\}$) or from the interior of the unit n-ball ($\{b \in \mathbb{R}^n : \|b\| < 1\}$). Furthermore, when building specific models, we sometimes sample the dot-product of these vectors with arbitrary unit-length vectors (Knight et al., 2016). In summary, the SPA requires efficient algorithms for uniformly sampling high-dimensional vectors and their coordinates (Gosmann and Eliasmith, 2016).
To begin, it is worth stating a few facts. We use the term ‘coordinate’ to refer to an element
of some vector with respect to some basis. For uniformly distributed vectors from the n-ball or
n-sphere, the choice of basis for the coordinate system is arbitrary (and need not even stay fixed
between samples) – but it is helpful to consider the standard basis. Relatedly, the distribution of the dot-product of two vectors sampled uniformly from the n-sphere is equivalent to the distribution of any coordinate of a vector sampled uniformly from the n-sphere. Similarly, the distribution of the dot-product of a vector sampled uniformly from the n-ball with a vector sampled uniformly from the n-sphere is equivalent to the distribution of any coordinate of a vector sampled uniformly from the n-ball. These last two facts hold simply because we may suppose one of the unit vectors is elementary after an appropriate change of basis, in which case their dot-product extracts the corresponding coordinate.
Now there exist well-known algorithms for sampling points (i.e., vectors) from the n-sphere and n-ball. We review these in §2.1 and §2.2, respectively. In §2.3 we briefly review how to efficiently sample coordinates from the uniform n-sphere. Our main contribution is a proof in §3 that the n-ball may be uniformly sampled by arbitrarily discarding two coordinates from the (n+1)-sphere. This result was previously discovered by Harman and Lacko (2010), specifically by setting k = 2 in Corollary 1 and working through some details. We derived this result independently and thus present it here in an explicit and self-contained manner. This leads to the development of two algorithms: in §3.1 we provide an alternative algorithm for uniformly sampling points from the n-ball, and in §3.2 we provide an efficient and novel algorithm for sampling coordinates from the uniform n-ball by a simple reduction to the (n+1)-sphere.
2 Preliminaries
To help make this a self-contained reference, we summarize some previously known results:
2.1 Uniformly sampling the n-sphere
To uniformly sample points from the unit n-sphere, defined as $\{s \in \mathbb{R}^{n+1} : \|s\| = 1\}$:

1. Independently sample n+1 normally distributed variables: $x_1, \ldots, x_{n+1} \sim \mathcal{N}(0, 1)$.¹
2. Compute their $\ell_2$-norm: $r = \sqrt{\sum_{i=1}^{n+1} x_i^2}$.
3. Return the vector $s = (x_1, \ldots, x_{n+1}) / r$.
This is implemented in Nengo as nengo.dists.UniformHypersphere(surface=True) with dimensionality parameter d = n + 1.
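For reference, the three steps above translate directly into a few lines of NumPy. This is a minimal sketch (the helper name sample_sphere and its signature are ours, not part of Nengo):

```python
import numpy as np

def sample_sphere(n, num_samples, rng=None):
    """Uniformly sample `num_samples` points from the unit n-sphere in R^(n+1)."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal((num_samples, n + 1))   # step 1: i.i.d. N(0, 1) variables
    r = np.linalg.norm(x, axis=1, keepdims=True)    # step 2: l2-norm of each sample
    return x / r                                    # step 3: normalize onto the sphere
```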
2.2 Uniformly sampling the n-ball
To uniformly sample points from the unit n-ball—defined as $\{b \in \mathbb{R}^n : \|b\| < 1\}$—we use the previous algorithm as follows:

1. Sample $s \in \mathbb{R}^n$ from the (n−1)-sphere.
2. Uniformly sample $c \sim U[0, 1]$.
3. Return the vector $b = c^{1/n} s$.
This is implemented in Nengo as nengo.dists.UniformHypersphere(surface=False) with dimensionality parameter d = n.
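A matching sketch of this procedure, assuming the imports and the sample_sphere helper from the previous snippet:

```python
def sample_ball(n, num_samples, rng=None):
    """Uniformly sample `num_samples` points from the interior of the unit n-ball in R^n."""
    rng = np.random.default_rng() if rng is None else rng
    s = sample_sphere(n - 1, num_samples, rng)   # step 1: directions on the (n-1)-sphere
    c = rng.uniform(size=(num_samples, 1))       # step 2: c ~ U[0, 1]
    return c ** (1.0 / n) * s                    # step 3: scale radii by c^(1/n)
```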
2.3 Uniformly sampling coordinates from the n-sphere
To sample coordinates from the unit n-sphere (i.e., uniform points from the sphere projected onto an arbitrary unit vector) we could simply modify §2.1 to return only a single element – but this would be inefficient for large n. Instead, we use nengo.dists.CosineSimilarity(n + 1) to directly sample the underlying distribution, via its probability density function (Voelker and Eliasmith, 2014; eq. 11):

$$f(x) \propto \left(1 - x^2\right)^{\frac{n}{2} - 1},$$

which may be expressed using the "SqrtBeta" distribution (Gosmann and Eliasmith, 2016).²
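One direct way to sample this density (a sketch based on the density above, not on Nengo's CosineSimilarity implementation) uses the fact that the squared coordinate of a uniform point on the n-sphere is Beta(1/2, n/2)-distributed, so a coordinate is a random sign times the square root of a Beta variate:

```python
import numpy as np

def sample_sphere_coordinate(n, num_samples, rng=None):
    """Sample coordinates of points drawn uniformly from the unit n-sphere.

    The squared coordinate follows Beta(1/2, n/2), which corresponds to the
    density f(x) proportional to (1 - x^2)^(n/2 - 1) on [-1, 1].
    """
    rng = np.random.default_rng() if rng is None else rng
    x_squared = rng.beta(0.5, n / 2.0, size=num_samples)
    sign = rng.choice([-1.0, 1.0], size=num_samples)
    return sign * np.sqrt(x_squared)
```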
3 Results
Lemma 1. Let n be a positive integer and let $x_1, \ldots, x_{n+2} \sim \mathcal{N}(0, 1)$ be independent, normally distributed random variables. Then:³

$$c^{1/n} \overset{D}{=} \frac{\sqrt{\sum_{i=1}^{n} x_i^2}}{\sqrt{\sum_{i=1}^{n+2} x_i^2}}, \qquad (1)$$

where $c \sim U[0, 1]$ is a uniformly distributed random variable.

Proof. Let $X = \sum_{i=1}^{n} x_i^2$ and $Y = \sum_{i=n+1}^{n+2} x_i^2$. Observe that $X \sim \chi^2(n)$, $Y \sim \chi^2(2)$, and $X \perp Y$ (i.e., X and Y are independent chi-squared variables with n and 2 degrees of freedom, respectively). Using relationships between the chi-squared/Beta/Kumaraswamy distributions, we know that:

$$\frac{X}{X+Y} \sim \beta(n/2,\, 1) \;\Longrightarrow\; \frac{X}{X+Y} \sim \mathrm{Kumaraswamy}(n/2,\, 1) \;\Longrightarrow\; \left(\frac{X}{X+Y}\right)^{n/2} \sim U[0, 1].$$

Focusing on the final distribution, raise both sides to the exponent 1/n to obtain (1).
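As an informal sanity check of Lemma 1 (our own illustration, not part of the proof), the two sides of (1) can be compared empirically:

```python
import numpy as np

# Monte Carlo check of Lemma 1 for a small n.
n, num_samples = 5, 100_000
rng = np.random.default_rng(seed=0)

lhs = rng.uniform(size=num_samples) ** (1.0 / n)                    # c^(1/n)
x = rng.standard_normal((num_samples, n + 2))
rhs = np.linalg.norm(x[:, :n], axis=1) / np.linalg.norm(x, axis=1)  # right-hand side of (1)

# The empirical quantiles of the two samples should closely agree.
print(np.percentile(lhs, [25, 50, 75]))
print(np.percentile(rhs, [25, 50, 75]))
```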
¹ The choice of variance for the normal distribution is an arbitrary positive constant.
² https://github.com/nengo/nengo/blob/614e7657afd1f16b296a06068f3d4673e5b575d2/nengo/dists.py#L431
³ We use $\overset{D}{=}$ to denote that two random variables have the same distribution.
Theorem 1. Let n be a positive integer, let b be a random n-dimensional vector uniformly distributed on the unit n-ball, let s be a random (n+2)-dimensional vector uniformly distributed on the unit (n+1)-sphere, and finally let $P \in \mathbb{R}^{n \times (n+2)}$ be any rectangular orthogonal matrix.⁴ Then:

$$b \overset{D}{=} P s. \qquad (2)$$

Proof. By §2.1, $s = (x_1, \ldots, x_{n+2}) / r$, where $x_1, \ldots, x_{n+2} \sim \mathcal{N}(0, 1)$ and $r = \sqrt{\sum_{i=1}^{n+2} x_i^2}$. Also let $\tilde{r} = \sqrt{\sum_{i=1}^{n} x_i^2}$. Since the uniform distribution for the sphere (and for the ball) is invariant under change of basis, we may assume without loss of generality that P is the (n+2)-dimensional identity with its last two rows removed:

$$\begin{aligned}
P s &\overset{D}{=} (x_1, \ldots, x_n)/r \\
&= (\tilde{r}/r)\,(x_1, \ldots, x_n)/\tilde{r} \\
&\overset{D}{=} c^{1/n}\,(x_1, \ldots, x_n)/\tilde{r} && \text{(where } c \sim U[0,1] \text{ by Lemma 1)} \\
&\overset{D}{=} b && \text{(by §2.2),}
\end{aligned}$$

where the final step holds because $(x_1, \ldots, x_n)/\tilde{r}$ is uniformly distributed on the (n−1)-sphere and is independent of $\tilde{r}/r$, matching the construction in §2.2.
3.1 Uniformly sampling the n-ball (alternative)
As a corollary to Theorem 1, we obtain the following alternative to §2.2 for the n-ball:
1. Sample $s \in \mathbb{R}^{n+2}$ from the (n+1)-sphere.
2. Return the vector $b = (s_1, \ldots, s_n)$.
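In code, this is just the sphere sampler from §2.1 with the last two coordinates dropped; a minimal sketch, again assuming the sample_sphere helper defined earlier:

```python
def sample_ball_via_sphere(n, num_samples, rng=None):
    """Uniformly sample the unit n-ball by sampling the (n+1)-sphere in R^(n+2)
    and keeping only the first n coordinates (Theorem 1)."""
    s = sample_sphere(n + 1, num_samples, rng)   # points in R^(n+2)
    return s[:, :n]                              # discard the last two coordinates
```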
3.2 Uniformly sampling coordinates from the n-ball
To efficiently sample coordinates from the uniform n-ball (i.e., uniform points from the ball projected onto an arbitrary unit vector), observe that in §3.1 the elements of b correspond directly to elements of s. In other words, sampling coordinates from the uniform n-ball reduces to sampling coordinates from the uniform (n+1)-sphere. Therefore, we simply reuse the method from §2.3 to sample coordinates from the (n+1)-sphere: nengo.dists.CosineSimilarity(n + 2).
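Concretely, using the hypothetical sample_sphere_coordinate sketch from §2.3, a coordinate of the uniform n-ball is obtained by passing n + 1 in place of n, which corresponds to the density proportional to $(1 - x^2)^{(n-1)/2}$:

```python
def sample_ball_coordinate(n, num_samples, rng=None):
    """Sample coordinates of points drawn uniformly from the unit n-ball,
    by reduction to coordinates of the uniform (n+1)-sphere."""
    return sample_sphere_coordinate(n + 1, num_samples, rng)
```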
References
Trevor Bekolay, James Bergstra, Eric Hunsberger, Travis DeWolf, Terrence C Stewart, Daniel
Rasmussen, Xuan Choo, Aaron R Voelker, and Chris Eliasmith. Nengo: a Python tool for
building large-scale functional brain models. Frontiers in Neuroinformatics, 7, 2013.
Chris Eliasmith. How to build a brain: A neural architecture for biological cognition. Oxford
University Press, 2013.
Chris Eliasmith, Terrence C Stewart, Xuan Choo, Trevor Bekolay, Travis DeWolf, Yichuan Tang,
and Daniel Rasmussen. A large-scale model of the functioning brain. Science, 338(6111):1202–1205, 2012.
Jan Gosmann and Chris Eliasmith. Optimizing semantic pointer representations for symbol-like
processing in spiking neural networks. PLOS ONE, 11(2):e0149928, 2016.
Radoslav Harman and Vladimír Lacko. On decompositional algorithms for uniform sampling from
n-spheres and n-balls. Journal of Multivariate Analysis, 101(10):2297–2304, 2010.
James Knight, Aaron R Voelker, Andrew Mundy, Chris Eliasmith, and Steve Furber. Efficient
SpiNNaker simulation of a heteroassociative memory using the Neural Engineering Framework.
In The 2016 International Joint Conference on Neural Networks (IJCNN). IEEE, 2016.
Aaron R Voelker and Chris Eliasmith. Controlling the Semantic Pointer Architecture with deter-
ministic automata and adaptive symbolic associations. Technical report, Centre for Theoretical
Neuroscience, Waterloo, ON, 2014.
⁴ We use "rectangular orthogonal" to mean $PP^\top = I$ in this case, or equivalently that the rows of P are orthonormal. This transformation matrix can be understood as a change of basis followed by the deletion of two coordinates.