
arXiv:0910.2178v1 [astro-ph.CO] 12 Oct 2009

Statistical Properties of the Spatial Distribution of Galaxies

N. Yu. Lovyagin1

St.-Petersburg State University, Universitetskij pr. 28, St.-Petersburg, 198504 Russia

Astrophysical Bulletin, 2009, Vol. 64, No. 3, pp. 217–228.

The original publication is available at www.springerlink.com:

http://www.springerlink.com/content/m04u0814r17065l1.

Abstract

The methods of determining the fractal dimension and irregularity scale in simulated

galaxy catalogs and the application of these methods to the data of the 2dF and 6dF cat-

alogs are analyzed. Correlation methods are shown to be correctly applicable to fractal

structures only at the scale lengths from several average distances between the galaxies,

and up to (10–20)% of the radius of the largest sphere that fits completely inside the

sample domain. Earlier the correlation methods were believed to be applicable up to the

entire radius of the sphere and the researchers did not take the above restriction into

account while finding the scale length corresponding to the transition to a uniform distri-

bution. When an empirical formula is applied for approximating the radial distributions

in the samples confined by the limiting apparent magnitude, the deviation of the true

radial distribution from the approximating formula (but not the parameters of the best

approximation) correlates with the fractal dimension. An analysis of the 2dF catalog yields a

fractal dimension of 2.20±0.25 on scale lengths from 2 to 20 Mpc, whereas no conclusive

estimates can be derived by applying the conditional density method for larger scales due

to the inherent biases of the method. An analysis of the radial distributions of galaxies

in the 2dF and 6dF catalogs revealed significant irregularities on scale lengths of up to

70 Mpc. The magnitudes and sizes of these irregularities are consistent with the fractal

dimension estimate of D = 2.1–2.4.

1. INTRODUCTION

The spatial distribution of galaxies bears signatures of both the initial conditions in the

early Universe and the evolution of the primordial density perturbations. An analysis of various

galaxy samples performed using the two-point correlation function showed that this function

has a power-law form ξ(r) = (r0/r)^γ on scale lengths ranging from 0.01 to 10 Mpc (hereafter we adopt a Hubble constant of H0 = 100 km/s/Mpc) with a slope of γ = 1.77 and the parameter r0 = 5 Mpc [1]. It has long been considered that the scale of the r0 parameter is the typical

irregularity scale length, and the distribution of galaxies becomes uniform starting from the

scale length of r0 = 5 Mpc. However, the discovery of structures with the scale lengths of

several tens and hundreds Mpc [2] in recent surveys has cast doubt upon this hypothesis.

In this context, the problems of applicability limits and reliability of the correlation methods

of the analysis of spatial distribution of galaxies, and finding new methods for describing large

and very large structures acquire special importance.

At present, two kinds of data on the galaxy redshifts are of great importance.

1E-mail: lovyagin@mail.com


• The first kind are the redshift catalogs covering large areas (solid angles) of the sky, but

limited to small redshifts (up to z ≲ 0.5) (2dF, 6dF, SDSS, etc.). Such catalogs can be

analyzed by applying the correlation methods to determine the fractal dimension.

• The second kind is represented by the deep-field catalogs of photometric redshifts. Such

studies cover small solid angles (of the order of 1◦×1◦), but extend to much larger redshifts

z > 1 (up to 6) (COSMOS, HDF, HUDF, FDF and others). Correlation methods are

difficult to apply to such catalogs due to the small radius of the largest sphere that fits

entirely inside the small solid angle considered.

However, both kinds of catalogs can be used to analyze the radial distribution of galaxies,

built upon a sample confined by the limiting apparent magnitude. This method not only

removes the restriction on the size of the largest sphere thereby significantly increasing the

attainable research scale lengths, but it can also be applied to all galaxies in the catalog and

not only to those in a volume-limited sample thereby increasing the number of objects studied.

An analysis of fluctuations in the radial distribution of galaxies can be used to determine both

the sizes and the amplitudes of the largest structures in the galaxy sample considered.

In this paper we analyze two methods of statistical analysis of structures—a determination

of the fractal dimension, and an analysis of radial distributions. Despite the fact that our

analysis is limited to the 2dF and 6dF catalogs, we constructed our simulated lists with two

kinds of catalogs (covering large and small solid angles on the sky).

In this paper we make use of our own software, developed to simulate three-dimensional

catalogs of galaxies and to perform statistical analysis of both real and simulated samples. It

is a C++ library of functions (so far, without a user interface). We are currently preparing

its description, which will be made available, along with the source code, at our web site. The

software covers a somewhat broader scope of problems than that described in this paper, and

will be a basis for a future package meant for comprehensive statistical analysis of the spatial

distribution of galaxies.

2. METHODS USED TO ANALYZE THE STRUCTURES

2.1. Estimating the Fractal Dimension

Fractal dimension is estimated using the method of conditional density in spheres (the total

correlation function in spheres). The definitions of the total and reduced correlation functions

and a detailed description of their properties can be found in [2]. We chose the method of

conditional density in spheres for the reasons stated by Vasil’ev [3]. He showed that this method

is, on the one hand, sufficiently fast (compared to the method of cylinders), and, on the other

hand, sufficiently accurate (the conditional density in spheres is, unlike the conditional density

in shells, less subject to fluctuations) and, moreover, it can be applied to fractal structures

(unlike the method of reduced two-point correlation function, which is built assuming uniform

distribution inside the sample).

The idea of the method consists of constructing the dependence of the number of points N(r) inside a sphere of radius r, averaged over spheres centered on all the points of the set. Only a portion of the set is considered, therefore the averaging should be performed only over the spheres that fit completely inside the set. The dimension is computed from the conditional number density² n(r) = N(r)/((4/3)πr³) in logarithmic coordinates, where the slope of the line must be equal to the fractal dimension D minus three, because the expected behavior is n(r) ∝ r^(D−3).
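As a minimal sketch of this averaging (assuming a sample in the unit cube and a plain NumPy implementation; this is an illustration, not the authors' C++ library), the conditional density can be estimated as follows:

```python
import numpy as np

def conditional_density(points, radii):
    """Estimate n(r) = N(r) / ((4/3) pi r^3), averaging the counts N(r)
    only over spheres that fit completely inside the unit-cube sample."""
    points = np.asarray(points, dtype=float)
    n_of_r = []
    for r in radii:
        # Keep only centers farther than r from every face of the cube,
        # so the sphere of radius r lies entirely inside the sample.
        inside = np.all((points >= r) & (points <= 1.0 - r), axis=1)
        counts = []
        for c in points[inside]:
            d = np.linalg.norm(points - c, axis=1)
            counts.append(np.sum(d <= r) - 1)  # exclude the center itself
        n_of_r.append(np.mean(counts) / (4.0 / 3.0 * np.pi * r**3))
    return np.array(n_of_r)

# The fractal dimension follows from the log-log slope, n(r) ~ r^(D-3);
# for a uniform sample the slope is close to zero, i.e. D is close to 3.
```

Fitting a straight line to log n(r) versus log r then gives D − 3 directly.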

2.2. Analysis of Radial Distributions

The radial distribution is the dependence N(z) such that

dN(z, dz) = N(z) dz, (1)

where dN is the number of galaxies with redshifts between z and z + dz. The construction of

such a distribution involves counting the number of galaxies ∆N(z,∆z) inside a spherical shell

of thickness ∆z, with midradius lying at the distance corresponding to redshift z, i.e., formula

(1) transforms into

∆N(z,∆z) = N(z)∆z.

Thus, the N(z) distribution can be built in bins with a certain chosen step in ∆z. Traditionally,

the ∆N(z,∆z) variable—the number of galaxies in shells—is plotted on the curves of radial

distribution.

For magnitude-limited catalogs the radial distribution N(z) is approximated by the following empirical formula (see, e.g., [4, 5]):

N(z) = A z^γ exp[−(z/z_c)^α]. (2)

Here the three parameters γ, z_c and α are independent of each other and A is the normalizing

factor (the integral of radial distribution is normalized to the total number of galaxies in the

sample):

∫₀^∞ N(z) dz = ∫₀^∞ A z^γ exp[−(z/z_c)^α] dz = (A z_c^{γ+1}/α) Γ((γ+1)/α) = N, (3)

where N is the total number of galaxies and Γ(x) is the (complete) Euler Gamma-function.

However, when searching for the best approximation of the radial distribution, A cannot simply be fixed by (3); because of the fluctuations we have to search for it in the interval from A − √A to A + √A.
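Solving Eq. (3) for A gives the starting value A = Nα / (z_c^{γ+1} Γ((γ+1)/α)). A small sketch of this computation (the function name is ours, not from the authors' software):

```python
import math

def norm_factor(N, gamma, z_c, alpha):
    """Starting value of the normalizing factor A from Eq. (3):
    the integral of A z^gamma exp(-(z/z_c)^alpha) over z equals N."""
    return N * alpha / (z_c**(gamma + 1.0) * math.gamma((gamma + 1.0) / alpha))
```

In the fitting procedure this value is then perturbed within the interval (A − √A, A + √A) described above.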

The approximation is performed via the least squares method, i.e., one must search for

the parameter values that minimize the sum of squared residuals. The classical least squares

method cannot be applied as the approximating function is not linear in parameters. However,

a “straightforward” minimization using the steepest-descent (gradient) method is also extremely

inefficient, as the minimum is indistinct and it may take a computer several days to several

months to find it. That is why we employ the grid search method, where the grid mesh and

search domain are reduced at each successive iteration.
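One possible shape of such an iterative grid search (a generic sketch under our own assumptions, not the authors' implementation) is to evaluate the loss on a regular grid, then shrink the search box around the best node and repeat:

```python
import numpy as np

def grid_search(loss, bounds, n_grid=10, n_iter=6, shrink=0.5):
    """Iterative grid search: evaluate `loss` on a regular grid over `bounds`,
    keep the best node, shrink the search box around it, and repeat."""
    bounds = [list(b) for b in bounds]
    best = None
    for _ in range(n_iter):
        axes = [np.linspace(lo, hi, n_grid) for lo, hi in bounds]
        grids = np.meshgrid(*axes, indexing="ij")
        pts = np.stack([g.ravel() for g in grids], axis=1)
        vals = np.array([loss(p) for p in pts])
        best = pts[np.argmin(vals)]
        # Shrink each parameter interval around the current best node.
        for k, (lo, hi) in enumerate(bounds):
            half = (hi - lo) * shrink / 2.0
            bounds[k] = [best[k] - half, best[k] + half]
    return best
```

Because the box shrinks geometrically, the resolution improves by a factor of 1/shrink per iteration at a fixed cost of n_grid^d evaluations per step.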

After finding the best-fit parameters, the domains of irregularities are identified on the curve

of relative fluctuations:

σ_N = (N_obs − N_theor)/N_theor, (4)

²The terms “density” and “concentration” are synonymous in this sense, since the concentration is the density of point sources of unit mass.


where

N_obs = N(z_i, ∆z),   N_theor = A z^γ exp[−(z/z_c)^α] |_{z=z_i}.

We can thus interpret any fluctuation exceeding the Poisson noise level, σ_N > 3σ_p, as a structure, where³ σ_p = 1/√N_theor, because in a fractal distribution the characteristic fluctuation is increased by σ_ξ, which can be computed from the value of the two-point correlation function ξ(r):

σ_ξ² = (1/V²) ∫_V dV₁ ∫_V dV₂ ξ(|r₁ − r₂|),

where V is the volume of the set [6, 7, 8].
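Testing Eq. (4) against the 3σ_p threshold reduces to a few array operations. A sketch, assuming NumPy arrays of observed and model counts per redshift bin (the function name is our own, introduced for illustration):

```python
import numpy as np

def find_structures(n_obs, n_theor, threshold=3.0):
    """Relative fluctuations of Eq. (4) and the bins exceeding the Poisson
    level. sigma_p = 1/sqrt(N_theor) uses the model counts, which, unlike
    the observed counts, never vanish."""
    sigma_n = (n_obs - n_theor) / n_theor
    sigma_p = 1.0 / np.sqrt(n_theor)
    flagged = np.flatnonzero(np.abs(sigma_n) > threshold * sigma_p)
    return sigma_n, flagged
```

For example, with a model count of 100 per bin (σ_p = 0.1), only bins deviating by more than 30% would be flagged as structures.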

3. CATALOGS USED

3.1. The 2dF Catalog

The 2dF catalog [9], or, more precisely, its 2dFGRS subsample, which includes the data

on the redshifts of galaxies, contains a total of 245591 objects, of which about 220 thousand

have sufficiently accurately measured redshifts. The magnitude limits in the J-band, corrected

for the Galactic extinction, are 14.0 < m_J < 19.45. Most of the galaxies have redshifts z < 0.3.

The catalog is available at http://magnum.anu.edu.au/~TDFgg.

The galaxies of the catalog concentrate in the sky in two continuous strips extending along

the right ascension, and in randomly scattered small areas. About 140 thousand galaxies are

located in the Southern strip, and about 70 thousand galaxies, in the Northern strip.

3.2. The 6dF Catalog

The 6dFGS catalog is an all-sky spectroscopic survey at Galactic latitudes |b| > 10° [10,

11, 12]. Observations began in 2003 and were made using a multichannel spectrograph (they

had not yet been completed at the time of writing this paper). The catalog is available

at http://www-wfau.roe.ac.uk/6dFGS. In this paper we use the second data release of the

catalog, which contains 83014 galaxies with known equatorial coordinates. Of these, 71627

objects have sufficiently reliably determined redshifts. The survey has been completed in three

sky areas. In this paper we use a sample of galaxies with known R-band magnitudes.

4. SIMULATED GALAXY CATALOGS

To test the reliability and accuracy, and to identify the applicability limits of the methods,

they must be applied to simulated catalogs. To this end, we generate catalogs that simulate not

only the spatial distribution of galaxies (uniform and fractal), but also the distribution of their

absolute magnitudes (i.e., the luminosity function of galaxies). Such catalogs can be subjected

to both the correlation analysis (determination of the fractal dimension) in a volume-limited

³Here we use N_theor and not N_obs, because the latter may be equal to zero.


sample in a large solid angle, and to the analysis of the radial distribution in a magnitude-

limited sample either in a large or in a small solid angle.

Moreover, we use the Mersenne Twister pseudorandom number generator to generate random quantities (space positions and absolute magnitudes of galaxies). This generator, unlike the standard linear congruential generator, produces far less correlated numbers and is considered suitable for use with the Monte Carlo method [13].
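For illustration only: Python's standard `random` module happens to implement the same MT19937 (Mersenne Twister) generator, so a seeded instance reproduces a simulated catalog exactly on every run (the luminosity parameters below are arbitrary toy values, not the paper's):

```python
import random

# Python's `random` module is itself based on the Mersenne Twister (MT19937),
# so a fixed seed makes the simulated catalog fully reproducible.
rng = random.Random(2009)
point = (rng.random(), rng.random(), rng.random())  # position in the unit cube
abs_mag = rng.gauss(-19.0, 1.5)                     # toy absolute magnitude
```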

In this paper we analyze a fractal model of the real distribution of galaxies parametrized by

the fractal dimension and the parameters of the luminosity function. This model describes the

power-law nature of the observed correlations of the distribution of galaxies in real catalogs.

4.1. Spatial Distribution of Galaxies

We use three models of the spatial distribution of galaxies.

Uniform distribution. The coordinates of each point of the set are generated as three random

numbers uniformly distributed in the [0,1] interval (and hence the entire set is contained

in the [0,1] × [0,1] × [0,1] cube).

Cantor dust (more precisely, its generalization to the three-dimensional case). The zero gen-

eration of this set coincides with the [0,1] × [0,1] × [0,1] cube. Each edge of the cube

is then subdivided into m equal parts, i.e., the entire cube is subdivided into m³ identical subcubes, and for each such subcube the probability p of its “survival” in the next

generation is defined. The next generation consists of the set of “surviving” subcubes,

and the algorithm is then reiterated for each such subcube. The final set is the limit obtained as the number of the generation becomes infinite: in each generation the edge of the cube becomes shorter by a factor of m and tends to 0 as 1/m^n, i.e., the subcubes contract to points. In the case of a real distribution the process should be terminated at a certain generation n. A point is chosen inside each of the subcubes “surviving” in the last generation. The coordinates of this point are random numbers uniformly distributed along the projections of the edges of the subcube onto the coordinate axes.

The theoretical dimension of such a set is known to be given by the formula

D = log_m(pm³).

In our case we use the given dimension D to compute the probability p = m^{D−3}.
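The generation procedure described above can be sketched as follows (a hypothetical helper of our own, assuming the unit cube and the survival probability p = m^(D−3)):

```python
import random

def cantor_dust(m, D, n_gen, seed=0):
    """3D Cantor dust: split each cube into m^3 subcubes, keep each with
    probability p = m**(D - 3), recurse n_gen times, then drop one uniform
    random point into every surviving subcube of the last generation."""
    rng = random.Random(seed)
    p = m ** (D - 3.0)
    cubes = [(0.0, 0.0, 0.0, 1.0)]  # (x, y, z, edge length)
    for _ in range(n_gen):
        nxt = []
        for x, y, z, a in cubes:
            s = a / m
            for i in range(m):
                for j in range(m):
                    for k in range(m):
                        if rng.random() < p:
                            nxt.append((x + i * s, y + j * s, z + k * s, s))
        cubes = nxt
    return [(x + rng.random() * a, y + rng.random() * a, z + rng.random() * a)
            for x, y, z, a in cubes]
```

For D = 3 every subcube survives (p = 1) and the set degenerates to a regular uniform grid of m^(3n) jittered points, as expected.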

Gaussian random walk and its generalization with the possibility of generating sets of dimension 2 ≲ D ≲ 3. The first point coincides with the coordinate origin (0,0,0). In

the classical case each successive point is obtained from the previous point by adding

to its every coordinate a normally distributed random number with zero mean and unit

variance.
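The classical case can be sketched as follows (a toy illustration; the proposed branching generalization with probability w is not reproduced here):

```python
import random

def gaussian_walk(n_points, seed=0):
    """Classical 3D Gaussian random walk: each new point adds a standard
    normal step to every coordinate of the previous point, starting from
    the coordinate origin (0, 0, 0)."""
    rng = random.Random(seed)
    pts = [(0.0, 0.0, 0.0)]
    for _ in range(n_points - 1):
        x, y, z = pts[-1]
        pts.append((x + rng.gauss(0.0, 1.0),
                    y + rng.gauss(0.0, 1.0),
                    z + rng.gauss(0.0, 1.0)))
    return pts
```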

The generalization that we propose here for the first time consists of the following: at

each stage we generate two points instead of one with a certain probability w. A more

accurate description of the algorithm uses the term “generation”. The zero generation

coincides with the coordinate origin (0,0,0). Every next generation is obtained from the

previous generation in accordance with the following rule: for each point of the previous

generation one or two points of the new generation are generated, like in the classical

case, by adding normally distributed random numbers to the coordinates of the previous
