Article

A model for income distribution

... Broadly speaking, economists have followed two distinct methodologies, which we might call the probabilistic approach and the deterministic approach. The probabilistic approach has a rich tradition in economics that studies the distributional properties of income in order to infer the (stochastic) generating mechanisms without necessarily invoking the standard economic doctrines of general equilibrium theory (see for example [9,26,29]). The deterministic approach aims to explain variation in income without abandoning core economic tenets and argues against a purely probabilistic approach because of its perceived lack of economic content (see [27] for a more in-depth discussion). ...
... The obvious consequence is that the distribution of incomes from scalable jobs is a power law. The examples given in [50] rest on what economists call network externalities (the "superstar" effect) or the kind of hierarchies described by [9,8], or some combination of the two. Economists have been describing the rise of the "supermanager" and discussing the role of the closed circuit of corporate boards (e.g. ...
... The maximum entropy distribution produces the least biased estimate (or prediction) of the system and is equivalent to our state of knowledge about the system. The difficult part of applying the principle of maximum entropy to social and economic systems is to find the constraints that parsimoniously express the theory relevant to the problem. [41,42] have demonstrated the usefulness of this approach for modeling the distribution of the return on assets, but research remains to be done on the identification of the relevant constraints giving rise to the distribution of income. ...
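Since the snippet above leans on the maximum entropy principle, a minimal numerical illustration may help (not from the cited papers; the mean value and the comparison distributions are assumptions): among distributions on the positive half-line with the same mean, the exponential attains the largest differential entropy, which is why a single "budget" constraint alone yields an exponential income distribution.

```python
# Minimal sketch: entropy comparison at a fixed mean (values are illustrative assumptions).
import numpy as np
from scipy import stats

mean = 10.0
candidates = {
    "exponential": stats.expon(scale=mean),
    "gamma(k=2)":  stats.gamma(a=2.0, scale=mean / 2.0),
    "lognormal":   stats.lognorm(s=0.8, scale=mean * np.exp(-0.8**2 / 2)),
}
for name, dist in candidates.items():
    # all three have mean 10; the exponential shows the largest differential entropy
    print(f"{name:12s} mean={float(dist.mean()):5.2f}  entropy={float(dist.entropy()):.3f}")
```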
Article
Full-text available
We offer a brief review of the use of distributional mixture models with a finite number of components for the study of the distribution of income. In general, finite mixture models find a number of applications across fields, but they usually arise from theoretical considerations. Application to the distribution of income presents a joint inference about the number and types of components to include in a mixture, corresponding to how the statistical signatures of different income-generating mechanisms are represented in the observed data. Many of the contributions in this area rest on an implicit (and sometimes explicit) information-theoretic approach to this inference problem. Our review concludes with new illustrative findings from the US based on restricted-access Census data.
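As a toy illustration of the finite-mixture inference described above (synthetic data, not the restricted-access Census analysis; component parameters, weights and sample sizes are assumptions), a two-component lognormal mixture can be fitted by running a Gaussian mixture (EM) on log incomes:

```python
# Minimal sketch: EM fit of a two-component lognormal income mixture on synthetic data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
incomes = np.concatenate([
    rng.lognormal(mean=10.0, sigma=0.5, size=8_000),   # assumed "wage-like" component
    rng.lognormal(mean=11.5, sigma=0.9, size=2_000),   # assumed "property-income" component
])

gm = GaussianMixture(n_components=2, random_state=0).fit(np.log(incomes).reshape(-1, 1))
print("weights  :", np.round(gm.weights_, 2))                       # roughly 0.8 / 0.2
print("log-means:", np.round(gm.means_.ravel(), 2))                 # near 10.0 and 11.5
print("log-sds  :", np.round(np.sqrt(gm.covariances_.ravel()), 2))  # near 0.5 and 0.9
```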
... Our methodology is applicable to a wide range of settings and we employ it to examine labor income dynamics. This is a well studied topic, starting with Champernowne (1953), Hart (1976), Shorrocks (1976) and Lillard and Willis (1978), among others. We apply our model to the Panel Study of Income Dynamics (PSID) data to perform experiments corresponding to various counterfactual analyses. ...
... Our analysis of income mobility and persistence relies on a representation of the model as a discrete Markov chain when labor income is treated as discrete. Champernowne (1953) and Shorrocks (1976) previously used Markov chain representations of the labor income process to analyze the same issue. We allow unrestricted heterogeneity across workers by estimating a separate Markov chain for each worker. Finally, Hirano (2002) and Gu and Koenker (2017) estimated autoregressive labor income processes using flexible semiparametric Bayesian methods. ...
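A minimal sketch of the Markov-chain view of income mobility invoked above (three assumed income states and a made-up transition matrix, not the PSID estimates): estimate the transition matrix from a simulated earnings history and recover its stationary distribution.

```python
# Minimal sketch: discrete-state income mobility as a Markov chain (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
P_true = np.array([[0.70, 0.25, 0.05],   # low -> low/middle/high
                   [0.20, 0.60, 0.20],   # middle -> ...
                   [0.05, 0.25, 0.70]])  # high -> ...

# simulate one long earnings history over the three income states
states = [0]
for _ in range(50_000):
    states.append(rng.choice(3, p=P_true[states[-1]]))
states = np.array(states)

# estimate transition probabilities by counting observed moves
counts = np.zeros((3, 3))
np.add.at(counts, (states[:-1], states[1:]), 1)
P_hat = counts / counts.sum(axis=1, keepdims=True)

# stationary distribution: left eigenvector of P_hat for eigenvalue 1
vals, vecs = np.linalg.eig(P_hat.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
print(np.round(P_hat, 3))
print("stationary distribution:", np.round(pi, 3))
```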
Preprint
Full-text available
We consider estimation of a dynamic distribution regression panel data model with heterogeneous coefficients across units. The objects of interest are functionals of these coefficients including linear projections on unit level covariates. We also consider predicted actual and stationary distributions of the outcome variable. We investigate how changes in initial conditions or covariate values affect these objects. Coefficients and their functionals are estimated via fixed effect methods, which are debiased to deal with the incidental parameter problem. We propose a cross-sectional bootstrap method for uniformly valid inference on function-valued objects. This avoids coefficient re-estimation and is shown to be consistent for a large class of data generating processes. We employ PSID annual labor income data to illustrate various important empirical issues we can address. We first predict the impact of a reduction in income on future income via hypothetical tax policies. Second, we examine the impact on the distribution of labor income from increasing the education level of a chosen group of workers. Finally, we demonstrate the existence of heterogeneity in income mobility, which leads to substantial variation in individuals' incidences to be trapped in poverty. We also provide simulation evidence confirming that our procedures work.
... As is well-known, Pareto observed that the distribution of the number of individuals with income above a specified level of wealth presented a strange regularity. In 1953, Champernowne [15] proposed a model according to which a taxpayer's annual income was a function of the preceding year's income and a stochastic factor. Gibrat [16] used data on company size as a variable to study the proportional effects leading to lognormal distributions. ...
... In the proximity of these abscissa points, the slope of the accumulated curve changes quickly. The curve that fits the values observed in the range [10,13] does not fit the range [15,20], and the curve that fits this range does not fit the range of big fortunes. The observed curve could be considered as the "envelope" of the curves of different subpopulations. ...
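The two generating mechanisms mentioned above can be contrasted in a short simulation (all parameter values are illustrative assumptions): pure proportional (Gibrat-type) growth yields an ever-spreading lognormal distribution, while the same multiplicative shocks combined with a reflecting lower bound, as in Champernowne's model, settle into a Pareto-like upper tail.

```python
# Minimal sketch: Gibrat (lognormal) vs. Champernowne-style reflected growth (Pareto tail).
import numpy as np

rng = np.random.default_rng(7)
n, T = 20_000, 400
mu, sigma, lower_bound = -0.02, 0.15, 1.0     # assumed log-growth drift, volatility, floor

gibrat = np.ones(n)
reflected = np.ones(n)
for _ in range(T):
    g = np.exp(rng.normal(mu, sigma, n))
    gibrat *= g                                         # pure proportional growth
    reflected = np.maximum(reflected * g, lower_bound)  # growth with a reflecting floor

def tail_exponent(x, k=500):
    top = np.sort(x)[-k:]                               # top k observations
    slope, _ = np.polyfit(np.log(top), np.log(np.arange(k, 0, -1)), 1)
    return -slope                                       # log-log rank-size slope

print("Gibrat: std of log income:", round(float(np.log(gibrat).std()), 2))        # ~ sigma*sqrt(T)
print("Reflected: tail exponent :", round(float(tail_exponent(reflected)), 2))    # ~ 2*|mu|/sigma^2
```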
Article
Full-text available
In this research, we used Spanish wealth distribution microdata for the period 2015–2020 to provide a general framework for comparing different models and explaining different empirical datasets related to wealth distribution. We present a methodology to output the current value of assets and participations held by the population in order to calculate their real and current distribution. We propose a new methodology for mixture analysis, whereby we identify and analyze subpopulations and then go on to study their influence on wealth distribution. We use concepts of symmetry to identify two internal processes that are characteristic of the wealth accumulation process for the subpopulations of entrepreneurs and non-entrepreneurs. Finally, we propose a method to adjust these results to other empirical data in other countries and periods, providing a methodology for comparing results output with differing data granularity.
... where y and x are quantifiable traits, the parameter β is nominated as the allometric exponent and α is recognized as the normalization constant. This model, also known as the equation of simple allometry, has been widely used in research problems in many fields, including biology [25,35,36], biomedical sciences [37][38][39][40][41], economics [42][43][44][45][46], earth and planetary sciences [47][48][49][50][51], and resource management and conservation [52][53][54][55][56]. The interest in this model lies essentially in its practical utility to produce surrogates of a response y that is difficult to measure directly, by using estimates of the parameters α and β and easily obtained measurements of a covariate x. ...
... (cf. Equation (42) through Equation (44)). ...
Article
Full-text available
(1) Background: We previously demonstrated that customary regression protocols for curvature in geometrical space all derive from a generalized model of complex allometry combining scaling parameters expressed as continuous functions of the covariate. Results highlighted the relevance of addressing suitable complexity in enhancing the accuracy of allometric surrogates of plant biomass units. Nevertheless, examination was circumscribed to particular characterizations of the generalized model. Here we address the general identification problem. (2) Methods: We first suggest a log-scales protocol composing a mixture of linear models weighted by exponential powers. Alternatively, adopting an operating regime-based modeling slant we offer mixture regression or Takagi–Sugeno–Kang arrangements. This last approach allows polyphasic identification in direct scales. A derived index measures the extent to which complexity in geometrical space drives curvature in arithmetical space. (3) Results: Fits on real and simulated data produced proxies of outstanding reproducibility strength irrespective of data scales. (4) Conclusions: The presented analytical constructs are expected to grant efficient allometric projection of plant biomass units, and also to serve the general settings of allometric examination. A traditional perspective deems log-transformation and allometry inseparable. Recent views assert that this leads to biased results. The present examination suggests this controversy can be resolved by adequately addressing the complexity of geometrical space protocols.
... This paper is related to the large number of papers that propose statistical explanations of Zipf's Law. Including most notably Champernowne (1953), Kalecki (1945), Levy and Solomon (1996), Malcai, Biham and Solomon (1999), Gabaix (1999a), Blank and Solomon (2000) and Cordoba (2004), these papers have focused on the role of Gibrat's Law of city growth in producing a size distribution of cities that satisfies Zipf's Law for an economy in which there is a fixed number of cities. Much attention has also been devoted to the role of various frictions, or lower bounds, on city sizes in ensuring a thick lower tail of the distribution of city sizes. ...
... These property developers serve to internalize the local production externality, and guarantee that market outcomes are efficient. 5 Throughout, we assume log-linear preferences and Cobb-Douglas production functions so that we can solve for the growth path of cities in closed form. The advantage of closed-form solutions is that it allows us to make analytical statements about the evolution of individual cities, as well as about the long-run size distribution of cities. ...
... Domar and Musgrave (1944), and later Stiglitz (1969) and Cowell (1975) show that the impact of taxation on risk-taking, and therefore on inequality, is far from a simple negative correlation. On intergenerational income distribution under Markov process, Champernowne (1953) shows that if each one of a population of identical agents bore an independent idiosyncratic risk proportional to its wealth, then in the long run income would approximate a Pareto distribution. Banerjee and Newman (1991) prove that in the incomplete credit market, regardless of how wealth is distributed, its distribution over time will be ergodic, meaning that a lineage will experience all levels of wealth in the interim: the descendants of the rich could eventually become poor, and vice versa. ...
... It has been observed widely that the top income class follows a Pareto distribution (Mandelbrot, 1960;Dragulescu and Yakovenko, 2001;Nirei and Souma, 2007;Atkinson et al., 2011;Tao et al., 2019). A large body of literature has attributed the cause of the Pareto distribution to the Matthew effect of income accumulation (Champernowne, 1953;Wold and Whittle, 1957;Dutta and Michel, 1998;Lux and Marchesi, 1999;Reed, 2001;Nirei and Souma, 2007;Benhabib et al., 2011;Malevergne et al., 2013). However, the singular focus on the top income class of households overlooks the component of earnings' inequality that is arguably most consequential for the low and middle income classes of citizens (Autor, 2014). ...
Preprint
We show that an exponential income distribution will emerge spontaneously in a peer-to-peer economic network that shares the publicly available technology. Based on this finding, we identify the exponential income distribution as the benchmark structure of the well-functioning market economy. However, a real market economy may deviate from the well-functioning market economy. We show that the deviation is partly reflected in the invalidity of the exponential distribution in describing the super-low income class that involves unemployment. In this regard, we find, theoretically, that the lower bound u of the exponential income distribution has a linear relationship with (per capita) unemployment compensation. In this paper, we test this relationship for the United Kingdom from 2001 to 2015. Our empirical investigation confirms that the income structure of the low and middle classes (about 90% of the population) in the United Kingdom exactly obeys an exponential distribution, in which the lower bound u is exactly in line with the evolution of unemployment compensation.
... Stochastic models with multiplicative noise applied to income and wealth dynamics have a long history in economics, with an early major publication in 1953 by Champernowne [14], and since then have been applied extensively and are summarised in several reviews, see for example [20,6]. These models have been used as they exhibit power-law tails, which is a key feature of both income and wealth distributions. ...
Preprint
We study the wealth distribution of UK households through a detailed analysis of data from wealth surveys and rich lists, and propose a non-linear Kesten process to model the dynamics of household wealth. The main features of our model are that we focus on wealth growth and disregard exchange, and that the rate of return on wealth is increasing with wealth. The linear case with wealth-independent return rate has been well studied, leading to a log-normal wealth distribution in the long time limit which is essentially independent of initial conditions. We find through theoretical analysis and simulations that the non-linearity in our model leads to more realistic power-law tails, and can explain an apparent two-tailed structure in the empirical wealth distribution of the UK and other countries. Other realistic features of our model include an increase in inequality over time, and a stronger dependence on initial conditions compared to linear models.
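For orientation, a minimal sketch of the linear Kesten process w_{t+1} = a_t * w_t + b_t is given below (the paper above studies a non-linear, wealth-dependent variant; all parameter values are illustrative assumptions). Multiplicative shocks with E[log a] < 0 plus an additive inflow produce a stationary wealth distribution with a power-law upper tail.

```python
# Minimal sketch: linear Kesten process for household wealth (synthetic parameters).
import numpy as np

rng = np.random.default_rng(5)
n_households, T = 20_000, 2_000
w = np.ones(n_households)
for _ in range(T):
    a = np.exp(rng.normal(-0.02, 0.15, n_households))  # random gross returns, E[log a] < 0
    b = rng.exponential(0.05, n_households)            # additive savings / labour income
    w = a * w + b

# crude tail check: slope of the log-log rank-size plot over the top 1%
top = np.sort(w)[-200:]
ranks = np.arange(200, 0, -1)
slope, _ = np.polyfit(np.log(top), np.log(ranks), 1)
print(f"estimated tail exponent ≈ {-slope:.2f}")       # near the root of E[a**k] = 1 (≈ 1.8 here)
```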
... Beyond the Pareto family proper, other distribution functions have top tails decaying at a similar rate as the Pareto. Among these related distributions, the Champernowne (1937, 1952, 1953, 1973) family has been accepted most widely for modeling incomes, particularly the 5-parameter Champernowne 1 and the 4-parameter Champernowne 2 (Ord, 1975). ...
Article
Full-text available
Empirical distributions of top incomes suffer from statistical problems affecting the measurement of inequality and its trend. Researchers and practitioners have been increasingly noting parametric regularities across income distributions and turning to parametric functions to approximate or supplement the observed distributions, both for descriptive purposes and for correcting distributional statistics derived from data. The proliferation of distinct branches of modeling literature has highlighted the need to compare the alternative modeling options, and develop systematic tools to discriminate between them. This paper reviews the state of methodological and empirical knowledge regarding the adoptable parametric functions, and lists references and statistical programs allowing practitioners to apply these models to microdata in household surveys and administrative registers, or grouped‐records data from national accounts statistics. Implications for modeling the distributions of other economic outcomes including consumption and wealth, and incomes below the topmost tail, are drawn. For incomes, a handful of distribution functions hold promise for modeling the top tails based on theoretical and empirical properties—namely the extreme values distributions, the generalized Pareto, the Singh–Maddala and the generalized beta type 2. Understanding these functions in relation to other commonly invoked alternatives is a contribution of this review.
... Thus, not only does this simple model offer a plausible explanation of the Pareto Law of Incomes (upper tail), it also predicts power-law behaviour in the lower tail. In fact, lower-tail power-law behaviour has been identified before (Champernowne, 1953). Furthermore, Reed (2001) gave other examples, outside of economics, for which a similar explanation might hold such as the body-size distribution of animal species (May, 1988). ...
Article
Different skew models, such as the lognormal and the Pareto functions, have been proposed as suitable descriptions of income distribution. Specific distributions are usually applied in empirical investigations. It is a common opinion that the Pareto curve often provides an adequate description of higher incomes. Recently, double Pareto distributions that obey the power law in both the upper and lower tails have been suggested to reflect a general distribution of personal income. In this study, the literature concerning double Pareto models is presented and the model is applied to Finnish income data. JEL classification numbers: I32. Keywords: Maximum likelihood estimate, Method of moments, Bayesian method, Mean Squared Error, Lognormal, double Pareto, Coefficient of determination, survival function, Geometric Brownian motion.
... The afore given model also identifies as the equation of simple allometry. It is of widespread use in research problems in biology (Huxley 1932;Lu et al. 2016;Savage et al. 2004;Myhrvold 2016;West and Brown 2005), biomedical sciences (Mayhew 2009;Paul et al. 2014;Moore et al. 2011;Eleveld et al. 2017;Kwak et al. 2016) economics (Champernowne 1953;Samaniego and Moses 2008;Wang et al. 2014;Coccia 2018;William 1979), earth and planetary sciences (Neukum et al. 1994;Maritan et al. 2002;Liu et al. 2018;Wolinsky et al. 2010;Bull 1975;Newman 2007;Naschie 2004;Ji-Huan and Jun-Fang 2009;Dreyer 2001;Pouliquen 1999) resource management and conservation (Zeng and Tang 2011a;De Robertis and Williams 2008;Rodríguez et al. 2017;Ofstad et al. 2016;Sutherland et al. 2000;Echavarria-Heras et al. 2018;Solana-Arellano et al. 2014;Montesinos-López et al. 2018), among other fields. A prevalent device to obtain estimates of the parameters α and β relies on the logarithmic transformation of the original data to convey a linear regression model in log-scales. ...
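A minimal sketch of the log-transformation route described in the snippet above (synthetic data; the true parameter values and noise level are assumptions): regress log y on log x and read the allometric exponent off the slope.

```python
# Minimal sketch: estimating y = alpha * x**beta by OLS on log-transformed data.
import numpy as np

rng = np.random.default_rng(1)
alpha_true, beta_true = 0.8, 0.75
x = rng.uniform(1.0, 100.0, size=300)
y = alpha_true * x**beta_true * np.exp(rng.normal(0.0, 0.1, size=300))  # multiplicative error

beta_hat, log_alpha_hat = np.polyfit(np.log(x), np.log(y), deg=1)
print(f"beta ≈ {beta_hat:.3f} (true {beta_true}), alpha ≈ {np.exp(log_alpha_hat):.3f} (true {alpha_true})")
```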
Chapter
Examining sigmoidal allometries in geometrical space can be carried out by direct nonlinear regression or generalized additive modeling approaches. Nevertheless, producing consistent estimates of the breakpoints characterizing the phases composing sigmoidal heterogeneity could be problematic. Here, we explain how the paradigm of weighted multiple-phase allometries embraced by the mixture structure of the total output of a first-order Takagi–Sugeno–Kang fuzzy model can carry out this task in a direct, intuitive and efficient way. The present calibration tasks relied on log-transformed amniote testes mass allometry data. The considered TSK fuzzy model approach not only offers a way to back the assumption that the analyzed testes mass allometry is sigmoidal in geometrical space but, beyond this, it provides meaningful estimates of the transitions among the involved phases. Results confirm previously raised views on the superior capabilities of the addressed fuzzy approach for validating prior subjective knowledge in allometry.
... Proportional random growth is deemed as a central mechanism for explaining power-law distributions. Its origin dates to the work of Yule (1925) and later Simon (1955) as well as Champernowne (1953). Random growth has primarily been investigated in the physics and economics literature (Gabaix, 2009). ...
Thesis
Full-text available
This thesis sets out to explore the nature of large individual outliers in terms of fame, success, or recognition. These outliers, colloquially referred to as “superstars”, differ from the general population in that they are able to capture the vast majority of outcomes in a given scenario. These outcomes can be anything from money, through citations in scientific papers, to public recognition (or “fame”). Although there are many theories and assumptions about what causes stardom, in addition to many intuitively logical explanations, little is in fact known from a scientific perspective. Data from many different scientific fields and disciplines suggest that there is no clear-cut relationship between various measures of inputs (e.g. talent, skill, performance) and outcomes (e.g. fame, recognition, success). Research specifically pertinent to “performance, success, and stardom” is not at all conclusive about the relationship between factors of performance and factors of success. Fame and success appear to be virtually random and might be completely unpredictable. Consequently, our current understanding of the dynamics at play and our research methods appear inadequate. Here, network theory provides a novel approach to gain insights into how these vague concepts are intertwined. The phenomenon of superstars (i.e. large individual outliers) appears not to be an exception. In fact, skewed distributions (e.g. a “superstar” that is able to capture most of the public attention in a specific domain) might even be a fundamental part of the reality we live in. Power-law distributions describe systems in which very few individual outliers account for almost all of the outcomes. These distributions are found everywhere from natural to purely manmade systems and appear to permeate our universe. There is much speculation about the mechanisms that might cause these universal distributions. Some of the most widely acknowledged ones are “cumulative advantage”, “preferential attachment”, and “self-organized criticality”. This thesis provides a broad overview of past and contemporary scientific findings about the phenomenon of superstars and its root causes. Discoveries from various different scientific fields converge towards a few key insights.
... Mitzenmacher (2004) provides a comprehensive review of the various explanations that have been given over the years for the apparent prevalence of power-law (and lognormal) distributions in empirical data. He identifies three families of generative models for power-law distributions, each of which received particular attention in the 1950s before their later rediscovery: preferential attachment models (see, e.g., Simon, 1955), optimization models (see, e.g., Mandelbrot, 1953), and multiplicative process models (see, e.g., Champernowne, 1953). Almost half a century before Mitzenmacher's review, Herdan disputes the assumption that large numbers of heavy-tailed distributions can be explained by the same model: " Simon's claim [in Simon, 1955] to have provided a uniform mathematical explanation of these distributions rests upon an insufficient realization of the differences in form between the distributions, and suffers from a neglect of considering the relations between some of them which makes it highly unlikely, if not mathematically impossible, that one mathematical model should fit them all" (Herdan, 1960, p. 207). ...
... One tradition studies the parametric statistical distributions of economic outcomes explained as ergodic solutions to stochastic processes. This tradition has a long history in economics dating back to Gibrat (1931), Kalecki (1945), Champernowne (1953), and Simon (1955) who looked to formalize the empirical findings of Pareto (1987a, 1987b) on the distribution of personal income and wealth and has experienced a resurgence in political economy (Cottrell et al., 2009;Alfarano et al., 2012;Shaikh et al., 2014;Shaikh, 2020) with the development of econophysics (Mantegna and Stanley, 1999;Yakovenko, 2007;Rosser Jr., 2008b;McCauley, 2009;Lux, 2016). The econophysics approach to political economy argues in terms of well-specified micro-kinetic dynamics punctuated by stochastic variation in order to derive the statistical equilibrium of the system as the limiting ergodic distribution. ...
Article
Economic systems produce robust statistical patterns in key state variables including prices and incomes. Statistical equilibrium methods explain the distributional properties of state variables as arising from specific institutional, environmental, and behavioral postulates. Two broad traditions have developed in political economy with the complementary aim of conceptualizing economic processes as irreducibly statistical phenomena but differ in their methodologies and interpretations of statistical explanation. These conceptual differences broadly mirror the methodological divisions in statistical mechanics, but also emerge in distinct ways when considered in the context of social sciences. This paper surveys the use of statistical equilibrium methods in analytical political economy and identifies the leading methodological and philosophical questions in this growing field of research.
... If individuals' income grew randomly over time, it tended to create a skewed distribution of income. This process became known as a 'stochastic model' of income (For early stochastic models, see Ref. [24][25][26]. For more recent work, see Ref. [27][28][29]). ...
Article
Full-text available
This paper investigates a new approach to understanding personal and functional income distribution. I propose that hierarchical power—the command of subordinates in a hierarchy—is what distinguishes the rich from the poor and capitalists from workers. Specifically, I hypothesize that individual income increases with hierarchical power, as does the share of individual income earned from capitalist sources. I test this idea using evidence from US CEOs, as well as a numerical model that extrapolates the CEO data. The results indicate that income tends to increase with hierarchical power, as does the capitalist composition of income. This suggests that hierarchical power may be a determinant of both personal and functional income.
... We present a purely statistical model of the evolution of the distribution of wealth similar to that in Champernowne (1953); Luttmer (2016); Gabaix, Lasry, Lions, and Moll (2016); and Benhabib, Bisin, and Luo (2017) that adds consideration of the role of family firms in shaping the distribution of wealth. We find that when we feed into the model a time series for the volatility of idiosyncratic shocks to firm value similar to that observed by Herskovic et al. (2016) for public firms over the past 100 years, our model generates a time path for top wealth shares over this time period similar to that found by Saez and Zucman (2016) and Gomez (2019) (see Figure 8). ...
... Exactly 100 years ago, Hugh Dalton published his classic paper in the Economic Journal (Dalton 1920) which formalized axioms of income inequality measurement to address the disparate measures used by the growing number of empirical studies on income inequality. The dynamics of inequality was also under study in the interwar period, with Gibrat's (1931) volume Les Inégalités économiques, and Champernowne's celebrated work at this time on his fellowship thesis, which was not published till after the war (Champernowne 1953). ...
Article
This paper is an introduction to a special issue of the Journal of Economic Inequality which contains a selection of articles published in the Journal which bring economic perspectives and methods to bear on dimensions of inequality highlighted in the Sustainable Development Goals. The papers show that the study of economic inequality has much to contribute to the global policy discourse which is underpinned by the SDGs.
... The formal mathematical influence of Newton's system in economics, however, remained shelved for a century until its awkward appearance in the marginalist school of thought [46]. While stochastic models, and concepts of scalability and self-organization appear scattered throughout the mid-twentieth century [26, 6,36,50,62,39], modern econophysics [40] really developed in the last decade of the century and is defined by "the activities of physicists who are working on economics problems to test a variety of new conceptual approaches deriving from the physical sciences." [40]. ...
Article
Full-text available
A coherent statistical methodology is necessary for analyzing and understanding complex economic systems characterized by large degrees of freedom with non-trivial patterns of interaction and aggregation across individual components. Such a methodology was arguably present in Classical Political Economy, but was abandoned in the late nineteenth century with a theoretical turn towards a purely mechanical approach to understanding social and economic phenomena. Recent advances in economic theory that draw from information theory and statistical mechanics offer a compelling, statistically based approach to understanding economic systems based on a general principle of maximum entropy for doing inference. We offer a brief overview of what we consider the state of maximum entropy reasoning in economic research.
... One tradition studies the parametric statistical distributions of economic outcomes explained as ergodic solutions to stochastic processes. This tradition has a long history in economics dating back to Gibrat (1931), Kalecki (1945), Champernowne (1953) and Simon (1955) who looked to formalize the empirical findings of Pareto (1987a, 1987b) on the distribution of personal income and wealth and has experienced a resurgence in political economy (Alfarano, Milaković, Irle, & Kauschke, 2012;Cottrell, Cockshott, Michaelson, Wright, & Yakovenko, 2009;Shaikh, 2020;Shaikh, Papanikolaou, & Wiener, 2014) with the development of econophysics (Lux, 2016;Mantegna & Stanley, 1999;McCauley, 2009;Rosser Jr., 2008b;Yakovenko, 2007). The econophysics approach to political economy argues in terms of well-specified micro-kinetic dynamics punctuated by stochastic variation in order to derive the statistical equilibrium of the system as the limiting ergodic distribution. ...
Preprint
Full-text available
Economic systems produce robust statistical patterns in key state variables including prices and incomes. Statistical equilibrium methods explain the distributional properties of state variables as arising from specific institutional and behavioral postulates. Two traditions have developed in political economy with the complementary aim of conceptualizing economic processes as irreducibly statistical phenomena, but differ in their methodologies and interpretations of statistical explanation. These conceptual differences broadly mirror the methodological divisions in statistical mechanics, but also emerge in distinct ways when considered in the context of social sciences. This paper surveys the use of statistical equilibrium methods in analytical political economy and identifies the leading methodological and philosophical questions in this growing field of research.
... The mechanism leading to a Pareto distribution for incomes dates back to Champernowne (1953). It relies on the existence of a lower bound h. ...
Article
Full-text available
OECD countries have experienced a large increase in top wage inequality. Atkinson (2008) attributes this phenomenon to the superstar theory leading to a Pareto tail in the wage distribution with a low Pareto coefficient. Do we observe a similar phenomenon for academic wages? We examine wage formation in a public US university using, for each academic rank, a hybrid mixture formed by a lognormal distribution for regular wages and a Pareto distribution for top wages, within a Bayesian approach. The presence of superstar wages would imply a higher dispersion in the Pareto tail than in the lognormal body. We conclude that academic wages are formed in a different way than other top wages. There is an effort to propose competitive wages to some young Assistant Professors. But when climbing up the wage ladder, we find a phenomenon of wage compression, which is just the opposite of a superstar phenomenon.
... Piketty's "Capital in the XXIst Century" is a worldwide bestseller [1], maybe soon joined by his second book, "Capital & Ideology" [2]. Statistical models of income and wealth dynamics have a long history, starting with Champernowne [3] and Angle [4], with a particular upsurge in the "Econophysics" literature since 2000 - for recent reviews see e.g. [5][6][7] and, for economics papers, [8,9]. ...
Preprint
We propose a highly schematic economic model in which, in some cases, wage inequalities lead to higher overall social welfare. This is due to the fact that high earners can consume low-productivity, non-essential products, which allows everybody to remain employed even when the productivity of essential goods is high and producing them does not require everybody to work. We derive a relation between heterogeneities in technologies and the minimum Gini coefficient required to maximize global welfare. Stronger inequalities appear to be economically unjustified. Our model may shed light on the role of non-essential goods in the economy, a topical issue when thinking about the post-Covid-19 world.
... Alternatively it arises as the monkey-typing process of Mandelbrot and Miller [14], which can be recast in terms of particle accumulation. It is also an invariant distribution for the Markov chain on the collections {n_1, ..., n_k} that moves n_j to n_j + 1 or n_j - 1 (the latter only when n_j ≠ 0) with given probabilities r_j+ and r_j-, in which case q_j = r_j+/r_j- (used already in [4], see also [9]). Yet another way arises from packing k energy levels at random with indistinguishable particles, each jth level having a given number L_j of states, so that given numbers N_j of particles go to the L_j states of the jth level (with all possible distributions equally probable). ...
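A quick numerical check of the invariant distribution claimed above, restricted to a single coordinate for brevity (truncation level and transition probabilities are illustrative assumptions): a birth-death chain moving n to n+1 with probability r+ and to n-1 with probability r- (only when n > 0) is stationary under the geometric distribution pi(n) proportional to q^n with q = r+/r-.

```python
# Minimal sketch: geometric invariant distribution of a truncated birth-death chain.
import numpy as np

r_plus, r_minus, N = 0.3, 0.5, 60       # assumed rates; q = r_plus / r_minus = 0.6
q = r_plus / r_minus

P = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    if n < N:
        P[n, n + 1] = r_plus            # birth: n -> n + 1
    if n > 0:
        P[n, n - 1] = r_minus           # death: n -> n - 1 (only when n > 0)
    P[n, n] = 1.0 - P[n].sum()          # remaining mass keeps rows stochastic

pi = q ** np.arange(N + 1)
pi /= pi.sum()                          # geometric distribution with ratio q
print(np.max(np.abs(pi @ P - pi)))      # ~1e-16: pi is indeed stationary
```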
Preprint
Full-text available
Combining intuitive probabilistic assumptions with the basic laws of classical thermodynamics, using the latter to express probabilistic parameters in terms of the thermodynamic quantities, we get a simple unified derivation of the fundamental ensembles of statistical physics avoiding any limiting procedures, quantum hypothesis and even statistical entropy maximization. This point of view leads also to some related classes of correlated particle statistics.
... These findings led some authors (Champernowne, 1953;Simon, 1955) to relate Zipf's and Gibrat's Laws. They found a clear relationship between cities' growth rates and the Pareto distribution, which arises in a natural way provided the time series of population growth follows Gibrat's Law. ...
Chapter
By using census data from 1835 to 2005, this chapter studies the urban hierarchy in Colombia and its regions. The chapter focuses on three issues: firstly, the city size distribution by means of Zipf’s law and Gibrat’s law; secondly, evolution in the population growth models; and, thirdly, the empirical validation of the point made by Gabaix (Q J Econ 114(3):739–767, 1999b) on the coincidence between national and regional population patterns. Using the adjusted rank–size relationship and non-parametric techniques, we find that city size distributions follow Zipf’s power law, and also that Gibrat’s law holds at the national level and partially for the regions over the second half of the twentieth century. These results are consistent with changes in the population growth model from the mid-fifties at national and regional levels.
... In the earlier studies [Champernowne 1953;Gibrat 1931], the dynamics of income are described as a stochastic process, and this process determines the resulting probability distributions. This early type of modeling is called the «one-agent» approach, since the deviation of income is considered independent for each economic agent. ...
Article
Full-text available
This article is devoted to the agent-based approach to the construction of income distribution models, in which economic systems are designed as an aggregate of autonomous interacting agents. The considered models open a new way of analyzing the income distribution, which manifests itself in the interaction between a large number of economic agents and makes it possible to obtain the desired stationary distributions.
... Given this, the general welfare of society increases in greater proportion when the utility of the least favored group is raised (Pigou, 1932). Hence, the social welfare function plays a key role in determining the optimal income distribution, whose initial state does not necessarily constitute a Pareto optimum (Champernowne, 1953;Thurow, 1971). ...
Article
Full-text available
Four measurements of per-capita education expenditure are used to analyze its effect on income inequality at the county-level in Chile. The latter is quantified through the Gini coefficient, Theil index, and the S80/S20 and S90/S10 ratios. The longitudinal study considers 316 Chilean counties in order to estimate a fractional response probit model for the Gini coefficient and random-effects Tobit models for the remaining indicators of inequality. Main findings show that an increasing per-capita education expenditure would reduce the income inequality measured by Theil index, but it would accentuate the gap between the poorest and wealthiest decile/quantile. And a greater education expenditure on personnel and teaching staff would enlarge the Gini coefficient and S90/S10 ratio. In addition, income inequality at the county-level is exacerbated by a larger indigenous and rural population. Finally, the evidence reveals that a convex Kuznets-curve exists for the extreme values of income distribution.
... Already at the beginning of the twentieth century, pointing to wealth concentration in the economy and society, economist and sociologist V. Pareto suggested the so-called Pareto wealth distribution as an empirical regularity (Pareto 1897), while economic statistician C. Gini developed ingenious statistical techniques to represent wealth inequality through the so-called Gini Index (Gini 1912). In this context, a stream of the relevant literature draws upon Champernowne (1953) and Rutherford (1955) to develop an elegant formal modelling strategy that explains wealth concentration and the Pareto wealth distribution under conditions of financial market efficiency, involving stochastic distribution of financial returns across individuals investing in that market (Levy 2005;Levy and Levy 2003). This modelling strategy considers financial investment as a multiplicative process closely related to a Kesten process (Kesten 1973;Redner 1990). ...
Article
Full-text available
Wealth inequality is an important matter for economic theory and policy. The recent rise in wealth inequality has been discussed in connection with the recent development of active global financial markets. The existing literature on wealth distribution links wealth inequality to a variety of drivers. Our approach develops a minimalist modelling strategy that combines three featuring mechanisms: active financial markets, individual wealth accumulation and compound interest structure. We provide mathematical proof that accumulated financial investment returns involve ever-increasing wealth concentration and inequality across individual investors most of the time. This cumulative effect over space and time depends on financial accumulation processes, including under efficient financial markets, which generate a fair investment game that individual investors repeatedly play through time.
... This paper builds on previous studies that have linked random firm-level growth within an industry to Pareto tails in the cross-sectional distribution of firm size. Early examples include Champernowne (1953) and Simon (1955), who showed that Pareto tails in stationary distributions can arise if time series follow Gibrat's law along with a reflecting lower barrier. Since then it has been well understood that Gibrat's law can generate Pareto tails for the firm size distribution in models where firm dynamics are exogenously specified. ...
Preprint
This paper shows that the power law property of the firm size distribution is a robust prediction of the standard entry-exit model of firm dynamics. Only one variation is required: the usual restriction that firm productivity lies below an ad hoc upper bound is dropped. We prove that, after this small modification, the Pareto tail of the distribution is predicted under a wide and empirically plausible class of specifications for firm-level productivity growth. We also provide necessary and sufficient conditions under which the entry-exit model exhibits a unique stationary recursive equilibrium in the setting where firm size is unbounded.
... Therefore, the third category of related literature alternatively applied the microfoundation approach to explain the highly skewed distribution of city sizes and Gibrat's law. The significant works in this field are the studies conducted by Champernowne (1953), Simon (1955), Gabaix (1999) and Cordoba (2003). The last category of related research has integrated the concepts of random growth processes and microfoundations, namely the studies of Henderson (1974) and Rossi-Hansberg and Wright (2007). ...
Conference Paper
Full-text available
This study formulated two sets of bipartite data: (1) the relationship between province and occupation of employee, and (2) the relationship between province and industrial classification of employer. Both bipartite sets were tabulated from raw data obtained from the official Labor Force Survey of 1978, 1988, 1998, 2008 and 2017. Network analysis was applied to both constructed bipartite data sets, enabling the visualization of the cross-province structure of employment and the computation of centrality indices. The computed centrality indices and the visualized network graphs of nationwide employment indicate that Bangkok has been the center of employment since 1978, with a hub-and-spoke topological structure. These findings affirm the evidence of Bangkok's continuously increasing agglomeration force, inducing a mono-centric growth-pole pattern in Thailand.
... Practically all economic and informational variables have been shown since the 1960s to belong to the D2 class, or at least the intermediate subexponential class (which includes the lognormal), [10],[11],[12],[13],[4], along with social variables such as size of cities, words in languages, connections in networks, size of firms, incomes for firms, macroeconomic data, monetary data, victims from interstate conflicts and civil wars [14],[7], operational risk, damage from earthquakes, tsunamis, hurricanes and other natural calamities, income inequality [15], etc. Which leaves us with the more rational question: where are Gaussian variables? ...
Preprint
What do binary (or probabilistic) forecasting abilities have to do with overall performance? We map the difference between (univariate) binary predictions, bets and "beliefs" (expressed as a specific "event" will happen/will not happen) and real-world continuous payoffs (numerical benefits or harm from an event) and show the effect of their conflation and mischaracterization in the decision-science literature. We also examine the differences under thin and fat tails. The effects are: A- Spuriousness of many psychological results, particularly those documenting that humans overestimate tail probabilities and rare events, or that they overreact to fears of market crashes, ecological calamities, etc. Many perceived "biases" are just mischaracterizations by psychologists. There is also a misuse of Hayekian arguments in promoting prediction markets. We quantify such conflations with a metric for "pseudo-overestimation". B- Being a "good forecaster" in binary space doesn't lead to having a good actual performance, and vice versa, especially under nonlinearities. A binary forecasting record is likely to be a reverse indicator under some classes of distributions. Deeper uncertainty or more complicated and realistic probability distributions worsen the conflation. C- Machine Learning: Some nonlinear payoff functions, while not lending themselves to verbalistic expressions and "forecasts", are well captured by ML or expressed in option contracts. D- Fattailedness: The difference is exacerbated in the power law classes of probability distributions.
... Interestingly, the numbers of comments follow a power-law distribution (Figure 4(d)), which is a common phenomenon in various fields (e.g. personal incomes and frequencies of words' occurrence) (Champernowne, 1953). The distributions of app sizes (Figure 4(a)) and continual annual updates (Figure 4(b)) also exhibit similar characteristics, although they do not statistically fit a power law. ...
Article
Full-text available
Purpose: In the architecture, engineering and construction (AEC) industry, technology developers have difficulties fully understanding user needs due to the high domain knowledge threshold and the lack of effective and efficient methods to minimise information asymmetry between technology developers and AEC users. Design/Methodology/Approach: A synthetic approach combining domain knowledge and text-mining techniques is proposed to help capture user needs, which is demonstrated using BIM apps as a case. The synthetic approach includes: (i) the collection and cleansing of BIM apps’ attribute data and users’ comments; (ii) the incorporation of domain knowledge into the collected comments; (iii) a sentiment analysis to distinguish positive and negative comments; (iv) an exploration of the relationships between user sentiments and BIM apps’ attributes to unveil user preferences; and (v) the establishment of a topic model to identify problems frequently raised by users. Findings: The results show that those BIM app categories with high user interest but low sentiments or supplies, such as ‘reality capture’, ‘interoperability’ and ‘structural simulation and analysis’, deserve greater efforts and attention from developers. BIM apps with continual updates and of small size are preferred by users. Problems related to ‘support for new Revit’, ‘import & export’ and ‘external linkage’ are those most frequently complained about by users. Originality/Value: The main contributions of this work include: (i) the innovative application of text mining techniques to identify user needs to drive BIM apps development; and (ii) the development of a synthetic approach to orchestrating domain knowledge, text-mining techniques (i.e. sentiment analysis and topic modelling) and statistical methods in order to help extract user needs for promoting the success of emerging technologies in the AEC industry.
... Examples include firm size[4,25,36], city size[12,13,18,23,24], frequency of words[19,46], income and wealth[9,22,29,35,38,45], consumption[39,40], carbon dioxide emissions[3], and natural gas and oil production[5], among others. See Gabaix[14] for a review. ...
Article
Full-text available
Power-law distributions explain a variety of natural and man-made processes spanning various disciplines including economics and finance. This paper demonstrates that the distribution of agricultural land size in the United States is best described by a power-law distribution. Maximum likelihood estimation is carried out using county-level data of over 3000 observations gathered at five-year intervals by the USDA Census of Agriculture. Our analysis indicates that U.S. agricultural land size is heavy-tailed, that variance estimates generally do not converge, and that the top 5% of agricultural counties account for about 25% of agricultural land between 1997 and 2012. The goodness of fit of power-law distribution is evaluated using likelihood ratio tests and regression-based diagnostics. The power-law distribution of farm size has important implications for the design of more efficient regional and national agricultural policies as counties close to the mean account for little of the cumulative distribution of total agricultural land.
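A minimal sketch of Pareto maximum-likelihood estimation of the kind used in studies like the one above (synthetic data with an assumed known threshold x_min, not the Census of Agriculture records):

```python
# Minimal sketch: MLE of the Pareto tail index from a synthetic sample.
import numpy as np

rng = np.random.default_rng(2)
alpha_true, x_min, n = 1.8, 1.0, 5_000
x = x_min * (1.0 - rng.uniform(size=n)) ** (-1.0 / alpha_true)  # inverse-CDF Pareto draws

alpha_hat = n / np.sum(np.log(x / x_min))   # MLE for P(X > x) = (x_min / x)**alpha
se = alpha_hat / np.sqrt(n)                 # asymptotic standard error
print(f"alpha ≈ {alpha_hat:.3f} ± {se:.3f} (true {alpha_true})")
```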
... Such a marked transformation of the social structure may have resulted from a prolonged drought event (Li et al. 2018), which provided unfavorable agricultural conditions. [Fig. 7: Correlation of grave volume (a) and quantity of grave goods (b) with grave area at the Liangzhu settlement; the shaded envelope denotes the 95% confidence interval and the solid line highlights the nonlinear relationship between these variables.] Stochastic processes have been increasingly used to study the generative mechanism of the size distribution of individual or household wealth (Champernowne 1953;Simon 1955;Angle 1986;Reed and Jorgensen 2004). An empirical probability density function may have a known relationship with a stochastic process (Reed 2003). ...
Article
Full-text available
Although there is little question that societal scale and mode of production from foraging to farming correlate with increases in economic inequality, there is no consensus over the relative importance of those factors or the role of institutions in the variance of inequality across time and space. To better understand the dynamics of economic inequality, it is necessary to expand our analytical horizon beyond the present into the deeper past. However, an analytical protocol especially oriented towards the systematic study of economic inequality with archaeological data is lacking. Here we propose the utility of grave size as a reliable proxy for estimating prehistoric social inequality and provide a methodological framework for analyzing this type of data. Our case studies using grave-size data from two Neolithic settlements in North and East China suggest that the asymmetric double Pareto distribution can be used as an alternative model to fit to the size distribution of grave wealth usually skewed and long-tailed. Based on the analytical connection between the probability density function and the Lorenz curve, a parsimonious algebraic expression of the Gini coefficient was derived. This analytical protocol also can serve as a convenient tool for quantifying economic inequality in prehistoric societies using other types of archaeological data such as land and house areas.
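Because the protocol above passes from a fitted density to the Gini coefficient through the Lorenz curve, a nonparametric counterpart may be a useful reference (synthetic data; the lognormal parameters are assumptions): the sample Gini computed directly from the empirical Lorenz ordinates.

```python
# Minimal sketch: sample Gini coefficient from the empirical Lorenz curve.
import numpy as np

def gini(values):
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    lorenz = np.cumsum(v) / v.sum()                   # Lorenz ordinates at ranks i/n
    return 1.0 - 2.0 * lorenz.sum() / n + 1.0 / n     # discrete Gini formula

rng = np.random.default_rng(3)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)   # stand-in for grave "wealth"
print(round(gini(sample), 3))   # ≈ 0.52 for a lognormal with sigma = 1
```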
... When we apply this process to many individuals, it creates a skewed distribution of income. For early models, see Champernowne (1953), Simon (1955), and Rutherford (1955). For more recent work, see Gabaix et al. (2016), Nirei and Aoki (2016), and Toda (2012). ...
Preprint
Full-text available
What makes the rich different? Are they more productive, as mainstream economists claim? I offer another explanation. What makes the rich different, I propose, is hierarchical power. The rich command hierarchies. The poor do not. It is this greater control over subordinates, I hypothesize, that explains the income and class of the very rich. I test this idea using evidence from US CEOs. I find that the relative income of CEOs increases with their hierarchical power, as does the capitalist portion of their income. This suggests that among CEOs, both income size and income class relate to hierarchical power. I then use a numerical model to test if the CEO evidence extends to the US general public. The model suggests that this is plausible. Using this model, I infer the relation between income size, income class, and hierarchical power among the US public. The results suggest that behind the income and class of the very rich lies immense hierarchical power.
... The following model adds two features to the models of Champernowne (1953) and Córdoba (2008). First, a_i and c_i have full flexibility. ...
Preprint
Full-text available
The Pareto distribution is known for its wide range of applications, including the distribution of firm sizes. Exhaustive databases on firm sizes showed deviations from the Pareto firm size distribution for the smallest and largest firm sizes. Therefore, stochastic models of firm dynamics reproducing a Pareto firm size distribution could be further generalized in order to also reproduce these deviations. Based on Córdoba (2008), we build a model of firm dynamics which can generate a wide variety of steady-state distributions, and we make clear the relationship between the growth dynamics of the firms and the resulting shape of the firm size distribution. In the light of our model, we analyse the links between the observed firm dynamics and the shape of the firm size distribution (FSD) in Belgium between 2006 and 2013. JEL Classification: L11
... Garvey (1952) lists social and economic factors, time, the degree of urbanization, geographic location, demographic factors, the consequences of the state's social and public policy, as well as periodic changes in the economic system. Champernowne (1953) included in his model, among other things, age, occupation and minimum income. ...
Article
Full-text available
The aim of this research is to identify the factors affecting the working time and wages of employees of both sexes (women and men) in Poland. The wage and labor-activity models include explanatory variables representing workers' characteristics, the structure of their families, and attributes of the workplace. The research was carried out on the basis of BAEL (Polish Labour Force Survey) data for 1Q2009 concerning persons who performed work in the month preceding the survey. Based on the estimated models, determinants characteristic of women and of men were identified.
... Among the different phenomena in economics that follow a Pareto distribution, such as wealth (Pareto, 1896;Champernowne, 1953;Benhabib et al., 2011), firm size (Gibrat, 1931;Ijiri and Simon, 1977;Axtell, 2001;Cabral and Mata, 2003;Luttmer, 2007), and city size (Zipf, 1949;Gabaix, 1999), the last one has been at the center of a lively debate on whether it is better approximated by a Pareto or by a lognormal distribution (Eeckhout, 2004;Levy, 2009;Eeckhout, 2009;Malevergne et al., 2009;Rozenfeld et al., 2011;Berry and Okulicz-Kozaryn, 2012;Hsu, 2012;Ioannides and Skouras, 2013;González-Val et al., 2015;Fazio and Modica, 2015). ...
Chapter
The exact shape of the distribution of city size is subject to considerable scholarly debate, as competing theoretical models yield different implications. The alternative distributions being tested are typically the Pareto and the log-normal, whose finite sample upper tail behavior is very difficult to tell apart. Using data at different levels of aggregation (census blocks and cities) we show that the tail behavior of the distribution changes upon aggregation, and the final result depends crucially on the shape of the distribution of the number of elementary units associated with each aggregate element.
... Thus, in our case, due to the presence of extreme outliers, the PITSE with 78% ARE, which is more robust than the MLE, is applied for estimating the Pareto tail index, and the results are given in Table 7. According to Champernowne [12] and Steindl [47], the Pareto tail index is a useful measure of income inequality, where a smaller value of α indicates greater inequality in the income distribution. Thus, from Table 7, due to the smallest value of α, it is clear that income inequality is slightly greater among the rich households in the year 2014 as compared to the other years. ...
Article
The presence of extreme outliers in the upper tail data of income distribution affects the Pareto tail modeling. A simulation study is carried out to compare the performance of three types of boxplot in the detection of extreme outliers for Pareto data, including the standard boxplot, adjusted boxplot and generalized boxplot. It is found that the generalized boxplot is the best method for determining extreme outliers for Pareto distributed data. For the application, the generalized boxplot is utilized for determining the extreme outliers in the upper tail of the Malaysian income distribution. In addition, for this data set, the confidence interval method is applied for examining the presence of dragon-kings, extreme outliers which are beyond the Pareto or power-law distribution.
Thesis
Since the last decade an increasing body of theoretical literature has explored the endogenous determination of inequality and its role in affecting aggregate developments. The papers presented in this thesis try to make a contribution to the policy issue of improving equity conditions when imperfections in the credit market limit the chances of social mobility of the poor. Following a brief introduction, the first chapter investigates whether equity progress can be achieved by a direct action on the prime origin of inequality, as incomplete credit markets are commonly understood to be. Within a standard framework of banking and customer relationship, the chapter puts forward a novel factor affecting the equilibrium cost of credit, namely the incentive of the lender to undertake a costly screening technology in order to improve his private information about his own customers' types. An interesting finding of the chapter is a positive relationship between the ex post market power of the informed lender and the size of his ex ante investment in the screening technology. A pro-competitive regulation of imperfect credit markets may prove counterproductive for lowering the cost of loans, since it risks discouraging investment in the costly acquisition of information on the part of the lenders, thus making even more severe the adverse selection problem constraining their supply of funds. As its main policy implication, the paper finds a limited scope for public action on capital markets to countervail the barriers to broad access to credit coming from imperfect information. The second chapter deals with the usual tool for equity, by theoretically exploring conditions for demand for redistribution to be politically sustainable in the long run. Differently from recent literature on political economy, the location of the median voter and/or his preferred policy is allowed to shift endogenously over time, possibly reflecting the stance of redistribution in the previous period. As a result, a large variety of political equilibria is proved to occur in steady state; they depend on the strength by which the economic structure by itself would widen or restrict inequality over time and the extent to which it can be counteracted by feasible redistribution. Among the main findings, the dynamic feedback between pure economic factors and the political input driving social mobility may hinder the path to the steady state equilibrium, endogenously determining fluctuations in both redistribution and inequality. The third chapter empirically assesses the impact of social security on aggregate private savings, based on the Italian experience in the last fifty years. The variety of recent reforms in the Italian pension system proves to exert a significant effect on consumption spending, along with domestic demographic changes.
Article
Full-text available
Although the determinants of income are complex, the results are surprisingly uniform. To a first approximation, top incomes follow a power-law distribution, and the redistribution of income corresponds to a change in the power-law exponent. Given the messiness of the struggle for resources, why is the outcome so simple? This paper explores the idea that the (re)distribution of top incomes is uniform because it is shaped by a ubiquitous feature of social life, namely hierarchy. Using a model first developed by Herbert Simon and Harold Lydall, I show that hierarchy can explain the power-law distribution of top incomes, including how income gets redistributed as the rich get richer.
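The hierarchy mechanism can be illustrated with a short calculation. In the sketch below, which uses illustrative values rather than anything from the paper, each manager supervises s subordinates and pay rises by a factor r per level; counting members level by level reproduces the textbook Lydall-style result that the top-income tail exponent is approximately ln(s)/ln(r).

```python
# Sketch: incomes generated by a pure hierarchy with span of control s and
# inter-level pay ratio r (illustrative values, not taken from the paper).
import numpy as np

s, r, levels = 4, 1.4, 12        # span of control, pay ratio, hierarchy depth
base_pay = 30000.0

incomes = np.array([base_pay * r ** k for k in range(levels)])            # pay by rank
counts = np.array([float(s ** (levels - 1 - k)) for k in range(levels)])  # head-count

# Counter-cumulative distribution P(income >= x) over all members of the hierarchy
ccdf = counts[::-1].cumsum()[::-1] / counts.sum()

# Tail exponent from a log-log regression, compared with the ln(s)/ln(r) prediction
slope, _ = np.polyfit(np.log(incomes), np.log(ccdf), 1)
print("estimated tail exponent :", round(-slope, 2))
print("hierarchy prediction    :", round(np.log(s) / np.log(r), 2))
```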
Article
Full-text available
We show that a Boltzmann-like income distribution emerges spontaneously in a long-run Arrow–Debreu economy, the standard description of a well-functioning market economy. The emergence of such an income distribution can be regarded as the result of maximizing the entropy of the long-run Arrow–Debreu economy, which measures the extent of freedom of choice over permissible collective decisions available to members of society. By analyzing household income data for the United Kingdom from 2000 to 2015, we observe that the income structure of a market-economy country consists of three parts: a super-low income class (i.e., unemployed households), a low- and middle-income class, and a top income class. The empirical analyses show that the low- and middle-income class (about 90-95% of the population) closely obeys the Boltzmann-like income distribution. By contrast, the top income class and the super-low income class violate the assumptions of the Arrow–Debreu economy and therefore do not conform to the Boltzmann-like distribution.
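The exponential (Boltzmann-like) claim for the low- and middle-income class is easy to probe in outline. The sketch below works on synthetic data rather than the UK household data used in the paper, with assumed percentile cut-offs separating the three classes.

```python
# Sketch: checking the Boltzmann-like (exponential) form of the income body
# on synthetic data (the paper analyses UK household income, 2000-2015).
import numpy as np

rng = np.random.default_rng(1)
body = rng.exponential(scale=25000.0, size=47500)       # low/middle incomes
top = 80000.0 * (1 + rng.pareto(a=2.0, size=2500))      # small Pareto-like top
income = np.concatenate([body, top])

# Assumed cut-offs isolating the "low- and middle-income class"
lo, hi = np.percentile(income, [5, 95])
mid = income[(income >= lo) & (income <= hi)]

# For a left-truncated exponential, the mean excess over the cut-off equals the
# scale ("temperature"); the upper truncation is ignored in this rough check.
excess = mid - lo
print("fitted scale (mean excess)    :", round(excess.mean()))
# For an exponential body, the standard deviation should be of a similar size.
print("standard deviation of excess  :", round(excess.std()))
```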
Article
We define generalized Pareto curves as the curve of inverted Pareto coefficients b(p), where b(p) is the ratio between average income above rank p and the p-th quantile Q(p) (i.e., b(p) = E[X | X > Q(p)] / Q(p)). We use them to characterize income distributions. We develop a method to flexibly recover a continuous distribution from tabulated income data of the kind generally made available by tax authorities, which produces smooth and realistic shapes of generalized Pareto curves. Using detailed tabulations from quasi-exhaustive tax data, we show the precision of our method. It gives better results than the most commonly used interpolation techniques for the top half of the distribution.
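Computing the inverted Pareto coefficient from microdata is straightforward; the paper's harder problem of recovering it from tabulations is not attempted in the minimal sketch below, which uses synthetic lognormal incomes.

```python
# Sketch: empirical generalized Pareto curve b(p) = E[X | X > Q(p)] / Q(p)
# computed from synthetic microdata (the paper works from tax tabulations).
import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.lognormal(mean=10.0, sigma=0.8, size=100_000))

def b(p, x_sorted):
    """Inverted Pareto coefficient at fractile p."""
    q = np.quantile(x_sorted, p)
    return x_sorted[x_sorted > q].mean() / q

for p in (0.5, 0.9, 0.99, 0.999):
    print(f"b({p}) = {b(p, x):.3f}")
# A strict Pareto tail with exponent alpha gives a flat curve at alpha/(alpha-1);
# a lognormal gives a curve that declines towards 1 as p approaches 1.
```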
Thesis
This thesis covers several topics on the distribution of income and wealth. In the first chapter, we develop a new methodology to exploit tabulations of income and wealth such as those published by tax authorities. In it, we define generalized Pareto curves as the curve of inverted Pareto coefficients b(p), where b(p) is the ratio between average income or wealth above rank p and the p-th quantile Q(p) (i.e., b(p) = E[X | X > Q(p)] / Q(p)). We use them to characterize entire distributions, including places like the top where power laws are a good description, and places further down where they are not. We develop a method to flexibly recover the entire distribution based on tabulated income or wealth data, which produces smooth and realistic shapes of generalized Pareto curves. In the second chapter, we present a new approach to combine survey data with tax tabulations to correct for the underrepresentation of the rich at the top. It endogenously determines a "merging point" between the datasets before modifying weights along the entire distribution and adding new observations beyond the survey's original support. We provide simulations of the method and applications to real data. The former demonstrate that our method improves the accuracy and precision of distributional estimates, even under extreme assumptions, and in comparison to other survey correction methods using external data. The empirical applications show that not only can income inequality levels change, but also trends. In the third chapter, we estimate the distribution of national income in thirty-eight European countries between 1980 and 2017 by combining surveys, tax data and national accounts. We develop a unified methodology combining machine learning, nonlinear survey calibration and extreme value theory in order to produce estimates of pre-tax and post-tax income inequality, comparable across countries and consistent with macroeconomic growth rates. We find that inequality has increased in a majority of European countries, especially between 1980 and 2000. The European top 1% grew more than two times faster than the bottom 50% and captured 18% of regional income growth. In the fourth chapter, I decompose the dynamics of the wealth distribution using a simple dynamic stochastic model that separates the effects of consumption, labor income, rates of return, growth, demographics and inheritance. Based on two results of stochastic calculus, I show that this model is nonparametrically identified and can be estimated using only repeated cross-sections of the data. I estimate it using distributional national accounts for the United States since 1962. I find that, of the 15pp. increase in the top 1% wealth share observed since 1980, about 7pp. can be attributed to rising labor income inequality, 6pp. to rising returns on wealth (mostly in the form of capital gains), and 2pp. to lower growth. Under current parameters, the top 1% wealth share would reach its steady-state value of roughly 45% by the 2040s, a level similar to that of the beginning of the 20th century. I then use the model to analyze the effect of progressive wealth taxation at the top of the distribution.
Preprint
Full-text available
The Pareto distribution and the exponential distribution are related through the generalized Pareto distribution, which has been proposed to describe the income structure of the total population. The mechanism driving the Pareto distribution is known as the Matthew effect of income accumulation. Today, the Pareto distribution is observed universally in the richest class (1%-3% of the population); however, this distribution covered a far larger share of the population in historical settings such as Renaissance Europe, medieval Hungarian society, and ancient Egypt. By contrast, the mechanism driving the exponential distribution is the equal opportunity of market competition, which differs radically from the Matthew effect. Here, we empirically find that, over the last 40 years, the income structures of different market-economy countries uniformly exhibit a two-class pattern, in which a growing share of the population evolves toward an exponential distribution while the Pareto distribution is squeezed into a fairly small share. In particular, we empirically show how the income structure of China evolved toward an exponential distribution after the market-oriented economic reform. The finding that a growing share of the population is evolving toward an exponential income distribution may reveal a potential trend of human civilization towards equal opportunity.
Preprint
Full-text available
The Pareto distribution and the exponential distribution are related through the generalized Pareto distribution, which has been proposed to describe the income structure of the total population. The mechanism driving the Pareto distribution is known as the Matthew effect of income accumulation. Today, the Pareto distribution is observed universally in the richest class (1%-3% of the population); however, this distribution covered a far larger share of the population in historical settings such as Renaissance Europe, medieval Hungarian society, and ancient Egypt. By contrast, the mechanism driving the exponential income distribution is the equal opportunity of market competition, which differs radically from the Matthew effect. Here, we empirically find that, over the last 40 years, the income structures of different market-economy countries uniformly exhibit a two-class pattern, in which the great majority of the population obeys an exponential distribution and only the remaining (richest) part follows a Pareto distribution. In particular, we empirically show how the income structure of China evolved toward an exponential distribution after the market-oriented economic reform. The finding that a growing share of the population is evolving toward an exponential income distribution may reveal a potential trend of human civilization towards equal opportunity.
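A rough version of the two-class decomposition described in these two abstracts can be sketched as follows: fit an exponential law to the bulk and a Pareto law to the top few percent, with the crossover point assumed (here the 97th percentile) rather than estimated from the data as in the papers.

```python
# Sketch: two-class fit (exponential bulk plus Pareto top) on synthetic incomes,
# with the bulk/top boundary assumed at the 97th percentile.
import numpy as np

rng = np.random.default_rng(3)
bulk = rng.exponential(scale=20000.0, size=97000)
rich = 150000.0 * (1 + rng.pareto(a=2.2, size=3000))
income = np.concatenate([bulk, rich])

cut = np.percentile(income, 97)                 # assumed crossover point
low, top = income[income <= cut], income[income > cut]

# Exponential "temperature" of the bulk (MLE is simply the sample mean)
print("exponential scale of the bulk :", round(low.mean()))

# Pareto exponent of the top class, MLE (Hill) above the cut-off
alpha = len(top) / np.log(top / cut).sum()
print("Pareto exponent of the top    :", round(alpha, 2))
```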
Article
Full-text available
In this paper, the nonlinear distribution of employment across Spanish municipalities is analyzed. We also explore new properties of the family of generalized power law (GPL) distributions, examine its hierarchical structure, and then test its adequacy for modeling employment data. A new subfamily of heavy-tailed GPL distributions that is right-tail equivalent to a Pareto (power-law) model is derived. Our findings show, on the one hand, that the distribution of employment across Spanish municipalities exhibits power-law behavior in the upper tail and, on the other hand, that GPL models are adequate for modeling employment data over the whole range of the distribution.
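A standard first check for power-law behavior in the upper tail is the Hill (maximum-likelihood) estimator of the tail exponent above a chosen cut-off. The sketch below applies it to synthetic municipality-level employment counts; the choice of cut-off and the full GPL fit of the paper are beyond this illustration.

```python
# Sketch: Hill estimator of the upper-tail power-law exponent for synthetic
# "employment per municipality" data (the paper fits full GPL models instead).
import numpy as np

rng = np.random.default_rng(4)
employment = np.sort((1 + rng.pareto(a=1.3, size=8000)) * 50)[::-1]  # descending

def hill(x_desc, k):
    """Hill estimator of the tail exponent using the k largest observations."""
    return k / np.log(x_desc[:k] / x_desc[k]).sum()

for k in (100, 250, 500, 1000):
    print(f"k = {k:4d}   alpha_hat = {hill(employment, k):.2f}")
```

Stability of the estimate across a range of k values is the usual informal sign that a power-law description of the tail is reasonable.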
Preprint
Full-text available
The novel coronavirus (COVID-19) was first identified in China in December 2019. Within a short period of time, the infectious disease had spread far and wide. This study focuses on the distribution of COVID-19 confirmed cases in China, the original epicenter of the outbreak. We show that the upper tail of COVID-19 cases across Chinese cities is well described by a power-law distribution with exponent less than one, and that a random proportionate growth model predicated on Gibrat's law is a plausible explanation for the emergence of the observed power-law behavior. This finding is significant because it implies that the distribution of COVID-19 cases in China is heavy-tailed and dispersed, that a few cities account for a disproportionate share of cases, and that the distribution has no finite mean or variance. The power-law behavior has implications for effective planning and policy design as well as efficient use of government resources.
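The proportionate-growth explanation can be illustrated with a toy simulation: case counts grow by i.i.d. multiplicative shocks (Gibrat's law), and a reflecting lower bound, an assumption of this sketch rather than of the paper, is one standard way such processes settle into a power-law rather than a lognormal limit. Parameters are illustrative, not estimated from COVID-19 data.

```python
# Toy sketch: Gibrat-type multiplicative growth with a reflecting lower bound,
# one standard route from proportionate growth to a power-law tail.
# Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(5)
n_cities, n_steps, floor = 3000, 2000, 1.0

cases = np.ones(n_cities)
for _ in range(n_steps):
    shocks = rng.lognormal(mean=-0.01, sigma=0.15, size=n_cities)
    cases = np.maximum(cases * shocks, floor)        # reflecting lower bound

# Hill estimate of the tail exponent over the top 5% of "cities"
x = np.sort(cases)[::-1]
k = n_cities // 20
alpha = k / np.log(x[:k] / x[k]).sum()
print("simulated tail exponent :", round(alpha, 2))
# For log-drift mu < 0 and volatility sigma, the stationary exponent is roughly
# 2*|mu|/sigma**2, which is below one for these illustrative parameters.
print("rough theoretical value :", round(2 * 0.01 / 0.15 ** 2, 2))
```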
Article
Wealth and income inequality have attracted intense interest in recent years because of their great practical and theoretical significance. Inspecting individual behavior at a microscopic level can help clarify possible causes of inequality and appropriate policies for reducing inequality and poverty. This paper presents an inhomogeneous agent-based model to explore the emergence of income inequality, in which individuals with varied qualities work, consume and invest. Despite the small differences in individual attributes, large income and wealth inequality and class differentiation naturally occur through a mechanism of capital (investment) income, which bears some analogy to endogenous growth. The resulting income distribution is well described by an exponential law at smaller values and a power law at large values. Education, modeled as increasing average productivity and narrowing its spread, improves equality and lowers the Gini coefficient. Raising the salary level slows investment (industrialization) and reduces short-term income, but brings the long-term benefits of higher efficiency and greater equality. These results support the model's potential as a basic and open framework for investigating a wide range of questions about income inequality.
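The sketch below is not the authors' model but a deliberately stripped-down stand-in under assumed parameters: agents with roughly similar wages save, earn noisy returns on accumulated wealth, and the resulting capital income is enough to widen an otherwise narrow income distribution.

```python
# Deliberately minimal caricature (not the authors' model): agents save out of
# roughly similar wages and earn noisy returns on accumulated wealth, which is
# enough to produce a heavy upper tail driven by capital income.
import numpy as np

rng = np.random.default_rng(6)
n_agents, n_periods = 20000, 600
wage = np.maximum(rng.normal(1.0, 0.1, size=n_agents), 0.5)   # small quality spread
wealth = np.zeros(n_agents)
save_rate, cons_rate = 0.2, 0.04          # save from wages, consume out of wealth

for _ in range(n_periods):
    r = rng.normal(0.02, 0.2, size=n_agents)                  # idiosyncratic returns
    wealth = np.maximum((1 + r - cons_rate) * wealth + save_rate * wage, 0.0)

income = wage + 0.02 * wealth             # labor income plus expected capital income

def gini(values):
    v = np.sort(values)
    m = len(v)
    return (2 * np.arange(1, m + 1) - m - 1).dot(v) / (m * v.sum())

print("Gini of labor income :", round(gini(wage), 3))
print("Gini of total income :", round(gini(income), 3))
print("top 1% income share  :",
      round(np.sort(income)[-n_agents // 100:].sum() / income.sum(), 3))
```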
Article
In this work we explain the size distribution of business firms using a stochastic growth process that reproduces the main stylized facts documented in empirical studies. The steady-state solution of this process is a three-parameter Dagum distribution, which can combine strong unimodality with a Paretian upper tail. Thanks to its flexibility, the proposed distribution is able to fit the whole range of firm-size data, in contrast with traditional models that typically focus on large businesses only. An empirical application to Italian firms illustrates the practical merits of the Dagum distribution. Our findings go beyond goodness of fit per se and shed light on possible connections between the stochastic elements that influence firm growth and the meaning of the parameters that appear in the steady-state distribution of firm size. These results are ultimately relevant for studies of industrial organization and for policy interventions aimed at promoting sustainable growth and monitoring industrial concentration.
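The three-parameter Dagum distribution has CDF F(x) = (1 + (x/b)^(-a))^(-p), and a maximum-likelihood fit is routine once the density is written down. The sketch below works on simulated firm sizes with made-up parameter values rather than the Italian data used in the paper, and recovers those parameters approximately.

```python
# Sketch: maximum-likelihood fit of the three-parameter Dagum distribution,
# F(x) = (1 + (x/b)**(-a))**(-p), to simulated firm sizes (made-up parameters).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
a_true, b_true, p_true = 2.8, 50.0, 0.6

# Simulate by inverting the CDF: x = b * (u**(-1/p) - 1)**(-1/a)
u = rng.uniform(size=20000)
sizes = b_true * (u ** (-1.0 / p_true) - 1.0) ** (-1.0 / a_true)

def neg_loglik(theta, x):
    a, b, p = np.exp(theta)                         # optimize over log-parameters
    z = (x / b) ** a
    # log density: log(a*p) + (a*p - 1)*log(x) - a*p*log(b) - (p + 1)*log(1 + z)
    ll = (np.log(a * p) + (a * p - 1) * np.log(x)
          - a * p * np.log(b) - (p + 1) * np.log1p(z))
    return -ll.sum()

start = np.log([1.0, np.median(sizes), 1.0])
res = minimize(neg_loglik, start, args=(sizes,), method="Nelder-Mead",
               options={"maxiter": 5000, "fatol": 1e-8, "xatol": 1e-8})
print("estimated (a, b, p):", np.round(np.exp(res.x), 2))
```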
Chapter
This chapter reviews the link between central place theory and power laws for cities. A theory of the city size distribution is proposed via a central place hierarchy à la Christaller (1933), either as an equilibrium result or as an optimal allocation. Under a central place hierarchy, it is shown that a power law for cities emerges if the underlying heterogeneity in economies of scale across goods is regularly varying. Furthermore, we show that an optimal allocation of cities conforms to a central place hierarchy if the underlying heterogeneity in economies of scale across goods is a power function.