Journal of Applied Physics

We describe a physically based derivation of the Weibull distribution with respect to fragmentation processes. In this approach we consider the result of a single‐event fragmentation leading to a branching tree of cracks that show geometric scale invariance (fractal behavior). With this approach, because the Rosin–Rammler type distribution is just the integral form of the Weibull distribution, it, too, has a physical basis. In further consideration of mass distributions developed by fragmentation processes, we show that one particular mass distribution closely resembles the empirical lognormal distribution. This result suggests that the successful use of the lognormal distribution to describe fragmentation distributions may have been simply fortuitous. © 1995 American Institute of Physics.
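The abstract's claim that the Rosin–Rammler distribution is the integral form of the Weibull distribution can be checked numerically. The sketch below (parameter values are arbitrary) integrates a Weibull mass density and compares the resulting cumulative passing fraction with one minus the Rosin–Rammler retained (oversize) fraction:

```python
import numpy as np

def weibull_pdf(m, lam, k):
    """Weibull density with shape k and scale lam."""
    return (k / lam) * (m / lam) ** (k - 1) * np.exp(-((m / lam) ** k))

def rosin_rammler_retained(m, lam, k):
    """Rosin-Rammler retained (oversize) fraction R(m) = exp[-(m/lam)^k]."""
    return np.exp(-((m / lam) ** k))

lam, k = 1.0, 1.5                        # arbitrary scale and shape
m = np.linspace(1e-9, 5.0, 200_000)
dm = m[1] - m[0]

# Cumulative passing fraction obtained by integrating the Weibull density
passing_from_pdf = np.cumsum(weibull_pdf(m, lam, k)) * dm
# Passing fraction implied by the Rosin-Rammler retained fraction
passing_from_rr = 1.0 - rosin_rammler_retained(m, lam, k)

print(np.max(np.abs(passing_from_pdf - passing_from_rr)))  # small
```

The two curves agree to within the discretization error of the Riemann sum, illustrating that the two forms carry the same information.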


... For B=2, we have the Rayleigh distribution. For B=2.5 and B=3.6, the Weibull distribution approximates the log-normal distribution and the normal distribution, respectively. ...
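The B=2 case is exact rather than approximate: a Weibull density with shape 2 and scale λ = σ√2 is identically the Rayleigh density with parameter σ. A quick check (values arbitrary):

```python
import numpy as np

x = np.linspace(0.01, 5.0, 500)
sigma = 1.0
lam = sigma * np.sqrt(2.0)              # Weibull scale matching Rayleigh sigma

# Weibull pdf with shape B = 2
weibull = (2.0 / lam) * (x / lam) * np.exp(-((x / lam) ** 2))
# Rayleigh pdf with parameter sigma
rayleigh = (x / sigma**2) * np.exp(-(x**2) / (2.0 * sigma**2))

print(np.allclose(weibull, rayleigh))   # True: the two densities coincide
```

By contrast, the B=2.5 and B=3.6 cases only approximate the log-normal and normal shapes; those comparisons would show small but nonzero residuals.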

... In the same reference, the hazard function is given as Eq. (22). Rewritten in the formalism of Eq. (17), it becomes the κ-Weibull hazard function shown in Figure 13, where the parameter κ is fixed at κ=0.5. As in the previous figures, λ=0.25. ...

... Brown and Wohletz [Brown and Wohletz, 1995] have demonstrated that "the Weibull distribution arises naturally as a consequence of the fragmentation process being fractal" [Jonasz and Fournier, 2007]. "Fragmentation process is a cleavage of bonds between the component particles that can also be proven to lead to the Weibull distribution" [Tenchov and Yanev, 1986], [Jonasz and Fournier, 2007]. ...

Here we will consider a function of κ-statistics, the κ-Weibull distribution, and compare it to the well-known Weibull distribution. The κ-Weibull will be also compared to the 3-parameter extended Weibull function, obtained according to the Marshall-Olkin extended distributions. The log-logistic distribution will be considered for comparison too, such as the exponentiated Weibull, the Burr and the q-Weibull distributions. The most important observation, coming from the proposed calculations, is that the κ-Weibull hazard function is strongly depending on the values of parameter κ, a parameter which is deeply influencing the behaviour of the tail of the probability distribution. As a consequence, the κ-Weibull function turns out to be quite relevant for generalizations of the Weibull approach to modeling failure times. Discussions about the Maximum Likelihood approach for Weibull, κ-Weibull and Burr distributions will be also given.
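A minimal numerical sketch of the κ-Weibull hazard, assuming the common κ-statistics parameterization with survival function S(t) = exp_κ(−βt^α), where exp_κ(u) = (√(1+κ²u²) + κu)^(1/κ); the function names and parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

def kappa_exp(u, kappa):
    """kappa-deformed exponential: (sqrt(1 + k^2 u^2) + k u)^(1/k)."""
    return (np.sqrt(1.0 + kappa**2 * u**2) + kappa * u) ** (1.0 / kappa)

def kweibull_survival(t, alpha, beta, kappa):
    """Assumed kappa-Weibull survival function S(t) = exp_kappa(-beta * t^alpha)."""
    return kappa_exp(-beta * t**alpha, kappa)

def kweibull_hazard(t, alpha, beta, kappa, dt=1e-2):
    """Hazard h(t) = -d/dt ln S(t), estimated by a central finite difference."""
    lo = np.log(kweibull_survival(t - dt, alpha, beta, kappa))
    hi = np.log(kweibull_survival(t + dt, alpha, beta, kappa))
    return -(hi - lo) / (2.0 * dt)

# In the tail the hazard decays like alpha/(kappa * t): a power-law-like tail,
# which is why kappa controls the tail behaviour of the distribution.
alpha, beta, kappa = 2.0, 1.0, 0.5
t = 100.0
print(kweibull_hazard(t, alpha, beta, kappa) * t)   # close to alpha/kappa = 4
```

This makes the observation in the abstract concrete: unlike the ordinary Weibull hazard, which grows without bound for shape > 1, the κ-deformed hazard decays in the tail at a rate set by κ.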

... By the way, this approach is not entirely new. In the literature of particle size analysis, the idea of point processes has already appeared implicitly in the papers of Brown [3], Brown and Wohletz [4], and Bernhardt [5], as explained in the Discussion section. ...

... The Introduction already mentioned that the use of point process ideas in particle size statistics, in particular the use of the intensity function, is not entirely new in the context of particle statistics. Indeed, it appears in hidden form and with another notation in the papers [3,4], known for a physically based derivation of the Weibull and RRSB distributions in the context of fragmentation. ...

... Brown and Wohletz [4] consider fragment or particle masses m (instead of x in the present paper) and use a function n(m) which "is the number distribution in units of ...

This paper re-considers the foundations of particle size statistics. While traditional particle size statistics consider their data as samples of random variables and use methods of classical mathematical statistics, here a particle sample is treated as a point process sample, and a suitable form of statistics is recommended. The whole sequence of ordered particle sizes is considered as a random variable in a suitable sample space. Instead of distribution functions, point process intensity functions are used. The application of point process data analysis is demonstrated for samples of fragments from single-particle crushing of glass balls. Three cases of data handling with point processes are presented: statistics for oversize particles, pooling of independent particle samples and pooling of piecewise particle data. Finally, the problem of goodness-of-fit testing for particle samples is briefly discussed. The point process approach turns out to be an extension of the classical approach, is simpler and more elegant, but retains all valuable traditional ideas. It is particularly strong in the analysis of oversize particles.
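The oversize-statistics idea can be sketched in a few lines; this is an illustrative reconstruction of the concept, not the paper's code. The empirical intensity counts, per sample, how many particles exceed a size x, and the intensities of pooled independent samples simply add:

```python
import numpy as np

def oversize_intensity(samples, x):
    """Empirical intensity at size x: mean number of particles per sample
    whose size exceeds x."""
    counts = [np.sum(np.asarray(s) > x) for s in samples]
    return float(np.mean(counts))

# Two independent samples of particle sizes (arbitrary illustrative values)
sample_a = [0.2, 0.5, 1.1, 2.3, 3.0]
sample_b = [0.4, 0.9, 1.7, 2.8]

lam_a = oversize_intensity([sample_a], 1.0)   # 3 particles > 1.0
lam_b = oversize_intensity([sample_b], 1.0)   # 2 particles > 1.0

# Pooling: the pooled sample's intensity is the sum of the individual ones
lam_pooled = oversize_intensity([sample_a + sample_b], 1.0)
print(lam_a, lam_b, lam_pooled)               # 3.0 2.0 5.0
```

Additivity under pooling is the property that makes the intensity-function view particularly convenient for oversize particles, where classical distribution functions would require renormalization.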

... i.e., a stretched exponential, which in probability theory texts is written W(x) = 1 − exp[−(x/λ)^n], has a very broad application in many fields of human knowledge [4][5][6][7][8][9][10][11], including many aspects of materials science. In particular, it describes well the distribution of particle sizes after a fragmentation process, for example in milling, graining, or crushing operations [12][13][14][15][16][17]. Precisely in this context, the scientists who first employed Eq. (1) were Rosin and Rammler [18]. ...

... Therefore, in our process the distribution of clump sizes is obtained by the random accumulation of disks while the distributions studied in Refs. [12][13][14][15][16][17] are the result of a fragmentation process. The two processes differ from each other not only because the first takes place in 2D space whereas the second in 3D space, but also in the way the distributions are formed, one may say: bottom-up for the first, and top-down for the second. ...

Many manifestations of natural processes give rise to interesting morphologies; it is all too easy to cite the corrugation of the Earth's surface or of planets in general. However, limiting ourselves to 2D cases, the morphology to which crystal growth gives rise is also intriguing. In particular, it is interesting to study some characteristics of the cluster projection in 2D, namely the study of the shapes of the speckles (fractal dimension of their rims) or the distribution of their areas. Recently, for instance, it has been shown that the size cumulative distribution function (cdf) of “voids” in a corrole film on Au(111) is well described by the well known Weibull distribution. The present article focuses on the cdf of cluster areas generated by numerical simulations: the clumps (clusters) are generated by overlapping grains (disks) whose germs (disk centers) are chosen randomly in a 2000×2000 square lattice. The obtained cdf of their areas is excellently fitted to the Weibull function in a given range of surface coverage. The same type of analysis is also performed for a fixed-time clump distribution in the case of Kolmogorov-Johnson-Mehl-Avrami (KJMA) kinetics. Again, a very good agreement with the Weibull function is obtained.
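The clump-generation procedure described in the abstract can be sketched on a reduced lattice (much smaller than the 2000×2000 grid used in the article; the lattice size, germ count, and grain radius below are illustrative). Clumps are connected components of the union of randomly placed disks, and their areas are the quantities one would then fit to a Weibull cdf:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
L, n_germs, r = 300, 260, 6            # reduced lattice, germ count, grain radius

# Mark lattice sites covered by at least one grain (disk)
yy, xx = np.mgrid[0:L, 0:L]
covered = np.zeros((L, L), dtype=bool)
for cy, cx in rng.integers(0, L, size=(n_germs, 2)):
    covered |= (yy - cy) ** 2 + (xx - cx) ** 2 <= r * r

# Clumps = connected components of the covered region; collect their areas
labels, n_clumps = ndimage.label(covered)
areas = ndimage.sum(covered, labels, index=range(1, n_clumps + 1))

print(n_clumps, covered.mean())        # number of clumps and surface coverage
```

The resulting `areas` array could then be fitted with, e.g., `scipy.stats.weibull_min.fit` to recover shape and scale parameters, mirroring the cdf analysis in the article.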

... The problem of finding suitable models that explain these forms is still open. Three notable exceptions are Kolmogorov [23], Brown and Wohletz [24], and Fowler and Scheu [25], who modelled the fragmentation process to different extents. The first introduced the lognormal distribution to particle statistics, and the others developed theories that lead to the RRSB and gamma distributions, respectively. ...

... Of a quite different nature is the model in [24], which also leads to the RRSB distribution. It uses physical ideas, in particular branching trees of cracks. ...

Modern particle size statistics uses many different statistical distributions, but these distributions are empirical approximations for theoretically unknown relationships. This also holds true for the famous RRSB (Rosin-Rammler-Sperling-Bennett) distribution. Based on the compound Poisson process, this paper introduces a simple stochastic model that leads to a general product form of particle mass distributions. The beauty of this product form is that its two factors characterize separately the two main components of samples of particles, namely, individual particle masses and total particle number. The RRSB distribution belongs to the class of distributions following the new model. Its simple product form can be a starting point for developing new particle mass distributions. The model is applied to the statistical analysis of samples of blast-produced fragments measured by hand, which enables a precise investigation of the mass-size relationship. This model-based analysis leads to plausible estimates of the mass and size factors and helps to understand the influence of blasting conditions on fragment-mass distributions.
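The product-form idea can be illustrated with a simulation; this is a hedged sketch, not the paper's model. If fragment counts per sample are Poisson(μ) and fragment masses are i.i.d., the expected number of fragments in any mass bin factorizes as μ (the number factor) times the bin probability (the mass factor):

```python
import numpy as np

rng = np.random.default_rng(1)
mu = 50.0                  # mean number of fragments per sample (illustrative)
n_samples = 2000

# Compound-Poisson samples: Poisson count, exponential i.i.d. masses (scale 1)
counts = rng.poisson(mu, size=n_samples)
masses = [rng.exponential(1.0, size=c) for c in counts]

# Expected number of fragments with mass < 0.5 factorizes: mu * P(m < 0.5)
p_bin = 1.0 - np.exp(-0.5)
expected = mu * p_bin
observed = np.mean([np.sum(m < 0.5) for m in masses])
print(expected, observed)  # observed fluctuates around mu * P(m < 0.5)
```

The factorization is exact in expectation for any choice of mass density, which is what makes the product form a flexible starting point for new particle mass distributions.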

... The lognormal distribution and Weibull distribution are commonly used to model skewed distributions; however, the physical origin of the distribution is not well understood. Brown and Wohletz (1995) derived the Weibull distribution with respect to the fragmentation process, in which a power law was used to describe the breakup of a single particle into smaller particles. The Weibull distribution has been widely used as particle size distribution for coarse grains (Fang et al., 1993;Kondolf and Adhikari, 2000). ...

... Lognormal distribution has been observed in particle growth or coagulation processes (Smoluchowski, 1918;Friedlander and Wang, 1966), in which aggregation process dominates the dynamics. On the other hand, Weibull distribution has been commonly observed in the fragmentation process of large particles (Brown and Wohletz, 1995). In the flocculation process, both the aggregation and fragmentation processes play an important role. ...

Floc size distribution is one of the key parameters to characterize flocculating cohesive sediment. An Eulerian–Lagrangian framework has been implemented to study the flocculation dynamics of cohesive sediments in homogeneous isotropic turbulent flows. Fine cohesive sediment particles are modeled as the dispersed phase by the discrete element method, which tracks the motion of individual particles. An adhesive contact model with rolling friction is applied to simulate the particle–particle interactions. By varying the physicochemical properties (i.e., stickiness and stiffness) of the primary particles, the dependence of the mathematical form of the floc size distribution on sediment properties is investigated. At the equilibrium state, the aggregation and breakup processes reach a dynamic equilibrium, in which construction by aggregation is balanced with destruction by breakup, and construction by breakup is balanced with destruction by aggregation. When the primary particles are less sticky, floc size distribution fits better with the lognormal distribution. When the primary particles are very sticky, both the aggregation of smaller flocs and breakup from larger flocs play an equally important role in the construction of the intermediate-sized flocs, and the equilibrium floc size distribution can be better fitted by the Weibull distribution. When the Weibull distribution develops, a shape parameter around 2.5 has been observed, suggesting a statistically self-similar floc size distribution at the equilibrium state.

... The Weibull distribution used here was first described as a family of curves [16] which has found applicability to describe the distribution of particle sizes following fragmentation or fractionation [17]. Brown and Wohletz provide a mechanistic derivation for the Weibull distribution which follows from repeated fragmentation of a larger structure, with each step resulting in a fractal fragmentation pattern (thus following a power law). ...

... In a scenario where all enhancers are equally active, a particular gene will be most strongly influenced by the closest enhancer (E2 in this figure). A Weibull model, as observed empirically in this analysis, can result from such a "superposition" of power-law distributions [17] should look carefully at whether there are multiple plausible causal genes, such as from paralogs, which exist within the 944 kb distance cutoff recommended here. ...

Background
A genome-wide association study (GWAS) correlates variation in the genotype with variation in the phenotype across a cohort, but the causal gene mediating that impact is often unclear. When the phenotype is protein abundance, a reasonable hypothesis is that the gene encoding that protein is the causal gene. However, as variants impacting protein levels can occur thousands or even millions of base pairs from the gene encoding the protein, it is unclear at what distance this simple hypothesis breaks down.
Results
By making the simple assumption that cis-pQTLs should be distance dependent while trans-pQTLs are distance independent, we arrive at a simple and empirical distance cutoff separating cis- and trans-pQTLs. Analyzing a recent large-scale pQTL study (Pietzner in Science 374:eabj1541, 2021) we arrive at an estimated distance cutoff of 944 kilobasepairs (95% confidence interval: 767–1,161) separating the cis and trans regimes.
Conclusions
We demonstrate that this simple model can be applied to other molecular GWAS traits. Since much of biology is built on molecular traits like protein, transcript and metabolite abundance, we posit that the mathematical models for cis and trans distance distributions derived here will also apply to more complex phenotypes and traits.

... From a formative perspective, the power-law SFD would indicate a single-event fragmentation (for example during impact cratering) that leads to a branching tree of cracks that have a fractal character (Turcotte et al., 1997; Schröder et al., 2021b). The Weibull distribution, in contrast, is thought to result from sequential fragmentation (Brown & Wohletz 1995), and it is largely used in fracture and fragmentation theory (Grady and Kipp, 1987; Brown and Wohletz, 1995; Turcotte, 1997; McSaveney, 2002). In addition, the Weibull distribution (Weibull, 1951) is often used to describe the particle distribution that is derived from grinding experiments (Rosin and Rammler, 1933). ...


... The finding that the whole SFD of boulders on Dimorphos follows a Weibull distribution suggests that past or ongoing processes may have decreased the number of meter-size boulders visible on Dimorphos' surface. Indeed, a Weibull distribution is commonly thought to result from sequential fragmentation 51 , and it is widely used in fracture theory 46,48,[51][52][53] . In addition, such a distribution is often used to describe the particle distribution that is derived from grinding experiments 54 , i.e., a multiple-event fragmentation. ...

Asteroids smaller than 10 km are thought to be rubble piles formed from the reaccumulation of fragments produced in the catastrophic disruption of parent bodies. Ground-based observations reveal that some of these asteroids are today binary systems, in which a smaller secondary orbits a larger primary asteroid. However, how these asteroids became binary systems remains unclear. Here, we report the analysis of boulders on the surface of the stony asteroid (65803) Didymos and its moonlet, Dimorphos, from data collected by the NASA DART mission. The size-frequency distribution of boulders larger than 5 m on Dimorphos and larger than 22.8 m on Didymos confirms that both asteroids are piles of fragments produced in the catastrophic disruption of their progenitors. Dimorphos boulders smaller than 5 m have sizes best fit by a Weibull distribution, which we attribute to a multi-phase fragmentation process occurring either during coalescence or during surface evolution. The density per km² of Dimorphos boulders ≥1 m is 2.3× that obtained for (101955) Bennu and 3.0× that for (162173) Ryugu. These ratios increase once Dimorphos boulders ≥5 m are compared with Bennu (3.5×), Ryugu (3.9×) and (25143) Itokawa (5.1×). This is of interest in the context of asteroid studies because it means that, contrary to the single bodies visited so far, binary systems might be affected by subsequent fragmentation processes that largely increase their block density per km². Direct comparison between the surface distribution and shapes of the boulders on Didymos and Dimorphos suggests that the latter inherited its material from the former. This finding supports the hypothesis that some asteroid binary systems form through the spin-up and mass shedding of a fraction of the primary asteroid.

... where n is the distribution shape parameter, θ represents a thickness scale (typically expressed in cm), and λ represents the characteristic decay length scale of deposit thinning (typically expressed in km). By varying the value of the shape parameter n, the Weibull distribution can reproduce or approximate a wide variety of distributions, ranging from the exponential distribution for n = 1, to the Rayleigh distribution for n = 2, to the normal distribution for n between 3 and 4, and to the log-normal distribution for various values of n (Brown and Wohletz 1995). However, empirical evidence suggests that, for volcanic eruptions, values of n between 0.2 and 2 are appropriate (Bonadonna and Costa 2013). ...
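Assuming the Weibull thinning parameterization described above, T(x) = θ(x/λ)^(n−2) exp[−(x/λ)^n] with x the square root of isopach area, the deposit volume has the closed form V = 2θλ²/n, obtained by integrating T(x)·2x over x ≥ 0. A numerical check of that identity (parameter values illustrative; θ in cm and λ in km as in the text):

```python
import numpy as np

def weibull_thinning(x, theta, lam, n):
    """Assumed Weibull thinning law T(x) = theta*(x/lam)**(n-2)*exp(-(x/lam)**n)."""
    return theta * (x / lam) ** (n - 2) * np.exp(-((x / lam) ** n))

theta, lam, n = 100.0, 10.0, 1.5       # cm, km, dimensionless (illustrative)

# Volume: V = integral of T(x) * 2x dx over x >= 0 (x = sqrt of isopach area)
x = np.linspace(1e-9, 100.0, 400_000)
g = weibull_thinning(x, theta, lam, n) * 2.0 * x
v_numeric = float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(x)))  # trapezoid rule
v_closed = 2.0 * theta * lam**2 / n    # closed-form result

print(v_numeric, v_closed)             # both in cm * km^2, and nearly equal
```

The closed-form volume is what lets a fitted (θ, λ, n) triple be converted directly into an eruption volume without further numerical integration.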

A new method for assessing the volumes of tephra deposits based on only two thickness measurements is presented. It is based on the assumptions of an elliptical shape for isopachs, a statistical characterization of their eccentricity, and an empirical relationship between their deposit thinning length scale and volume. The method can be applied if the pair of thickness measurements is sufficiently distant from the volcanic source, with a minimum distance ratio larger than 2. The method was tested against about 40 published volumes, from both equatorial-belt and mid-latitude volcanoes. The results are statistically consistent with the published results, demonstrating the usefulness of the method. When applied in forward mode, the model allowed us to calculate the volume of some important tephra layers in the Mediterranean tephrostratigraphy, providing, for the first time, an assessment of the size of these eruptions or layers.

... We use the method described by ref. 24 to calculate the surface macroporosity from the SFD of boulders at the impact location 1 and the global boulder SFD on the illuminated side of Dimorphos 19 (Extended Data Fig. 4a). We use a cumulative Weibull (Rosin-Rammler) distribution 24,57,58 to represent the two SFDs: ...

On 26 September 2022, NASA’s Double Asteroid Redirection Test (DART) mission successfully impacted Dimorphos, the natural satellite of the binary near-Earth asteroid (65803) Didymos. Numerical simulations of the impact provide a means to find the surface material properties and structures of the target that are consistent with the observed momentum deflection efficiency, ejecta cone geometry and ejected mass. Our simulation that best matches the observations indicates that Dimorphos is weak, with a cohesive strength of less than a few pascals, like asteroids (162173) Ryugu and (101955) Bennu. We find that the bulk density of Dimorphos ρB is lower than ~2,400 kg m⁻³ and that it has a low volume fraction of boulders (≲40 vol%) on the surface and in the shallow subsurface, which are consistent with data measured by the DART experiment. These findings suggest that Dimorphos is a rubble pile that might have formed through rotational mass shedding and reaccumulation from Didymos. Our simulations indicate that the DART impact caused global deformation and resurfacing of Dimorphos. ESA’s upcoming Hera mission may find a reshaped asteroid rather than a well-defined crater.

... This distribution was introduced in [14] in order to describe the particles size distribution (crushing particles), a matter addressed later in several papers [15,16]. Also it can be found a physically based derivation of the Weibull distribution inside a fragmentation processes context in [17]. ...

It is shown that distribution of PM2.5 concentration recorded by eight air quality monitoring stations during 2021, covering a large part of the Santiago metropolitan region of Chile, can be explained by a q-Weibull function, which has been related in some papers to complex and non-equilibrium statistical systems. It was found that this function has an excellent fit performance on non-filtered data, which cannot be reached by other physical-statistical functions widely used to modelling air pollution distribution in the literature. Finally, relevant interpretations from the fit parameters are shown as well.

... The evolutionary stable strategy (ESS) is a strategy that can withstand evolutionary selection pressure (Brown & Wohletz, 1995). The criterion of evolutionary stability has notable points. ...

A sustainable transport infrastructure is one of the pillars of a sustainable city. However, the literature indicates that urbanization, population growth, changes in population density, and motorization make it difficult for the current road transport system to meet mobility needs for a sustainable city. Traffic crashes and congestion on roads are common as a result of increasing travel times, fuel consumption, and carbon emissions, thereby reducing efficiency and sustainability of mobility systems. Managing these issues involves the interaction of multiple decision-makers, such as vehicles, pedestrians, traffic system operators, and authorities. Accordingly, these are well-suited to being analyzed under the guise of game theory. While classical game theory possesses multiple limitations, it can be argued that evolutionary game theory (EGT) models are more effective for real-world scenarios. This manuscript presents a state-of-the-art review on EGT applied to the road transportation network. The manuscript has divided the application of EGT in advancing the transportation network into multiple categories, i.e., choice-based analysis, traffic management, behavioral interactions, routing operation, and transport safety. This manuscript provides an in-depth analysis and a comparative criticism of the various proposed evolutionary game models. Finally, the manuscript discusses the challenges and provides recommendations for future research on evolutionary game models in transportation networks. These insights aim to facilitate targeted activities based on current research needs.

... The characteristics of the bottom-blowing bubbles were controlled by the Rosin-Rammler method. 27) The bubble diameters lie in the range of 10–20 mm, with a mean value of 15 mm. The PISO (Pressure Implicit with Splitting of Operators) algorithm was used as the pressure-velocity coupling method. ...

In this article, the bottom regiment distribution is designed to investigate the instantaneous kinetic energy of the molten pool with the help of numerical simulation. The results indicate that the time required for the kinetic energy to reach a stable state is affected by the angle between regiments. When the blowing gas flowrate is 720 Nm³·h⁻¹, the shortest equilibrium time is 66 s, at an angle of 15°. Meanwhile, the highest kinetic energy, 2915 J, was obtained when the outer regiments were kept at a pitch circle diameter ratio of 0.5, which reflects preferable internal dynamic conditions of the liquid steel. The molten pool can thus obtain sufficient stirring force, and the outer cluster should not be arranged close to the furnace wall, to avoid unnecessary energy dissipation. Besides, the weak stirring zones inside the bath can be reduced by raising the blowing gas intensity, and the proportions of the proper scheme are 0.84, 0.74, 0.69, and 0.67 under four different operating conditions, respectively.

... The Rosin-Rammler (R-R) distribution is widely used in the literature to model the particle size distribution of dust that results from the crushing of solids such as coal, rocks, and other materials and has been empirically validated numerous times [41]. The authors in [30] use the R-R distribution in their calculations of the extinction coefficient of coal dust in underground mines. ...

Reliable wireless communications are crucial for ensuring workers' safety in underground tunnels and mines. Visible light communications (VLC) have been proposed as auxiliary systems for short-range wireless communications in underground environments due to their seamless availability, immunity to electromagnetic interference, and illumination capabilities. Although multiple VLC channel models have been proposed for underground mines (UM) so far, none of these models have considered the wavelength dependence of the underground mining VLC channel (UM-VLC). In this paper, we propose a single-input, single-output (SISO), wavelength-dependent UM-VLC channel model considering the wavelength dependence of the light source, reflections, light scattering, and the attenuation due to dust and the photodetector. Since wavelength dependence allows us to model VLC systems more accurately with color-based modulation, such as color-shift keying (CSK), we also propose a wavelength-dependent CSK-based UM-VLC channel model. We define a simulation scenario in an underground mine roadway and calculate the received power, channel impulse response (CIR), signal-to-noise ratio (SNR), signal-to-interference ratio (SIR), root mean square (RMS) delay, and bit error rate (BER). For comparison, we also calculate these parameters for a monochromatic state-of-the-art UM-VLC channel and use it as a reference channel. We find that the inclusion of wavelength-dependency in CSK-based UM-VLC systems plays a significant role in their performance, introducing color distortion that the color calibration algorithm defined in the IEEE 802.15.7 VLC standard finds harder to revert than the linear color distortion induced by monochromatic CSK channels.

... Mott (1947) proposed the first classical dynamic fragmentation model, assuming fragmentation to be a process of random fractures, and concluded that the resulting fragment sizes were controlled by momentum diffusion (Mott model) from each crack. Based on the Mott model, the fragmentation process and its fragment size distribution have been extensively investigated and described with geometrical/mathematical random statistical distribution functions, such as the logarithmico-normal (lognormal) distribution (Epstein, 1947), the Lienau distribution (Lienau, 1936), and the Weibull distribution (Brown and Wohletz, 1995). These statistical distribution analyses described the experimental results very well. ...

The grain size effect on the shaped charge jet (SCJ) stretching process was analytically formulated and experimentally verified by penetration tests. The present analytical model predicts an optimum grain size for the SCJ performance, deduced from the concurrent effect of grain size on flow stress, strain rate sensitivity, and surface roughness. Specifically, reducing the grain size will improve the initial surface roughness and decrease the initial perturbation amplitude, favoring the SCJ stretching. On the other hand, the strain rate sensitivity and flow stress for copper increase with the decrease of grain size, facilitating the perturbation growth and leading to a premature breakup. Thus, the present analytical model predicts that the optimum grain size of the SCJ is about 1–5 μm. The penetration test verified that the shaped charge liner with an average grain size of about 3.6 ± 2.5 μm exhibited the largest penetration depth. The consistent results from the analytical model and the penetration experiments certify the feasibility of the present analytical model on the SCJ performance.

... Since then, dynamic fragment characterisation has been a subject of considerable research interest, and researchers have used a variety of statistical distributions in evaluating average fragment size. Some of the common statistical distributions used are: exponential (Grady and Kipp 1985), log-normal (Ishii and Matsushita 1992), power-law (Oddershede et al. 1993), Weibull (Brown and Wohletz 1995), Swebrec (Ouchterlony 2005) and Gilvarry (Sil'vestrov 2004). Another group of researchers have developed models based on principles of energy balance (Glenn and Chudnovsky 1986;Grady 1982;Yew and Taylor 1994). ...

The aim of this study is to understand the strength behaviour and fragment size of rocks during indirect, quasi-static and dynamic tensile tests. Four rocks with different lithological characteristics, namely: basalt, granite, sandstone, and marble were selected for this study. Brazilian disc experiments were performed over a range of strain rates from ~ 10–5 /s to 2.7 × 10¹ /s using a hydraulic loading frame and a split Hopkinson bar. Over the range of strain rates, our measurements of dynamic strength increase are in good agreement with the universal theoretical scaling relationship of (Kimberley et al., Acta Mater 61:3509–3521, 2013). Dynamic fragmentation during split tension mode failure has received little attention, and in the present study, we determine the fragment size distribution based on the experimentally fragmented specimens. The fragments fall into two distinct groups based on the nature of failure: coarser primary fragments, and finer secondary fragments. The degree of fragmentation is assessed in terms of characteristic strain rate and is compared with existing theoretical tensile fragmentation models. The average size of the secondary fragments has a strong strain rate dependency over the entire testing range, while the primary fragment size is less sensitive at lower strain rates. Marble and sandstone are found to generate more pulverised secondary debris when compared to basalt and granite. Furthermore, the mean fragment sizes of primary and secondary fragments are well described by a power-law function of strain rate.

... The Weibull distribution finds application in systems whose dynamical evolution is driven by fragmentation and sequential branching [28,29]. Since the evolution of the system in hadron and heavy-ion collisions is dominated by a perturbative QCD-based parton cascade model, we can apply the q-Weibull distribution to study particle spectra. ...

Transverse momentum (p_T) spectra are of prime importance in order to extract crucial information about the evolution dynamics of the system of particles produced in collider experiments. In this work, the transverse momentum spectra of charged hadrons produced in Pb-Pb collisions at 5.02 TeV have been analyzed using different distribution functions in order to gain strong insight into the information that can be extracted from the spectra. We also discuss the applicability of the unified statistical framework to the spectra of charged hadrons at 5.02 TeV.

... The FSD in case G1S2C1 also fits well with a Weibull distribution with a shape factor of 2.5. Brown and Wohletz (1995) showed that at late stages, when the breakup and flocculation processes balance each other, the Weibull distribution is the outcome of a power-law breakup of a large floc into smaller particles. Our results suggest that fragmentation and reflocculation occur constantly at the equilibrium stage. ...

A two‐phase Euler‐Lagrangian framework was implemented to investigate the flocculation dynamics of cohesive sediment in isotropic turbulence. The primary particles are modeled as sticky soft spheres using the discrete element method (DEM). The attractive van der Waals forces are modeled by the DLVO theory and the JKR adhesive contact theory. The near steady state equilibrium floc size distribution (FSD) strongly depends on the ratio of the turbulent shear to the floc strength (or particle stickiness). When turbulence is strong, a single peak around the Kolmogorov length scale appears in the FSD, and the distribution fits the Weibull distribution well. A power‐law floc size distribution develops when the floc strength is greater than the destabilizing effect of turbulent shear. Sediment concentration does not significantly affect the shape of FSD or the average floc size. The average apparent floc settling velocity Ws increases with the average floc size. Fractal dimension of flocs decreases with the floc size following a power‐law relation for large flocs. Settling velocity of flocs as a function of floc size also follows a power‐law relation. Deviation from the power‐law relationship is found for large flocs because of their porous nature. At equilibrium stage, the construction by aggregation is balanced with the destruction by breakup, and the construction by breakup is balanced with the destruction by aggregation. The aggregation kernel by turbulent shear and power‐law breakup kernel can describe the dynamics reasonably well for the flocculation of cohesive sediment in homogeneous isotropic turbulence.

... It is worth noting that this distribution rightly bears Weibull's name, but it was independently described by Fréchet and by Rosin, Rammler and Sperling (Brown and Wohletz 1995; Cook and DelRio 2019). The same probability distribution can also be derived from models other than weakest-link theory, such as the shot-noise model, the hazard rate approach, and the broken-stick model (Rinne 2008). ...

... Initially, grain size distributions (GSDs) were fitted empirically using log-normal, Rosin–Rammler and Weibull distributions (Brown and Wohletz 1995; Spieler et al. 2003; Mackaman-Lofland et al. 2014). The sequential fragmentation theory (SFT; Brown 1989; Wohletz et al. 1989) and the application of fractal theory to size distributions of rock fragments, soils, and natural and experimental pyroclast distributions (Turcotte 1986; Perfect 1997; Perugini and Kueppers 2012) attempted to overcome this empiricism by providing a more physical basis for these distributions. ...

... This method, however, assumes that the GSD follows a log-normal distribution; in other words, that the GSD is normally distributed on the φ-scale. Alternatively, the GSD can be described using a Weibull or Rosin–Rammler distribution (Rosin and Rammler, 1933; Weibull, 1951; Brown and Wohletz, 1995), from which shape and scale parameters can be derived ...

Around once every millennium, a large magnitude explosive eruption occurs on Earth, dispersing volcanic ash across millions of square kilometres. Volcanic ash poses a uniquely wide range of hazards to human health, infrastructure and the environment, the impacts of which are felt from close to the source to more than 1000 km from the volcano. Importantly, for large eruptions the removal of the ash is almost impossible, which means the material remains and is remobilised in the environment, posing secondary hazards for hundreds of years after the initial eruption.
Here I present a study of the processes that produce, transport and deposit ash from large eruptions. I use the ~7.7 ka climactic eruption of Mount Mazama as a case study because the tephra was predominantly deposited on land, facilitating widespread data collection. I collate locations where the Mazama tephra has been recorded to produce a new isopach map and an estimate of the total erupted volume (176 km^3 bulk, or 61 km^3 Dense-Rock-Equivalent). The compilation of tephra thickness data also showed how the Mazama tephra deposit has been remobilised through time, exemplifying the uncertainties associated with field data. Remobilised deposits also provide insight into the types of secondary ash hazards that persist following large magnitude eruptions.
I also investigate the physical and chemical properties of Mazama ash to provide insight into eruptive processes such as co-PDC plumes and distal ash transport. I determine from the composition of Fe-Ti oxides that the distal ash can be attributed to the later stages of the climactic Mazama eruption. I also observe that the Grain Size Distribution (GSD) of the distal Mazama tephra is remarkably stable, a trend that is observed for other large distal deposits.
This study also investigates the methods we use to analyse grain size in volcanology and outlines a new protocol for measuring the size and shape of volcanic ash using Dynamic Image Analysis (DIA). The benefits of DIA include the capacity for simultaneous particle size and shape characterisation, and the insight into particle density if used in parallel with sieve analysis.
The new estimates of the total erupted volume and the distal GSD of the Mazama eruption were integrated with Ash3D, a numerical model of volcanic ash transport and deposition, to simulate the eruption and test the sensitivity of Ash3D to uncertainty in the eruption source parameters. The results stress the need to integrate radial spreading in the umbrella cloud region with advection-diffusion models when simulating ash transport during large magnitude eruptions. Furthermore, they highlight significant knowledge gaps regarding the deposition of very fine ash during any scale of eruption. This underscores the benefits of studying fine-ash deposition using the deposits from large eruptions, where significant depositional areas and ash volumes facilitate extensive data collection and model testing.

... We consider a particle bed consisting of spherical rigid particles. The particle-size distribution is assumed to follow a Weibull distribution, as frequently found in several powder-based industrial applications [6]. The distribution of radii r = d/2 is characterized by a shape parameter a and a scale parameter b. ...
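Weibull-distributed radii like those described above can be generated by inverse-CDF sampling. A minimal sketch, with hypothetical shape and scale values (not taken from the study):

```python
import math
import random

def weibull_radius(a, b, u):
    """Inverse-CDF sample of a Weibull(shape=a, scale=b) radius.
    CDF: F(r) = 1 - exp(-(r/b)**a), so r = b * (-ln(1-u))**(1/a)."""
    return b * (-math.log(1.0 - u)) ** (1.0 / a)

rng = random.Random(42)
a, b = 2.5, 1.0e-4          # hypothetical shape / scale (metres) for a powder bed
radii = [weibull_radius(a, b, rng.random()) for _ in range(200_000)]

mean = sum(radii) / len(radii)
expected = b * math.gamma(1.0 + 1.0 / a)   # analytic Weibull mean
print(abs(mean - expected) / expected < 0.01)  # True
```

The sample mean converging on b·Γ(1 + 1/a) is a quick sanity check that the sampler matches the intended distribution.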

Caking in amorphous powders compromises their quality during storage. Individual particles absorb water vapor, which changes their viscosity and promotes the formation of sinter bridges. Lumps of particles grow and eventually span the whole powder, affecting the mechanical properties and quality of the powder. Previous studies of the caking dynamics largely neglect the role of spatial heterogeneities in the particle-size distribution. We perform particle-based simulations and show that, if caking is mapped into a percolation transition, the role of spatial heterogeneities is well captured by the corresponding percolation threshold. Since this threshold only depends on the geometry of the granular assembly, we can separate the contribution of the spatial heterogeneities and of the individual particle properties. This enables a rational approach for interpreting and mitigating caking propensity of commercial products consisting of particle species with different particle size distributions. We corroborate the numerical and analytical predictions with experiments.

... Similar models were proposed by Steacy and Sammis (1991) and Palmer and Sanderson (1991). A different model, by Brown and Wohletz (1995), proposes a self-similar model of breakage (Fig. 1e) and arrives at a Weibull distribution; this approach is a subset of the Filippov approach. ...

The production of breccias and cataclasites is commonly proposed to result in power-law or log-normal probability distributions for fragment (grain) size. We show that in both natural and experimental examples, the common best fit probability distributions for the complete distributions are members of the Generalised Gamma (GG), Extreme Value (GEV) and Pareto (GP) families; power-law and log-normal distributions are commonly, but not always, poor fits to the data. A hierarchical sequence, GG → GEV → GP, emerges as the sample mean of the fragment size decreases. The physical foundations (self-similar fragmentation, collisional fragmentation, shattering) for these distributions are discussed. Particularly important is the shattering continuous phase transition that results in the simultaneous development of both coarse fragments and ultra-fine particles (dust). This phase transition leads to Generalised Pareto fragment size distributions for the coarse fragments. Also included is a discussion of the relations between fragment size distribution, processes and deformation history in the context of monomineralic rocks. The overall reported size distributions are compatible with theoretical developments but the topic would benefit from observations and experiments conducted with the theories in mind.

As the intensity and depth of coal mining grow year by year, coal seam gas pressure increases and stope structures become more complex, which can easily cause coal and gas outbursts. During a coal and gas outburst, a large amount of coal is broken and ejected, seriously threatening the safety of workers and coal mine production. Therefore, a multifunctional coal and gas outburst physical simulation test system was used to carry out three outburst tests under different gas pressures to study the particle size distributions and fragmentation characteristics of the ejected coal. The results showed that the relative intensity of the outburst increased with gas pressure, but at a decreasing rate. Gas pressure also played a role in promoting coal crushing. For the crushing products, the R–R (Rosin–Rammler) distribution model, which had a high COD (coefficient of determination), was used to calculate the comminution energy at 0.35 MPa, while the fractal distribution model with high COD was used at 0.85 MPa and 2.00 MPa. As gas pressure increased, the basic shape of the R–R model curve remained unchanged, the probability density curve of the fractal model changed from concave to nearly straight and then to convex, and the basic shape of the cumulative distribution curve of the fractal model remained constant. The values of α (uniformity coefficient) and xe (characteristic particle size) governed the R–R model, and the values of Df (fractal dimension) and xmax (maximum particle size) governed the fractal model. Within a certain error range, the comminution energy could be approximated. The comminution energy increased with gas pressure, and the potential energy of the crushing product decreased with the value of n, which is related to the crushing mechanism. There was a strong linear relationship between the relative intensity of the outburst and the comminution coefficient.
The combination of experiments and machine learning provided a new direction for outburst prediction and prevention at coal mine sites.

Relevance. The study is motivated by the need to develop and optimize the mathematical apparatus for processing the results of laboratory experiments and to increase the adequacy of the results obtained. Aim. To create alternative methods for finding the parameters of the Szyszkowski and Rosin–Rammler dependencies, which govern surfactant adsorption from an aqueous solution on solid adsorbents and the settling of suspended particles in sedimentation analysis. Methods. The main method for determining the parameters of the two-parameter dependencies is the least squares method. The standard approach is based on finding the minimum of a function of two variables by computational methods of nonlinear programming. The equations obtained by setting the derivatives of the objective function with respect to each parameter to zero are used as necessary conditions for the minimum. The paper considers alternative approaches that yield explicit formulas or reduce the problem to the solution of a single transcendental equation. Results. For the two-parameter Szyszkowski and Rosin–Rammler dependencies, alternative approaches for determining the unknown parameters are proposed. In the standard approach, the problem is solved by numerical minimization of a function of two variables using nonlinear programming methods. The authors propose an approach in which the Szyszkowski and Rosin–Rammler equations are subjected to equivalent transformations so that the necessary minimum conditions yield a linear equation with respect to at least one of the required parameters. This simplifies the calculations: one transcendental equation is solved numerically, and the second parameter is then determined by an explicit formula. For the Rosin–Rammler dependence, in one of the proposed variants, explicit formulas were obtained for both parameters.
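One common equivalent transformation of the Rosin–Rammler equation, not necessarily the authors' exact scheme, linearizes the cumulative form F(x) = 1 − exp(−(x/xe)^n) via ln(−ln(1 − F)) = n·ln(x) − n·ln(xe), after which both parameters follow from explicit least-squares formulas:

```python
import math

def rosin_rammler_fit(x, F):
    """Fit F(x) = 1 - exp(-(x/xe)**n) by linearizing:
    ln(-ln(1 - F)) = n*ln(x) - n*ln(xe), then ordinary least squares."""
    X = [math.log(xi) for xi in x]
    Y = [math.log(-math.log(1.0 - Fi)) for Fi in F]
    xbar = sum(X) / len(X)
    ybar = sum(Y) / len(Y)
    n = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(X, Y)) / \
        sum((xi - xbar) ** 2 for xi in X)          # slope = n
    xe = math.exp(xbar - ybar / n)                 # from intercept = -n*ln(xe)
    return n, xe

# Exact R-R data with n = 1.5, xe = 0.05 should be recovered
n_true, xe_true = 1.5, 0.05
xs = [0.01, 0.03, 0.05, 0.1]
Fs = [1 - math.exp(-(xi / xe_true) ** n_true) for xi in xs]
n, xe = rosin_rammler_fit(xs, Fs)
print(round(n, 6), round(xe, 6))  # 1.5 0.05
```

This illustrates why explicit formulas are possible here: after the transformation the model is linear in n and n·ln(xe), so no iterative minimization is required.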

The scalar dispersion of a ground-level point-source plume in a smooth-wall turbulent boundary layer is experimentally investigated using simultaneous particle image velocimetry and planar laser-induced fluorescence techniques. In the near-source region, the viscous sublayer is observed to trap dye, while in the far field, the half-width, vertical profiles and peak decay of the mean concentration and concentration variance exhibit self-similar behaviour and collapse with empirical relations. Full two-dimensional maps of the turbulent scalar fluxes show a net transport direction of upward and towards the incoming flow, with the vertical profiles collapsing well with Weibull-type exponential functions and the decay of peaks following power laws. Using the first-order gradient transport to model the turbulent scalar fluxes, maps of the anisotropic turbulent diffusivity tensor and an effective turbulent diffusivity coefficient are calculated. The streamwise and wall-normal turbulent scalar fluxes are driven dominantly by the wall-normal concentration gradient. The turbulent Schmidt number, relating the turbulent diffusivity and the turbulent (eddy) viscosity calculated using the Boussinesq hypothesis, varies with wall-normal position with values of the order of unity in the logarithmic layer.

The importance of particles as structural components in a range of engineering applications is described, providing an introduction and motivations for the work to follow. The geometrical aspects of particles in the load‐bearing configuration of diametral compression are defined and used in a description of the observed failure modes of loaded particles. Images of particles commonly and not‐so‐commonly encountered in everyday life are presented and discussed. The geometry of particle loading is followed by an analysis of particle shape. A discrete Fourier analysis method is used to describe a range of particle shapes—multi‐lobed, rough, angular, or variable—and the effects of particle shape on estimates of particle size are considered. The similarities of the analyses of particle shape and the following analyses of particle strength are discussed.

This chapter describes in clear, simple operational terms the numerical basis of how the characteristics of strength distributions (which are what is measured) can be used to infer the characteristics of populations of strength‐controlling flaws (which are usually not observed) through the choice of sample and component sizes (which are what can be controlled). This is reverse analysis, focused on answering the question "what can we learn?" The chapter is practical and the examples near‐ideal, as they are selected from published works to illustrate specific points. The terms and inter‐relationships defined in Chapter will be used extensively. The analysis is applicable to all brittle components.

Building on the last regional study carried out in Central America by Benito et al. (2012) within the framework of the RESIS II project, and the update of seismogenic zones for Central America by Alvarado et al. (2017), new information has been incorporated into the regional catalogue for Central America up to the year 2020. To this end, seismic catalogues from 14 sources of information, both local and regional, were compiled, with the aim of obtaining a catalogue dense enough for subsequent use in seismic hazard calculations for Central America within the framework of the KUK-AHPAN project.

Rincón de la Vieja (9°58’36.16’’N, 83°51’11.27’’W) is the only active volcano of the Guanacaste cordillera, with an elevation of 1916 m a.s.l. and a relief of 1500 m. It is a complex stratovolcanic massif with a dozen summit craters and cones, as well as an avalanche caldera. Its volcanic edifice covers an area of 400 km2 with a volume of 118 km3 and an age ≥ 564,000 years (= 564 ka), and it grew after the collapse of at least two subsidence calderas with associated ignimbrites. Rincón de la Vieja has a relatively well-studied Holocene record of tephra deposits, lava flows and lahars, as well as historical phreatomagmatic and phreatic eruptions (Surtseyan and phreato-Surtseyan) with subordinate Strombolian activity. Historical eruptions have so far been small (VEI 0-3), with few effects on the population, the livestock economy (dairy farming), agriculture and tourism, and none on the geothermal plants and electricity production. This report updates and analyses the volcanic hazards (direct and indirect). The most likely consequences in the medium term are: in the proximal field (< 10 km radius), tephra fall, gases, acid rain and aerosols; ballistics and pyroclastic density currents have been restricted in historical eruptions to ≤ 2 km. In the medial zone (< 10 km), lahars (hot syn-eruptive and post-eruptive) are usually associated with the eruptions, running mainly down the N flank, while acid rain and ash remain an inherent hazard, particularly along the W and SW dispersion axis. Several prehistoric lava-flow events have been recorded, the last significant one about 5 ka ago, as well as at least one sector collapse of unestablished age. The hazard from the formation of pyroclastic cones and lava flows is very low.
A Plinian eruption of 300 AD stands out, with major effects from pumice fall towards the WSW and pyroclastic flows down the N flank with runouts of at least 12 km. Since more than a millennium has passed since this major eruptive event, its probability is low. Lahars (volcanic, co-seismic and secondary) and pyroclastic flows represent a high hazard, particularly for the illegal tourists who climb to the Active crater and, to a relatively lesser degree, for the inhabitants of the N flank. Based on the eruptive record (best documented for the last 6000 years), a significant eruptive event (Volcanic Explosivity Index, VEI 3-4) can be inferred perhaps by the middle or end of the present century, and a larger one (VEI 5) in the coming centuries. These records make it possible to analyse the hazards and risks to which the geothermal works are exposed, related to ash fall, pyroclastic surges, hot lahars and larger events, such as Plinian and sub-Plinian eruptions, which could have a recurrence period of about 600 years. The associated hazard and risk for the geothermal generation works is very low, though not nil. In addition, the analysis of mudflows (lahars) with the LAHARZ program indicates that, in general, the hazard to the ICE generation works from lahars is very low, provided the factors necessary for this type of event are considered. A series of recommendations is given at the end of the work, together with a synthesis of aspects related to risk management. Some suggestions are provided for consideration in territorial planning and land-use regulation processes, tourism, and the management and information of Rincón de la Vieja National Park and its neighbouring areas.

Phenomenal new observations from Earth-based telescopes and Mars-based orbiters, landers, and rovers have dramatically advanced our understanding of the past environments on Mars. These include the first global-scale infrared and reflectance spectroscopic maps of the surface, leading to the discovery of key minerals indicative of specific past climate conditions; the discovery of large reservoirs of subsurface water ice; and the detailed in situ roving investigations of three new landing sites. This is an important new overview of the compositional and mineralogic properties of Mars since the last major study published in 1992. An exciting resource for all researchers and students in planetary science, astronomy, space exploration, planetary geology, and planetary geochemistry, in which specialized terms are explained so as to be easily understood by all who are just entering the field.

The well-known Weibull statistical distribution function has been employed to study the multiplicity distribution and multiplicity moments of the backward shower particles produced in total disintegration events of O–AgBr, Ne–AgBr and Si–AgBr interactions at (4.1–4.5) A GeV/c. Multiplicity fluctuations of backward shower particles in total disintegration events were also investigated by the method of scaled variance and the forward–backward asymmetry parameter.

Because slime water grain size variations cannot be accurately assessed in coal preparation plants during production, a method for determining the particle size distribution (PSD) based on Rosin–Rammler–Sperling–Bennett (RRSB) characteristic parameters is proposed. This study performed industrial real-time sampling of coal slime water at four test points in a coal preparation plant, using an online sampler and a mechanical-pressing particle size measuring device. The measurement accuracy test confirmed that production requirements were met. Taking the feed of the concentration tank as an example, three distribution laws were tested, and the results showed that the PSD fit the RRSB distribution function well, with a characteristic parameter n of 0.074 and a critical particle size De of 0.047 mm. In addition, after changing the parameters through gradation, n reflected the change of the dominant particle size range, while De indicated the direction of the particle size change. This successful application shows that employing a set of characteristic parameters based on the RRSB PSD, obtained via an online granularity analysis system analyzing slurry particle size trends, has good potential for addressing the problem at hand.
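The RRSB description can be sketched as a cumulative passing curve; by construction, De is the size at which 63.2% of the mass passes (36.8% is retained). A minimal evaluation using the n and De values quoted above:

```python
import math

def rrsb_passing(x, n, De):
    """Cumulative mass fraction finer than size x under the RRSB law:
    F(x) = 1 - exp(-(x / De)**n); De is the size with 63.2% passing."""
    return 1.0 - math.exp(-(x / De) ** n)

# Characteristic parameters reported for the concentration-tank feed
n, De = 0.074, 0.047   # De in mm

# At x = De the passing fraction is 1 - 1/e regardless of n
print(round(rrsb_passing(De, n, De), 4))  # 0.6321
```

The check at x = De illustrates why De is called the characteristic (critical) particle size: it anchors the curve independently of the uniformity coefficient n.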

Blasted block size prediction is a complex non-stationary, nonlinear problem; the contribution of the factors affecting the results varies with external conditions. Studies in a single environment are not universally applicable, so establishing a blasted block size prediction model that fuses multiple algorithms and gives reliable predictions is the most urgent problem to be solved. In this study, a method is proposed that applies to different regions and rock conditions. To achieve grouped prediction of blasted block size, this study first used hierarchical clustering to cluster the data from different areas, then used a random forest to establish a data-grouping discriminant model based on the grouping results, and used a back-propagation neural network with a genetic algorithm (GA-BP) to establish a blasted block size prediction model for each group of block size data separately. The results show that (1) block size data with different properties can be grouped according to the elastic modulus; (2) the grouping discriminant model correctly groups the data; (3) compared with the ungrouped GA-BP neural network prediction model, this model has a higher coefficient of determination (R² = 0.982) and smaller root mean square error (RMSE = 0.2) and mean relative error (MRE = 4.857), which verifies the correctness of grouped prediction according to the elastic modulus; (4) compared with multiple regression, least squares support vector machines (LSSVM), and BP neural networks, the model outperforms the other models in R², RMSE, and MRE, and its predictions are more accurate.

The Orbiter High Resolution Camera (OHRC) is a very high spatial resolution panchromatic camera (0.45–0.70 μm) on board the Chandrayaan-2 orbiter. Its spatial resolution of 0.25 m from 100 km altitude is the highest among all lunar orbiter missions. A simple crater with a substantial boulder population was observed in an OHRC image of a region near Boguslawsky E crater. Boulders are distinctly seen in this image because of the high spatial resolution of 0.28 m and the low sun elevation angle (6°), which enhanced the boulders' shadows. We have identified and mapped >2000 boulders around this young unnamed simple crater (74.9216° S, 54.5148° E). It is observed that the OHRC is capable of extending the lower size limit for identifiable boulders below 1 m. The distributions of the mapped boulders were studied and compared with previous studies. It was found that the coefficient values estimated by fitting power laws to various distributions, such as size-frequency, size-range, etc., are well within the ranges reported in the literature for craters distributed on the lunar surface around the landing sites. A Weibull distribution was also fitted to the data, and the fitting coefficients were compared with values obtained in similar studies. The crater age was estimated to be in the range of 50–90 Ma using empirical relations and comparison with the areal density of other craters near lunar landing sites. This study also provides a glimpse of the low-light imaging capability of the OHRC, showing detail inside the shadowed regions, which were illuminated by light reflected from adjoining areas.

The paper describes an experimental study of the influence of the composition and mechanical properties (ultimate and yield strength, reduction of area, impact toughness, and the fraction of fibrous fracture) of brittle, quasi-brittle, and ductile steels on the parameters of dynamic fragmentation and fracture mechanisms. The experiment showed that simple exponential functions could be used to describe cumulative mass distributions of shell fragments from the materials under study. The exponent of these functions is the reciprocal characteristic mass (1/μ) and it decreases as the fraction of fibrous fracture of shell fragments increases. It serves as a dynamic fracture criterion that is sensitive to changes in the composition and mechanical properties of the material. The relations between the fragmentation parameter 1/μ and mechanical properties and composition of shell materials were obtained. Reaching the critical value of the fragmentation criterion (1/μc) leads to changing the fracture mechanism from “ductile” to “brittle” mode which arises in a material with ∼40% of fibrous fracture at the fracture surface of impact specimens and ∼50% of shear fracture of fragments.
The authors studied the structure and features of ductile and brittle fracture surfaces of shell fragments and compared the statistical distributions widely used to describe the fragmentation process (Mott, Weibull, and Grady–Odintsov), as well as the dependence of their parameters on material properties. The study did not reveal any advantages of the parameters of these distributions over the 1/μ parameter of the exponential function. The analysis of literature data on the fragmentation of shells of different steels showed that they do not contradict the obtained results, and the use of simple exponential distributions makes it possible to reveal the physical meaning of the parameters of the statistical distributions. Several common patterns of fragmentation of solids are discussed.

Random breakage can be defined as a breakage pattern independent of the stressing environment and the nature of the broken particle. However, the relevant literature gives evidence against the random breakage of particles. A simple way to detect random breakage is to evaluate the fragment (progeny) size distributions. Such distributions are estimated analytically or through numerical models. The latter models generally treat random breakage as a geometric statistical problem with prior assumptions on particle/flaw geometry and the external stressing environment, which may violate the randomness of the breakage process. This study presents a random-breakage algorithm that does not require such assumptions. The simulated progeny size distributions were compared with the experimental size distributions from impact loading (drop-weight) tests. Random breakage events should yield number-weighted size distributions that are well fitted by the lognormal distribution function. A mass-weighted (sieve) size distribution function is also presented for random breakage. Nevertheless, the results refute the random breakage of clinker and other brittle particles under impact loading. Instead, the sieve size distribution of fragments may evolve through crack branching/merging and Poissonian crack nucleation processes.

A study of the distribution of the value of traded goods under the Harmonized System is presented. The ramifications of this classification system are found to exhibit an approximate power-law decay, indicating complexity and self-organization in the nomenclature of traded merchandises. For almost all countries with available data, log-values of annually imported and exported goods are well described by three-parameter Weibull distributions. This distribution commonly appears in particle size distributions, suggesting a connection between random fragmentation processes and the mechanisms behind the international trade of merchandises. Analysis of the resulting values for the fitting parameters from 1995 to 2018 shows a nearly constant linear relationship between the parameters of the Weibull distributions, so that, for each country, the distribution of log-values can be approximately characterized by a single shape parameter. The empirical findings of this paper suggest that specialization in trading a constant set of goods prevents the values of all traded merchandises from growing or decreasing simultaneously.

The assumption that distributions of mass versus size interval for fragmented materials fit the lognormal distribution is empirically based and has historical roots in the late 19th century. Other often-used distributions (e.g., Rosin-Rammler, Weibull) are also empirical and have the general form for mass per size interval: n(l) = k l^α exp(−l^β), where n(l) represents the number of particles of diameter l, l is the normalized particle diameter, and k, α, and β are constants. We describe and extend the sequential fragmentation distribution to include transport effects upon observed volcanic ash size distributions. The sequential fragmentation/transport (SFT) distribution is also of the above mathematical form, but it has a physical basis rather than an empirical one. The SFT model applies to a particle-mass distribution formed by a sequence of fragmentation (comminution) and transport (size sorting) events acting upon an initial mass m′: n(x, m) = C ∫∫ n(x′, m′) p(ξ) dx′ dm′, where x′ denotes spatial location along a linear axis, C is a constant, and integration is performed over distance from an origin to the sample location and over mass limits from 0 to m.
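The general empirical form n(l) = k l^α exp(−l^β) is easy to evaluate directly. The snippet below (with illustrative parameter values only) computes n(l) and the cumulative mass finer than a given size by trapezoidal quadrature; the Rosin-Rammler/Weibull case corresponds, up to normalization, to α = β − 1.

```python
import math

def mass_per_size(l, k=1.0, alpha=0.5, beta=1.5):
    """Empirical mass-per-size-interval form n(l) = k * l**alpha * exp(-l**beta).
    With alpha = beta - 1 this reduces (up to normalization) to the
    Rosin-Rammler/Weibull form."""
    return k * l ** alpha * math.exp(-(l ** beta))

def cumulative_mass(L, n_steps=1000, **params):
    """Mass finer than L: trapezoidal quadrature of n(l) on [0, L]."""
    h = L / n_steps
    ys = [mass_per_size(i * h, **params) for i in range(n_steps + 1)]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))
```

The cumulative curve is monotone in L, mirroring the integral (Rosin-Rammler) form of the differential (Weibull) distribution.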

An analysis of fragmentation due to dynamic stress loading is presented which provides analytic functions for the distributions in fragment sizes. The analysis is restricted to one-dimensional bodies under uniform tensile loading. Concepts of survival statistics are used to account for spatially random fracture nucleation. Fragment size distribution curves for both brittle and ductile fracture are derived, and the curve for the latter is compared with experimental data. Fragment distribution curves are shown to depend on both material deformation properties and loading conditions.

Suppose we have before us a large number of measurements. They may either be all approximations to the true value of a single unknown quantity, or may refer to the several members of a large class. The measurements will disagree among themselves, but on arranging them in order of size they show a tendency to cluster round some medium value. We are naturally inclined to infer that the true value of the unknown, or typical member of the class, is not far from this value. How to define and determine the appropriate medium in various classes of measurement becomes thus a natural object of inquiry. On examination we find that there is no strict and final criterion applicable to all cases.

Carbon black exists as primary aggregates composed of primary particles fused together. We have measured the size, anisometry and bulkiness of these primary aggregates by electron microscopy. These parameters, which define ‘structure’, show a wide spread within a given sample. Weight-average parameters are calculated with the aid of a relation between projected area and number of particles per aggregate, based on computer-simulated flocs. As expected, the average values for all three parameters are high for acetylene black and low for thermal black. Furnace blacks give intermediate values, with aggregate size being the predominant parameter related to ‘structure’, at comparable levels of particle size.

The structure of soot agglomerates formed by the combustion of acetylene in a coannular diffusion burner is studied. Structural data from electron micrographs were obtained by two methods, particle counting with the aid of stereopairs for small clusters and electronic digitization with high-resolution image processing, used for the larger agglomerates. Langevin dynamics computer simulations based on free molecular motion were performed as an aid to interpreting the experimental results.

This paper discusses the applicability of statistics to a wide field of problems. Examples of simple and complex distributions are given.

Comminution is a statistical process; the particle-size composition of the milled material can therefore be described with the methods of mathematical statistics. After discussing how the concepts and graphical representations of collective (population) theory carry over to the collective "milled material", the typical particle-size distribution curves of milled material are treated, from coarse comminution in jaw crushers and roller mills, through medium-fine comminution in centrifugal mills, to superfine grinding in ball mills, and the analytical equations for the individual curve types are given.

A theory of sequential fragmentation is presented that describes a cascade of fragmentation and refragmentation, i.e., continued comminution. It is shown that the theory reproduces one of the two major empirical descriptors that have traditionally been used to describe the mass distributions from fragmentation experiments. Additional experimental evidence is presented to further validate the theory, and includes explosive aerosolization, grinding in a ball mill, and simulated volcanic action. Also presented are some astronomical applications of the theory, including infalling extraterrestrial material, siderophile concentrations in black magnetic spherules of possible meteoritic origin, the asteroids, the distribution of galactic masses, and the initial mass function of stars.
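The cascade of fragmentation and refragmentation can be caricatured in a few lines (a toy sketch under our own assumptions, not the theory itself): any fragment heavier than a size threshold keeps breaking at a uniformly random point until every fragment is below the threshold, mimicking continued comminution.

```python
import random

def continued_comminution(threshold=0.01, seed=1, max_rounds=200):
    """Toy continued-comminution cascade: each round splits every
    fragment heavier than `threshold` at a uniform random point;
    grinding stops when all fragments are below the threshold
    (max_rounds is a safety cap)."""
    rng = random.Random(seed)
    fragments = [1.0]
    for _ in range(max_rounds):
        if all(m <= threshold for m in fragments):
            break
        nxt = []
        for m in fragments:
            if m > threshold:
                r = rng.random()
                nxt.extend((m * r, m * (1.0 - r)))
            else:
                nxt.append(m)
        fragments = nxt
    return fragments
```

Total mass is conserved throughout, and the surviving fragment-mass histogram shows the heavy small-size tail characteristic of repeated comminution.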

The theoretical results of Gilvarry for the size distribution of the fragments in single fracture have been verified experimentally by fracturing spherical glass specimens under compression. The fragments were contained by a gelatin matrix to inhibit secondary fracture and thus make conditions conform as closely as possible to single fracture. Experimental values of the probability of fracture as obtained by sieve analysis show the predicted linear variation with the mean dimension x of the particles, over reasonably large intermediate ranges of the variables. It is shown that a logarithmic‐normal distribution does not represent the experimental results. The over‐all data exhibit three local maxima in the differential probability of fracture as a function of x, whereas the theory permits only two. Agreement in the number of peaks is obtained by subtracting the contribution to the over‐all probability of those fragments containing original surface of the specimen, which yields the true probability considered in the theory. In this manner, reasonably complete agreement between theory and experiment for single fracture is obtained. For plural fracture (carried out without use of gelatin), two additional peaks exist in the curve of the over‐all differential probability vs x, as compared to the case for single fracture. The theory of Gilvarry is confirmed down to a fragment dimension of at least 1 μ by means of an electrical counting instrument, and checked by direct microscopic sizing to 5 μ. The results yield numerical values of internal flaw densities, and thus provide a tool to study the distribution of Griffith flaws existing internally in a solid.

Analytical solutions to the fragmentation equation are presented for specific rates of breakage and primary breakage distribution functions often used to correlate comminution data. These solutions, obtained by similarity arguments, compare favorably with the experimental data of Austin et al., and suggest that correlation of data via similarity techniques may facilitate determination of function parameters.
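A discrete form of the fragmentation (batch-grinding) equation described above can also be integrated numerically. The sketch below is a generic explicit-Euler step with assumed, illustrative breakage-rate and breakage-distribution values; it is not the similarity solution of the paper.

```python
def grind_step(M, S, B, dt):
    """One explicit-Euler step of the discrete batch-grinding equation
        dM_i/dt = -S_i M_i + sum_{j < i} B[i][j] S_j M_j
    where M[i] is the mass fraction in size class i (coarsest first),
    S[i] the specific breakage rate of class i, and B[i][j] the fraction
    of broken class-j mass reporting to class i (columns sum to 1)."""
    n = len(M)
    dM = [-S[i] * M[i] + sum(B[i][j] * S[j] * M[j] for j in range(i))
          for i in range(n)]
    return [M[i] + dt * dM[i] for i in range(n)]

# Illustrative 3-class example: all mass starts in the coarsest class.
S = [1.0, 0.5, 0.0]                # finest class does not break further
B = [[0.0, 0.0, 0.0],
     [0.6, 0.0, 0.0],
     [0.4, 1.0, 0.0]]
M = [1.0, 0.0, 0.0]
for _ in range(1000):              # integrate to t = 10
    M = grind_step(M, S, B, dt=0.01)
print(M)
```

Because each column of B sums to one and the finest class has zero breakage rate, every step conserves total mass exactly, and mass drains monotonically into the finest class.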

A multifluid hydrodynamic approximation allowing for the relative motion along the magnetic field of the newly created ions and the original fluid is used to treat the ion-pickup process. Because of these processes, the ion tail of a comet may not be antisolar; the deviation from radial is anticipated to be largest for oxygen, owing to its ionization at the greatest distances. Other ions, created nearer the comet where the flow speed is lower, should have smaller transverse velocities.