Thoughts Concerning the Origin of Our Fractal Universe

Abstract

During the past few decades, it has become clear that the distribution, sizes, and masses of cosmic structures are best described as fractal rather than homogeneous. This means that an entirely different formalism is needed to replace the standard perturbation model of structure formation. Recently, we have been developing a model of cosmology that accounts for a large number of the observed properties of the universe. A key component of this model is that fractal structures that later regulated the creation of both matter and radiation came into existence during the initial Planck-era inflation. Initially, the vacuum was the only existence, and since time, distance, and energy were uncertain, its only property, the curvature (or energy), was most likely distributed randomly. Everything that happened after the Planck era can be described by the known laws of physics, so the remaining fundamental problem is to discover how such a random beginning could organize itself into the hierarchy of highly non-random self-similar structures on all length scales that are necessary to explain the existence of all cosmic structures. In this paper, we present a variation of the standard sandpile model that points to a solution. Incidental to our review of the distributions of cosmic structures, we discovered that the apparent transition from a fractal to a homogeneous distribution of structures at a distance of about 150 Mpc is a consequence of the finite size of the universe rather than a change in the underlying statistics of the distributions.

Share and Cite:

Botke, J. (2025) Thoughts Concerning the Origin of Our Fractal Universe. Journal of Modern Physics, 16, 167-197. doi: 10.4236/jmp.2025.161008.

1. Introduction

In this paper, we present a new model of the very beginnings of the universe which shows how fractal structure could have spontaneously evolved out of randomness. In isolation, the results of the paper won't have much meaning, so we will begin with a summary of our theory of cosmology [1]-[3], which will make the connection between fractal vacuum structures at the time of the initial Planck-era inflation and the later formation and evolution of all cosmic structures. Also, for the benefit of readers not familiar with fractals in general and self-organizing sandpile models in particular, we will summarize these topics in the following few sections.

According to our model, the universe began with a Planck-era inflation during which the only existence was the vacuum (no radiation). This epoch consisted of two phases. The first was an exponential expansion that lasted about 10^-41 s, or about 100 Planck times. During this phase, time and distance were uncertain and causality did not have its normal meaning. We imagine that there was a cause-and-effect relationship between events but that the speed of information transfer was effectively unlimited. This lack of causality is a critical factor in the structure formation process because it is the only means by which structures the size of superclusters could have been formed. This phase was followed by a transition phase during which the exponential expansion slowed to a power-law rate proportional to t^(1/2). By a time of about 10^-40 s, time and distances were a few thousand times larger than the corresponding Planck values. Normal causality was by then in effect with a finite speed of light, and the dynamics of spacetime were henceforth described by Einstein's equations. The vacuum was on average homogeneous and isotropic but it also manifested a small-amplitude imprint on all length scales that would later regulate the formation of all matter and radiation.

At the end of the transition, we move into the Einstein era. In our new view of cosmology, the vacuum curvature varies with time and acts as its own energy source. Combining these ideas with Einstein’s equations results in the following exact solution [1]:

$$a(t) = a_0\, e^{-c_1} \left( \frac{t}{t_0} \right)^{\gamma^*} e^{c_1 t / t_0}, \tag{1-1}$$

$$H(t) = \frac{\dot{a}(t)}{a(t)} = \frac{\gamma^*}{t} + \frac{c_1}{t_0}, \tag{1-2}$$

$$k(t) = \frac{1}{2}\, \gamma_h\, a(t)^2\, \kappa \left( \rho_{\mathrm{vac}} c^2(t) + p_{\mathrm{vac}}(t) \right), \tag{1-3}$$

$$\rho_{\mathrm{vac}} c^2(t) + p_{\mathrm{vac}}(t) = \frac{2 k_0}{\kappa\, a_0^2\, \gamma_h} \frac{t_0^2}{t^2}, \tag{1-4}$$

where a(t) is the scaling, H(t) is the Hubble parameter, k(t) is the curvature, and the last is the sum of the vacuum energy and pressure. The numerical values of the constants are k_0 = 1.41, γ* = 1/2, γ_h = 1/3, and c_1 = 0.53. Despite appearances, there are only 2 parameters in this model: c_1 is fixed by the value of the Hubble constant H_0 (= 73 km·s^-1·Mpc^-1, [4]) and γ* by limitations on the possible value of the vacuum energy at the time of nucleosynthesis. The other parameters, such as the magnitude of the curvature, are fixed by constraints within the model. With the model's two parameters fixed, all the predictions of this model become real predictions. Note that the energy sum is a fixed function of time, and as it happens, the predicted present-day energy density is within a factor of 3 of the value usually assigned to dark energy. At early times, the curvature, k(t), was large; k(1 s) = 2×10^17. Its present-day value, on the other hand, k_0 ≡ k(t_0) = 1.41, is small, but because of the acceleration of the scaling in Equation (1-1), it reached a minimum in the recent past (t_min = 4.11×10^17 s) and is now increasing.
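Combining Equations (1-1), (1-3), and (1-4) gives k(t) = k_0 (a(t)/a_0)^2 (t_0/t)^2, from which the location of the minimum follows directly. The short sketch below is our own check of the quoted numbers; the present age t_0 ≈ 4.35×10^17 s is an assumption (it is not stated in this section), and with it the minimum lands at t_0/(2 c_1) ≈ 4.1×10^17 s, consistent with the value quoted above.

```python
import numpy as np

# Constants quoted in the text
gamma_star = 0.5
c1 = 0.53
t0 = 4.35e17  # assumed present age in seconds (not stated in this section)

def k_over_k0(t):
    """k(t)/k0 = (a/a0)^2 (t0/t)^2 with a/a0 taken from Equation (1-1)."""
    x = t / t0
    return np.exp(-2 * c1) * x**(2 * gamma_star) * np.exp(2 * c1 * x) / x**2

# For gamma* = 1/2 the exponent of x is -1, so minimizing -ln(x) + 2*c1*x
# gives x_min = 1/(2*c1), i.e. t_min = t0/(2*c1)
t_min = t0 / (2 * c1)
print(f"t_min = {t_min:.2e} s")                 # ~4.10e17 s
print(f"k(t_min)/k0 = {k_over_k0(t_min):.3f}")  # slightly below 1, rising since
```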

This solution applies to the entirety of the post-Planck evolution of the universe and it is the largely homogeneous and isotropic vacuum that one can associate with a cosmological principle. The matter content of the universe, on the other hand, has a fractal distribution that is neither homogeneous nor isotropic on any length scale.

Starting from some arbitrary point in time, we can work backward to find the time at which this Einstein solution joins smoothly onto the Planck-era transition phase. This fixes the time of the end of the transition to be about 10^-40 s. Moving forward to the present day, one can see that an exponential expansion is unavoidable and that it is independent of both the energy density and the pressure of the vacuum. It also has nothing to do with a cosmological constant (dark energy) or with the matter content of the universe.

Jumping back again and then moving forward from the end of the transition, the energy density rapidly decreased in accordance with Equation (1-4), and at a time of about 10^-5 s, the energy density of the vacuum equaled the mass-energy of a neutron. At that point, according to this model, a fraction of about 10^-5 of the vacuum energy underwent a conversion into neutron-antineutron pairs with a very small (10^-8) excess of neutrons [5]. Immediately afterward, during an event that lasted about 10^-12 s, a combination of pair annihilations, baryon charge exchange reactions, and various other reactions created both the radiation that became the cosmic microwave background (CMB) and the total matter content of the universe.

This model of structure formation can account for any sort of structure including large voids. Two important characteristics of the structures being created were that they came into existence with all or very nearly all their present-day matter content and with initial sizes many times larger than their present-day sizes, a fact that is completely at odds with accretion models in which structure masses and sizes grew over long periods.

Because the CMB was the result of the same process that created the baryonic matter, the initial anisotropic distribution of the CMB was the same as the initial distribution of the baryonic matter, which means that the initial magnitudes of the CMB anisotropies were much larger than the present-day values. In [6], we showed that the subsequent fading was a result of photon diffusion. Not only did diffusion reduce the amplitude of anisotropies at their locations of origin, but it also resulted in the filling in of the CMB in void regions where little or no matter creation occurred.

Following nucleosynthesis, nothing much happened until a time of about 10^16 s. The reason is that the rapid expansion of the universe before that time prevented any gravitational interaction on distance scales even as small as a galaxy [7]. Eventually, however, all the cosmic structures of the size of galaxy clusters or smaller began to undergo a gravitational collapse, and concurrently all galaxies developed supermassive black holes with accretion disks, the radiation from which stabilized not only the galaxies but also galaxy clusters [8]. All galaxies and larger structures started with their final masses at the time of nucleosynthesis but they only reached their final sizes when the collapsing stabilized. This result explains the recent finding by the James Webb Space Telescope of fully formed massive galaxies at redshifts too large to be explained by the standard accretion model of structure formation.

In recent decades, studies of the statistical distribution of galaxies in the universe have found that the distributions are fractal rather than homogeneous [9]-[11]. Independently, we reached the same conclusion through consideration of the (box) dimension of the entire collection of known cosmic structures. The model just described can account for such structure provided that the initial Planck-era imprint was fractal. The critical question is then: how did the vacuum, in which the initial distribution of curvature fluctuations was presumably totally random, organize itself into a self-similar fractal distribution with correlations at all length scales?

In the next section, we will briefly discuss the formalism of fractals and the observations that support the idea that the universe is fractal. Following that, in Section 3, we will review the so-called sandpile model [12]. The importance of this model is that it spontaneously produces fractal distributions starting from initial Poisson distributions. The fact is, however, that the standard sandpile solutions, while fractal, do not resemble the universe. We will show in Section 5, however, that a modification of the original sandpile idea does lead to results that do look like the universe we observe. This model's distribution of matter is in the form of a web with many small structures, "galaxies", and a few moderate structures, "galaxy clusters", connected by filaments. Another result is that while the "galaxies" and "galaxy clusters" are the result of "gravitational" evolution, the filaments are not. We also find that the various fractal dimensions of the model distributions are generally in agreement with the dimensions of actual cosmic structures.

Finally, although it is outside the main theme of this paper, we show that the apparent transition from a fractal distribution of matter to a homogeneous one at a distance of about 150 Mpc [10] [11] is a consequence of the finite size of the universe rather than a fundamental change in the statistics of the matter distribution in the universe.

2. Evidence of a Fractal Universe

In this section, we will review the data that supports the conclusion that the underlying structure of the universe is fractal. Some exposure to the theory of fractal geometry and its application to cosmic structures is necessary and two references we found to be useful are the books by Falconer [13] for the formal theory of fractal geometry and the one by Gabrielli, et al. [9] for the application of fractal geometry to the problem of cosmic structures. We won’t make any attempt to develop the theory here and instead will just state a few of the essential differences between the structure of the universe as viewed from the traditional and fractal viewpoints. The two viewpoints are not compatible. It is not possible, for example, to apply some sort of transformation to show that the differences are simply a matter of one’s perspective.

Both viewpoints view the initial evolution of the matter content of the universe as a stochastic process, so theories of galaxy distributions are generally framed in terms of probability densities, averages, variances, and so on. Perhaps the biggest difference between the two viewpoints concerns the average density of galaxies on larger scales. In the traditional view, the universe is assumed to be homogeneous with the implication that there is a uniform background with a well-defined average density and that fluctuations are measured relative to this background. The primary descriptors of such a field are the auto-correlation functions with the two-point correlation function being the most significant. In such a universe, one can meaningfully talk about an ensemble average ⟨ρ(r_1)ρ(r_2)⟩ where the arguments are any two points in the space. In a fractal universe, on the other hand, this definition is not meaningful because in such a universe, the average density is zero. The consequence is that for any randomly selected volume, that correlation function is almost certainly zero.

Since we want descriptors that are valid no matter what the distribution happens to be, one instead uses conditional correlation functions in which the origin of the system is occupied by a member of the set of objects. Thus, we define

$$\langle \rho(r) \rangle_p = \frac{\langle \rho(0)\, \rho(r) \rangle}{\rho_0}$$

where ⟨ρ(r)⟩_p is the average density of objects in a shell located at a radius r from the object at the origin of coordinates. The reason this is meaningful for fractal distributions is that while most of the universe is empty, requiring the origin to be occupied picks out those small regions of the universe that are occupied.

Analysis of the matter distributions of the universe through the use of conditional correlation functions, while necessary, is still not sufficient because one must also avoid making the assumption that there exists an average density. It is common practice to define a conditional correlation function ξ(r) by

$$\langle \rho(r) \rangle_p = \rho_0 \left( 1 + \xi(r) \right) \tag{2-1}$$

where the function ξ(r) is a measure of the correlations between the object at the origin and those in a shell at a distance r from the origin. Without correlations, the conditional density is a constant equal to the average density. This, however, is only meaningful if an average density exists, so the expression is not a valid descriptor in general; in particular, it is meaningless if the matter distribution of the universe is fractal. Another point is that, by integrating over all space, it is clear that the correlations must vanish at large r if there is to be a well-defined average density.

Fractal sets are characterized by a fractal dimension instead of an average density. There are many definitions of fractal dimension but for most purposes, the one known as box dimension is the most intuitive and the easiest to use [13]. A principal characteristic of fractal distributions is that they are self-similar which means that they have power-law behavior. For example, if we sort a collection of objects by size, the relationship between the count of objects of a given size and the size will be given by

$$n(s) = A_1 s^{-D_s} \tag{2-2}$$

where D_s is the corresponding box dimension. If we scale the size by some factor, the change is absorbed into the coefficient but the fractal dimension remains the same. An important consequence is that it is possible to meaningfully compare quite different physical systems because with fractal systems there is no such thing as a characteristic size or mass.

To obtain the formal definition of the box dimension, we take logarithms of both sides of Equation (2-2) and solve for the dimension in the limit that, in this case, the size approaches zero. Thus,

$$D_s = -\lim_{s \to 0} \frac{\log n(s)}{\log s}. \tag{2-3}$$

The limit is equivalent to evaluating Equation (2-2) at a single point. If we also know the mass of each object, we can substitute for the size to obtain

$$n(m) = A_2 m^{-D_m} \tag{2-4}$$

which has the same form but generally a different fractal dimension. Finally, we can combine the two to obtain

$$m(s) = A_3 s^{D_s / D_m} \tag{2-5}$$

which is again a power-law relationship. Another condition of fractals in connection with cosmology is that the fractal dimensions are always less than the dimension of the physical space, d = 3.
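In practice, the dimensions in Equations (2-2) through (2-5) are extracted from finite data as the slope of a least-squares fit on log-log axes. The following minimal sketch illustrates the recipe on synthetic data; it is our own illustration, and none of the numbers come from the paper.

```python
import numpy as np

# Synthetic (size, count) pairs following n(s) = A s^(-D) with scatter
rng = np.random.default_rng(0)
D_true = 2.0
sizes = np.logspace(0, 3, 12)
counts = 1.0e6 * sizes**(-D_true) * rng.lognormal(0.0, 0.1, sizes.size)

# Per Equations (2-2)/(2-3), the log-log slope estimates -D_s
slope, intercept = np.polyfit(np.log(sizes), np.log(counts), 1)
print(f"estimated D_s = {-slope:.2f}")  # ~2.0
```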

From Equation (2-5), we see that the fractal dimension is a measure of how fast the masses of the objects grow with an increase in their sizes and since the dimension must be less than d, we see that the mass of a fractal object grows slower than the volume occupied by the object. We can now compute the conditional density with the result [9] that

$$\langle \rho(r) \rangle_p = \frac{D B}{4 \pi} r^{-\gamma} \tag{2-6}$$

where, as a matter of convention, the exponent is commonly written in terms of the co-dimension, γ = (3 − D) > 0, and B is a constant. Finally, because the observed conditional density tends to vary rapidly as a result of fluctuations, it is common to consider the integrated conditional density (ICD) given by

$$\Gamma^*(r) = \frac{3}{4 \pi r^3} \int \langle \rho(r') \rangle_p \, d^3 r' \tag{2-7}$$

which has the same fractal dimension as the conditional density.

We can now make comparisons between the two types of universes. The homogeneous case is characterized by a well-defined average density and with fluctuations that vanish in the limit of large r, i.e. ξ(r) → 0 as r → ∞. In the fractal case, the opposite is true. From Equation (2-6), we see that the density vanishes as r → ∞; the average density of a fractal structure is zero. Also, see [9], with fractals, the fluctuations increase in magnitude with distance instead of vanishing.

With the formalism in hand, we will now consider results obtained from cosmic observations. In Figure 1, we show the results of the analysis of data taken from the CfA2 catalog (blue line) [11], the early results from the SDSS catalog (red line) [10], and the results obtained from a ΛCDM N-body simulation (black line) [14].

Figure 1. Integrated conditional densities taken from the CfA2 catalog (blue line) and the early SDSS catalog (red line) (adapted from Figure 2 of [10]) together with the results obtained from a ΛCDM simulation [14] (adapted from Figure 1 of [11]). The dashed line is the power law r^-γ with γ = 1, which implies D = 2. The two horizontal red lines at the bottom of the graph indicate the size range of galaxy clusters and superclusters. Their significance is explained in the text.

Looking at these curves, we see that the observed distributions reflect a fractal distribution with a box dimension of D ≈ 2 out to a distance of r ≈ 40 Mpc. For larger distances, the curve flattens, which seems to indicate a transition from a fractal to a homogeneous universe characterized by a non-zero average density. We will show later, however, that there is a natural explanation for the flattening that does not involve such a transition.

The other significant result shown in the figure is that the ΛCDM simulation result does not look anything like the actual data. Taken together, these results are strong evidence that the fractal model of the universe is correct and that the standard ΛCDM model universe is wrong.

It is common practice to study structure distributions by applying Equation (2-1). In such work, an average density is estimated from the observation sample but, while such an estimate can always be made, in a fractal universe, such efforts are meaningless because the assumed average density, ρ_0, does not exist. And without an average density, there is no basis for a perturbation model of structure formation.

Up to this point, we have been considering the distribution of galaxies as a function of their distance from an observer. In other words, we have focused on where they are rather than on what they are. We have ignored the fact, for example, that some of the galaxies will be contained in galaxy clusters. We will now approach the problem from the opposite viewpoint, namely, we will consider what the objects are without regard to where they are.

To proceed, we need to have some idea of the masses, sizes, and counts of the various cosmic structures. The sizes of the structures are reasonably well known but the counts and masses are, in many cases, hardly better than guesses. We obtained estimates from numerous, often inconsistent, sources on the internet so these numbers are very rough. The total number of galaxies, which we take to include both large and dwarf galaxies, is thought to be in the range 10^11 - 2.5×10^12. We then assume that the ratio of dwarf galaxies to large galaxies is in the range of 50 - 500. Combining, we get estimates of the range of counts for both large and dwarf galaxies. Spirals make up somewhere between 60% and 75% of the large galaxies. Large ellipticals occur primarily inside galaxy clusters so we did not include them as independent structures since we do include the galaxy clusters. For the globular clusters, we estimate the range to be 200 - 800 per spiral galaxy. Finally, the total number of superclusters is thought to be in the range 10^6 - 10^7 with an average of 10 galaxy clusters per supercluster so knowledge of the masses of the galaxy clusters gives us an estimate of the masses of the superclusters.

In Figure 2, we show a graph of our estimates of the count of various structures versus their present-day sizes. In what follows, we scale the sizes and masses by those of an average supercluster (s_sc ≈ 150 Mpc, m_sc ≈ 1.5×10^16 M_⊙). Remember that doing so does not affect the fractal dimensions.

We find that all the structures lie on a straight line indicating a fractal origin.

Figure 2. Estimated structure counts versus their present-day sizes. The term extreme structures refers to the large warm and cold regions in the full sky CMB anisotropy map with sizes on the order of 45 degrees. The blue line is explained in the text.

Notice that shifting any of the estimates by a factor of 10 or even more will have little effect on the slope of the red line. With the change of scale, Equation (2-2) for the red line becomes

$$n(s) = 5.7 \times 10^6 \left( \frac{s}{s_{sc}} \right)^{-D_s} \tag{2-8}$$

with D_s = 0.9. The blue line gives the count of structures of size s that would fill all space:

$$n_{\mathrm{fill}}(s) = \left( \frac{s}{a_0} \right)^{-3}. \tag{2-9}$$

This curve places an important limit on the distribution of structures as we will discuss shortly.

Because our interest is in the fractal nature of the vacuum imprint that regulated the formation of the structures, we need to be concerned with the structure size distribution at the time of nucleosynthesis rather than with the present-day distribution. Aside from the long dimensions of superclusters, which are too great to undergo any gravitationally induced change with time, all structures including the radii of superclusters underwent gravitational compaction during their evolution. By following their evolution [7], we determined that galaxy clusters, galaxies, and dwarf galaxies had to have been on average initially 7, 55, and 480 times larger at the time of nucleosynthesis than they are today (in present-day terms). After making the adjustments, we obtain Figure 3 (we have removed the stars and globular clusters).

Figure 3. Estimated structure counts versus their sizes at the time of nucleosynthesis.

In this case, the box dimension is about D_s = 1.6. The structures again lie on a straight line indicating a fractal relationship. Using estimates of the masses of the structures, we immediately obtain Figure 4 which shows the counts as a function of the masses. This time the box dimension is D_m ≈ 0.56. Finally, by combining the two results, we obtain the mass versus size result which we show in Figure 5.

Figure 4. Structure counts versus their masses.

Figure 5. Structure masses versus their sizes. The box dimension is D_s/D_m ≈ 2.86. For comparison, we show the calculated result with a dimension of D = 2.

We now wish to compare with the earlier results. As we already noted, the curves of Figure 1 look at where things are without regard to what they are whereas the later figures are concerned with what they are without regard to where they are. In both cases, the conclusion is that the structure of the universe is fractal. The latter study covers a range of dimensions about 2 orders of magnitude larger than the first and, even though they are looking at different aspects of structure, the box dimension found for the mass of a distribution of objects as a function of the size of the containing volume (D = 2) is not wildly different from the dimension found for the mass of particular structure types as a function of their size (D ≈ 2.86). These two thus complement each other, which strengthens the conclusion that both cosmic structures and their distributions are fractal.

Returning now to Figure 1 and the flattening of the conditional density at a distance of about 100 Mpc, those in the standard model camp would take this as evidence that on the largest scales, the universe is homogeneous. We, on the other hand, assert the contrary. Referring to Figure 2 and Figure 3, notice that superclusters lie at the intersection of the red (fractal) and blue (space-filling) lines which means that they lie at a transition between structures small enough that they can exhibit a fractal distribution and the larger structures that cannot. The counts or masses of structures or collections of structures larger than superclusters must lie on the blue line instead of the red line, i.e. their "fractal" dimension becomes 3 instead of 2 or 2.86. One can now appreciate the significance of the galaxy cluster and supercluster size range indicators shown in Figure 1. The galaxy clusters are small enough that they are well within the range of sizes that can fully express their fractal dimension. Superclusters, on the other hand, have sizes that coincide with the space-filling limit where we see the flattening of the distributions. The slope of the fractal curves is the co-dimension, γ = 3 − D. At the small end of the supercluster size range, the slope is γ = 1 so D = 2. At the upper end of the size range, the slope is γ = 0 so D = 3. The flattening of the distribution to what appears to be a homogeneous universe is simply an artifact of the finite size of the universe rather than of a fundamental change in the statistical properties of the universe. There simply isn't enough room to accommodate the full count of any structures with sizes equal to or greater than the size of superclusters.

The purpose of this section was to show that the distribution of matter both in terms of how that matter is distributed in space and into what sort of structures it has formed is far more likely to be fractal than homogeneous. In the next section, we will discuss the sandpile model which was the first model developed that exhibits spontaneous development of fractal structure.

3. Sandpile Models

In this section, we will review the sandpile model and present a few examples that exhibit spontaneous development of fractal distributions from decidedly non-fractal starting points. Bak, Tang, and Wiesenfeld, [12] (hereafter BTW) introduced the idea. Since then, numerous others have extended the idea in various ways but we will stick fairly close to the original model.

The basic idea is that one starts with a grid of "sandpiles" each containing either a predetermined or random initial number of grains with at least one of the sandpiles having a grain count that exceeds a predefined spill level. In the case of actual sandpiles, the spill condition could be specified in terms of the slope of the sandpile, and adding grains to the pile will eventually cause a slide to occur. When a spill occurs, a common rule is that it transfers equal numbers of grains to each of its nearest neighbors and retains any remainder. For example, in 2 dimensions each cell has 4 nearest neighbors so if a spilling cell has 23 grains, each of the neighbors would get 5 grains and the spilling cell would retain 3. The standard spill model does not consider the pre-spill values of the receiving cells so a spilling cell can transfer grains to a neighboring cell with a larger grain count than its own. The latter is not realistic for actual sandpiles but does make sense for cells of a vacuum emitting gravitational waves. As a general rule, for a system to reach equilibrium, either the initial number of grains must total less than 4 times the number of cells or there must be some mechanism by which the system can lose grains. In the original versions of the model, spilling cells along an edge or at a corner of the grid transfer some of their grains to phantom cells lying outside the grid. Using the previous example, if the spilling cell were on an edge, 3 cells would each receive 5 grains, the spilling cell would retain 3 grains, and the remaining 5 grains would be lost.

To start a simulation run, one first establishes some initial distribution and then enters a loop, searching the grid for cells exceeding the spill value and causing them to spill. Often the action of a spill will stop at the nearest neighbor but in other cases, the wave will travel all the way across the grid. One keeps repeating the cycle of finding and spilling cells until all cells have a value less than the spill cutoff, at which point the evolution stops.
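To make the rules of the preceding two paragraphs concrete, here is a minimal sketch in Python of the spill rule and the relaxation loop. It is our own illustrative rendering, not the paper's code; the grid is a NumPy integer array and the boundary rule is the original BTW one in which off-grid shares are lost.

```python
import numpy as np

def spill(grid, r, c):
    """Topple cell (r, c): each of the 4 nearest neighbors receives
    value // 4 grains and the cell keeps the remainder, so a cell holding
    23 grains sends 5 to each neighbor and retains 3. Shares destined
    for cells outside the grid are simply lost."""
    share = grid[r, c] // 4
    grid[r, c] -= 4 * share
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < grid.shape[0] and 0 <= cc < grid.shape[1]:
            grid[rr, cc] += share

def relax(grid, spill_level=4):
    """Spill cells at or above spill_level until none remain; the final
    state is independent of the search order, as noted below."""
    while True:
        over = np.argwhere(grid >= spill_level)
        if over.size == 0:
            return grid
        for r, c in over:
            if grid[r, c] >= spill_level:  # value may have changed since the scan
                spill(grid, r, c)

rng = np.random.default_rng(1)
grid = rng.integers(10, 16, size=(61, 61))  # random initial values 10..15
relax(grid)                                 # small grid keeps the pure-Python loop quick
```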

Various search rules can be imagined but as shown by Dhar [15], an interesting fact is that for the class of sandpiles in which the spill criterion depends on just the cell value, for any given initial state, the final state of the system is entirely independent of the order in which the cells are spilled. In Figure 6, we show a variation of the original BTW model in which all cells of, in this case, a 161 × 161 grid were empty except for the center cell which was assigned a value of 2^16 = 65536. The left-hand frame shows the state at an intermediate point in the evolution and the right-hand frame shows the final state. The cell values are indicated by their color; black = 0, blue = 1, green = 2, yellow = 3, and red ≥ 4.

Figure 6. Example of a sandpile simulation in which the cells are all initially empty except for the center cell which was assigned a value equal to about 2.5 times the total number of cells.

At each simulation step, we first chose cells at random until either one was found that could spill or the number of tries exceeded 5. If no spillable cell was found, we then searched systematically from the center working outwards. The result is the scattering of red cells in the left-hand frame that are the result of the random searches and the circle of mostly stable cells centered at the origin. Note that the systematic sweep does not prevent red cells from reoccurring inside the circle. These are the result of a spill track progressing back towards the center from the outer limit of the systematic sweep. Because of this, we have to continually sweep outward from the center to be sure of spilling all red cells. Although it will be important later, there is nothing special about the sweeping out from the center. We could just as well have done a systematic search by row and column. The intermediate stages would have been different but the final result would have been the same. We can see structure developing in the outer regions at the intermediate stage but these are noticeably different from the structures of the final result. One can now appreciate how amazing it is that the final pattern is exactly the same no matter what rule, random or otherwise, is used to find the intermediate spill cells.

It is also interesting to look at the result with a slightly different initial center value. In Figure 7, we show the final result for a center value of 2^16 + 4 = 65540. We see that the pattern is the same except for a small difference at the center of the grid.

Figure 7. Same as in Figure 6 except for a small change in the center cell initial value.

The model just presented is a variation of the original BTW sandpile model which has no direct bearing on cosmology. Its purpose is to show how a simple rule can turn a single non-empty cell into a complex fractal pattern.

We will now consider the original BTW model which consists of three stages. During the first stage, the sandpile evolution follows the same rules as in the previous example but with a different starting point. Instead of starting with a single nonempty cell at the center of the grid, every cell in the grid is assigned a randomly chosen value lying between some minimum and maximum limits with the minimum much larger than the spill level (= 4). Since the grain count is much larger than 4 times the cell count, the model must lose grains to reach equilibrium. In this case, it does so by losing grains at the edges of the grid. The simulation is started and allowed to run to completion. A typical final state resulting from a grid of initial random cell values ranging from 10 to 15 is shown in Figure 8. It turns out that the general appearance of the final state is largely independent of the initial value range limits. Setting the range from 10 to 75, for example, yields essentially the same result.

Figure 8. Stable result of the evolution of a system in which each cell was initialized with a random value in the range 10 to 15.

Once the simulation has stopped, we begin the second stage in which we expose the fractal nature of the structures hidden in this pattern. Just by looking at it, it's not obvious that there is any structure in the grid of Figure 8 but in fact, structures exist with masses (cell counts) ranging from a minimum of 3 (which can occur only at the corners of the grid) up to a maximum which in this case, is 18,500. The BTW idea is that each minimally stable cell (the yellow ones) is a member of one or more structures that can be probed by tickling that cell. Adding 1 grain to each critical cell in turn sets off a wave that might only travel to the nearest neighbor but can, in other cases, travel across the grid. The sizes and masses of the structures are thus discovered dynamically.
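A sketch of this probe, again our own illustration rather than the paper's code: one grain is added to a copy of the stable grid at the chosen cell, the avalanche is relaxed, and every cell that spills is recorded as a member of that cell's structure.

```python
def probe_structure(grid, r, c, spill_level=4):
    """Tickle the minimally stable cell (r, c) with one grain and record
    every cell that spills; the set of spilled cells is the structure
    associated with (r, c) and its mass is the cell count."""
    work = grid.copy()          # probe a copy so the stable grid is reusable
    work[r, c] += 1
    members = set()
    frontier = [(r, c)]
    while frontier:
        rr, cc = frontier.pop()
        if work[rr, cc] < spill_level:
            continue
        members.add((rr, cc))
        share = work[rr, cc] // 4
        work[rr, cc] -= 4 * share
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = rr + dr, cc + dc
            if 0 <= nr < work.shape[0] and 0 <= nc < work.shape[1]:
                work[nr, nc] += share
                frontier.append((nr, nc))  # receiving cell may now spill
    return members
```

Running this probe over every yellow cell yields the raw catalog. As discussed below, the same structure is typically reached from several different critical cells, which is the duplicate-count issue addressed in the Appendix.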

In Figure 9, we show all the structures with mass = 5 on the left, the single largest structure in the middle, and the structures with masses ranging from 100 to 500 on the right. The total count of the smaller structures is 1241. From the figure, it is apparent that there is considerable overlapping of the structures but this is exactly what one expects with a fractal distribution since the probability of finding a second galaxy near a randomly chosen galaxy is highest near the chosen galaxy. Notice also that there is a considerable amount of space that does not contain any of these structures. At the other extreme, the largest structure nearly fills the grid. Its mass is 18,500 and its size is 212. Finally, on the right, the total count of intermediate structures is 119 so again we find a considerable amount of overlap along with a significant amount of empty space.

Figure 9. The grid of Figure 8 showing all the structures with a mass = 5 on the left, of which there are a total of 1241, the single largest structure in the middle, which has a mass of 18,500, and intermediate structures with masses ranging from 100 to 500 on the right. Their count is 119.

This example clearly illustrates the hierarchical structure of the results with the larger structures containing somewhat smaller structures which in turn contain still smaller structures, much as superclusters contain galaxy clusters which contain galaxies and so on.

We now want to discuss an important point that is not mentioned in BTW but must be considered by anyone interested in developing sandpile simulations. In stage 3, we will perform a fractal analysis of the structures which will depend on the counts of the structures. The issue is that the raw structure counts found during the cell-by-cell search will generally be in error because a given structure can result from the spilling of multiple member cells, and this overcounting occurs for essentially all the structures with masses larger than 5. For example, the single largest structure of Figure 9 results from the spilling of any of 25 different member cells (out of the total member count of 18,500). This means that if we consider the result of spilling each yellow cell to be unique, we are overcounting by some factor. To keep track of the structures in an efficient manner, we developed a catalog system based on hash numbers that is described in the Appendix. Using this system, we are able to efficiently identify and remove duplicate entries.

The third stage of the analysis involves extracting the fractal parameters from the catalog of structures. While the sandpile simulations we have been discussing are not directly related to the cosmology problem, their analysis allows us to set up the machinery we will need.

4. Fractal Analysis of Sandpiles

Our goal is to extract the fractal parameters that characterize the sandpile distributions. As noted earlier, there are two distinct viewpoints. In one, we concern ourselves with the masses and sizes of the structures without regard to where the structures are located. The other approach is concerned with the locations of the smaller structures without regard to what they might be a part of, e.g. one measures the conditional locations of galaxies without regard to whether or not a particular galaxy happens to be a member of a galaxy cluster.

We will first deal with the duplicate count issue to get that out of the way. Using the same simulation run as for Figure 8 and Figure 9, we obtain the count versus mass results shown in Figure 10. In the upper panel, the duplicates are included and in the lower, they are not.

Figure 10. Count vs mass. The upper panel includes duplicates and the lower does not. Note that the curve in the lower panel does not terminate at a mass of about 60 but instead extends out to a mass of 18,500 with count = 1.

One can see that duplicates have an increasingly larger effect on the counts with increasing mass. One consequence in particular is that with the inclusion of the duplicates, one might suppose that the sandpile distribution approaches a homogeneous distribution in parallel with the results shown in Figure 1 which, of course, is not the case. Henceforth, all the results presented will have had duplicates removed.

To compare with the counts of actual structures of different sizes, we need to first group the raw data into bins corresponding to various structure types. Superclusters, for example, are not all of one size but instead occur with a range of sizes. If we went to the extreme limit of listing all structures with their exact sizes, the count of each type would always be 1 so the graph shown in the lower panel of Figure 10 would be just a flat line with each entry having a count of 1. What we want instead is to lump all superclusters together, all galaxy clusters together, and so on. To accomplish this, we split the range of sizes or masses into a series of bins with each bin corresponding to a particular structure type. The count of each bin is then the total count of all the simulation structures that have sizes or masses that fall within the limits of that bin. To get an estimate of the bin size we should use, we looked at the size range of superclusters given in Figure 2 and divided the average size by this spread. The result indicates that we should allocate about 26 bins which we rounded up to 30. We ran a few cases to check the sensitivity of the results to the bin count and it does make a difference but not by enough to affect our general conclusions. In the mass versus distance case, we also use a set of bins but in that case, we are not concerned about grouping structures of a given type so the number of bins was determined by finding the minimal number that yielded consistent numerical results.
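The binning step translates directly into code. A minimal sketch, assuming logarithmically spaced bins (a natural choice for power-law data; the paper does not state the spacing):

```python
import numpy as np

def bin_counts(values, n_bins=30):
    """Group raw structure masses (or sizes) into logarithmic bins,
    returning geometric bin centers and the count falling in each bin."""
    edges = np.logspace(np.log10(values.min()),
                        np.log10(values.max()), n_bins + 1)
    counts, _ = np.histogram(values, bins=edges)
    centers = np.sqrt(edges[:-1] * edges[1:])  # geometric mean of bin edges
    return centers, counts
```

The fractal dimensions quoted below would then follow from a log-log least-squares fit over the bin centers and counts, as in the sketch following Equation (2-5).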

Each simulation run results in a list of structures indexed by a hash number. Each entry contains the structure’s mass, size, and one or more lists of its member cells with the number of lists greater than one giving the duplication count. We next distribute the structures into the set of bins just discussed. Finally, we perform a least-squares fit in the various cases to determine the corresponding fractal dimensions. In what follows, we show the results for an ensemble average of 5 individual runs each of which had an initial random cell value range of 10 to 15.

In Figure 11, we show the ensemble average (red) together with the least-squares fractal curve (green) for the counts versus mass. We also show the ensemble-averaged data, of which the data of the lower panel of Figure 10 is a part, along with the fractal curve obtained by BTW (dashed black) which has a dimension D_BTW = 0.98. The latter more closely matches the raw data than the binned data. With the types grouped into bins, the fractal dimension is D_m = 0.42.

Figure 11. Graph of count versus mass. Calculated fractal curve (green) with dimension D_m = 0.42 together with the ensemble average of the bins (red). Also shown is the ensemble average of the raw data together with the BTW result (black) which has a fractal dimension of D_BTW = 0.98.

We will now just run through the remaining cases. In Figure 12, we show count versus size which has a fractal dimension D_s = 0.86.

Figure 12. Count versus size. The fractal dimension is D_s = 0.86.

The mass versus size result is shown in Figure 13. In this case, the fractal dimension is D_ms = 1.80. The ratio D_s/D_m of Equation (2-5) has a value of 2.0, which is a bit larger than the directly fitted value. The difference is because we are considering an ensemble average instead of individual runs.

Figure 13. Mass versus size. The fractal dimension is D_ms = 1.80.

We next turn to the alternate viewpoint in which we consider the location of structures rather than their individual characteristics. We limit the choice of structures because we don't want to include the composite structures explicitly. The structures with mass = 5 account for almost 1/2 of the total number and if we include all structures with masses ≤ 10, the count is almost 70% of the total number. We next need to set a limit on the range of allowed structures to avoid edge effects. Since the ICD involves integrating over all the other structures from each subject structure, we limit the distance to 40 since our chosen grid half-size is 80. The procedure is to pick every structure (from our restricted mass list) lying within a distance of 40 from the origin, integrate out to a distance of again 40 from each such structure, and then average the results. The integral itself becomes a sum over a series of concentric ring-shaped bins with each target structure being assigned to a ring based on its distance from the subject structure. The number of bins was set by increasing the count until the change did not have a significant effect on the results. The ensemble average result is shown in Figure 14. The slope of the fractal curve is γ = 0.85 which in 2 dimensions implies a fractal dimension of D = 2 − γ = 1.15. (In 3 dimensions, the result would be D = 3 − γ = 2.15.)

Figure 14. Mass versus distance. The ensemble-average ICD with a co-dimension of γ = 0.85.
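The ring-bin procedure just described can be sketched as follows, in our own conventions: structure positions as an N × 2 array with a parallel array of masses. The r_max = 40 cutoff and the averaging over subject structures follow the text, while the ring count and the details of the mass weighting are our assumptions.

```python
import numpy as np

def ring_icd(points, masses, r_max=40.0, n_rings=20):
    """2D ICD estimate: for each structure within r_max of the origin,
    histogram the mass of all other structures into concentric rings out
    to r_max, divide the cumulative mass by the enclosed area (the 2D
    analog of Equation (2-7)), and average over the chosen centers."""
    edges = np.linspace(0.0, r_max, n_rings + 1)
    enclosed_area = np.pi * edges[1:] ** 2
    central = np.hypot(points[:, 0], points[:, 1]) <= r_max
    icd = np.zeros(n_rings)
    for p in points[central]:
        d = np.hypot(*(points - p).T)
        keep = d > 0                    # exclude the subject structure itself
        ring_mass, _ = np.histogram(d[keep], bins=edges, weights=masses[keep])
        icd += np.cumsum(ring_mass) / enclosed_area
    return edges[1:], icd / central.sum()
```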

Comparing these results with the results of Section 2, we see similarities in the curves and that the actual fractal dimensions are different although not by huge amounts. The real problem with this version of the sandpile model is that it generates a hierarchy of blobs within blobs whereas the actual universe has a web distribution. It does, however, exhibit the all-important characteristic of creating a fractal reality out of a Poisson beginning.

In the next section, we will introduce a modified version of this model that does result in stable solutions that resemble the actual universe. Before doing that, however, we will present a result that is a transition step from the existing model to the new one. The reason for doing this is to show that the changes we now list do not affect the results. First, we initialize the evolution from more than one location. In the example we will show, we start with 4 centers. The next change is to eliminate the random portion of the next-spill-cell search. The third change is to wrap around to the opposite border instead of passing grains to phantom outside cells, which means that no grains are lost. This prevents the simulation from reaching a stable state so we eventually have to terminate the simulation manually. The final change is to define the cell values in terms of floating-point numbers instead of integer values. The spill rule remains the same with the spill value still an integer. In Figure 15, we show the simulation not long after it began and then later when the expanding solution has nearly filled the grid.

Figure 15. Modified sandpile model with 4 centers and systematic spill cell search only.

The red cells are those with values equal to or greater than the spill value of 4. Those outside the circular regions are background cells that have not yet spilled. The dark red cells at the boundary of the circular regions are those with values greater than the maximum initial value which in this case is 15. The point here is to establish that the model changes just introduced do not change the outcome in any significant way. Aside from the fact that the yellow, etc. cells don’t yet fill the grid, the result looks the same as the distribution of Figure 8.

5. Sandpiles in an Expanding Universe

We now finally get to cosmology. Our goal is to explain the origin of the fractal imprint that accounts for our fractal universe. Instead of a grid of sandpiles, we think in terms of a vacuum containing small cells of indeterminate size with random values of energy density expressed in terms of its curvature. Instead of sandpiles spilling grains, we imagine cells with large energy density emitting gravitational radiation which travels outward causing other cells to emit additional gravitational radiation. Some of this additional energy would augment the initial wave and some would initiate waves that travel back toward the original cell.

The universe was extremely simple at its beginning. The sandpile model with cells initialized with random values of energy provides a reasonable proxy for the vacuum. The only other property experienced by the vacuum during the Planck era was its exponential expansion and that is something that is not part of the sandpile model. To incorporate the expansion, we assume that its effect is to reduce the vacuum energy density. To add this effect to the simulation, we periodically multiply all the cell energies by a reduction factor slightly smaller than 1 as the simulation proceeds. This mechanism provides the “grain” losses needed to bring the simulation to a stop but far more importantly, it radically changes the evolution. It turns out that the results are very sensitive to the details of the spill process but we will postpone that discussion until after we have reviewed some results. Aside from the expansion, the parameters of the model are the same as those of Figure 15.

A simulation result including the expansion is shown in Figure 16. The result looks nothing like the result shown in Figure 15, but instead looks like the actual universe.

Figure 16. Sandpile simulation including expansion.

There is a web structure, which we can associate with superclusters, surrounding large voids and there are a few extra-large structures that we might associate with the extreme structures seen in the CMB anisotropy spectrum.

Each of the yellow cells is associated with a structure that results from the spilling of that cell just as was the case in the previous section. In Figure 17, we show the smaller structures on the left and the larger ones on the right. We see that, while filaments consist of strings of evolutionary (spilling) structures, they themselves are not a consequence of the spill process which is again reminiscent of superclusters.

Figure 17. Smaller spill structures on the left and larger structures on the right.

In Figure 18 and Figure 19, we show the intermediate states of an example simulation. The state of the system after 5 minutes of execution time is shown on the left in Figure 18. This is essentially the same as that of Figure 15. The cells with the highest energies are colored dark red and these mark the boundaries of the expanding waves. The maximum cell value at this stage was 144 which we compare with the largest initial cell value of 15. The state of the system after 10 minutes is shown on the right. By this time, the expansion has begun to drop some of the background cells below the spill level and large areas of low-valued cells are appearing inside the circles. The red arc inside the upper-left circle is an example of a wave emitted at the expanding wave boundary that is traveling back toward the center. At this point, the maximum cell value was 86.

Figure 18. Simulation intermediate states after 5 and 10 minutes of execution time.

In Figure 19, we show the results at execution times of 15 and 20 minutes. We now see the development of filaments in all four circles. These filaments are not the final ones, however, because filaments are continuously being formed and destroyed as the simulation progresses. The maximum cell value is now 56. At 15 minutes, the circles were beginning to coalesce which resulted in a rapid jump in the maximum cell value. By a simulation time of 16 minutes, the maximum cell value has increased to 170 but by 20 minutes, the value has dropped to 142 with the state shown on the right.

Eventually, after 26 minutes, the simulation stops with a final state similar to that of Figure 16.

The next step is to perform a fractal analysis of the final distribution. Before we could do that, however, we needed to catalog the filaments. Because the filaments are not a result of the critical cell spill process, we needed to develop an alternate method for identifying them. They are obvious to the eye but it is not so easy to identify them using code. We didn’t try to solve the coding problem completely but instead, created code that makes an initial stab at tracing the filaments after which we used additional code to manually massage the result into the final state.

Figure 19. Simulation intermediate states after 15 minutes on the left and 20 minutes on the right.

This involved removing some possible filaments, merging or splitting others, adding or removing individual cells, and so on. In the case of the simulation run of Figure 16, we ended up with the 56 filaments shown in Figure 20.

Figure 20. The final 56 filaments of the simulation of Figure 16.

In Figure 21, we show a random selection of 15 of those filaments on the left which gives some idea of our definition of a filament. On the right, we show the same set of 15 filaments with their member structures expanded.

Figure 21. A random selection of 15 filaments out of a total of 56. On the right, we show the member spill structures expanded.

We made one more change before doing the fractal analysis. In the previous section, the mass of each (2-dimensional) structure was defined to be the count of cells in that structure. We are still working in 2 dimensions but, considering that the structures are compact objects, we can get a more realistic result if we define the masses in terms of the volume of each structure instead of its area. Thus we define the 3D mass to be m_struct = N_cells^(3/2) where N_cells is the count of cells in the structure. For the filaments, we define their size to be their total length and their mass to be the sum of the masses of their member structures.

We will now examine the fractal properties of these results and compare them with the results of Section 2, keeping in mind that the simulation is in 2 dimensions rather than 3. The results shown next are ensemble averages of 6 simulation runs. In Figure 22, we show the distribution of count versus mass. The green line is the least-squares fit which has a fractal dimension of 0.96. Also shown in black is the actual cosmic structure fractal curve discussed in Section 2 which has a dimension of 0.56.

Figure 22. Ensemble average count versus mass. The green line is the least-squares fit to the simulation results and the black line is that from actual cosmic structures discussed in Section 2.

In Figure 23, we show count versus size. In this case, the simulated dimension is 1.18 and the actual data value is 1.6.

Figure 23. Ensemble average count versus size.

The mass versus size result is shown in Figure 24. The simulated dimension is 2.35 and the data dimension is 2.86. In this case, we limited the least-squares fit to the leading slope of the curve which gives a good match to the actual data.

A problem with this model is that the filaments, which make up a large fraction of the objects with larger sizes, have simulated masses that are too small. The result is the difference between the actual and simulated curves in both Figure 22 and Figure 24 with the black lines being somewhat flatter than the simulation results. Increasing the filament masses would shift the large mass end of the curve to the right thus flattening the slope.

Figure 24. Ensemble average mass versus size.

Finally, we show the mass versus distance result in Figure 25. In this case, the co-dimension of 0.85 is essentially the same as the value of 1 obtained from the galaxy observations.

Figure 25. Ensemble average ICD mass versus distance.

Summing up, we have found that by embedding the sandpile idea in an expanding universe, we obtain a self-organizing system that evolves from a random beginning into a fractal structure that not only looks like the actual universe but has fractal properties that are in reasonable agreement with the observed properties of actual cosmic structures.

6. Discussion and Conclusions

In the previous section, we focused on the results from our modified sandpile model. We will now discuss in more detail certain aspects of the model that we only touched on previously. First is the expansion model. We used a constant reduction factor applied periodically. That is, all energies were multiplied by a reduction factor R after every so many seconds of execution time. Ignoring other energy transfers, the energy of a cell would be e_n = e_0 R^n after n reductions. In the continuous limit, this becomes an exponential reduction which mimics an exponential expansion during the Planck-era inflation. To determine R, we required that the accumulated reduction would reach a value of, say, R^n = 0.06 by the time that the simulation stopped. For an execution time of 26 minutes with the reduction applied 6 times per minute, the resulting factor would be R = 0.982. After a few trials, we found that the results are not particularly sensitive to the exact value of the reduction.
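The quoted factor is easy to verify: 26 minutes at 6 reductions per minute gives n = 156 applications, and solving R^n = 0.06 reproduces R ≈ 0.982. A one-line check:

```python
n_reductions = 26 * 6             # 26 minutes, reduction applied 6 times per minute
R = 0.06 ** (1 / n_reductions)    # solve R**n = 0.06 for R
print(f"R = {R:.3f}")             # 0.982, as quoted in the text
```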

A more sensitive issue is the spill model. In the original sandpile model, the cell values were integers, and the amounts transferred to the nearest neighbors and the amount retained were also integers. When expansion is included, the cells can no longer have integer values because of the non-integer reduction factor. At least initially, we wanted to stay as close to the original model as possible so for our first try, we retained the rule that the value transferred to the 4 nearest neighbors was an integer value, and instead of using the exact non-integer remainder, we rounded the calculated remainder of the spilling cell to the nearest integer. This method worked and is the method used to generate the simulation results shown in the previous section.

The rounding of the remainder does, however, introduce an annoying error because in the cases in which the rounding is upward, for example, the spilling cell would appear to gain a small amount of energy. If the spilling cell has an initial value of 22.85, a transfer of 5 to each of the four nearest neighbors would leave a remainder of 2.85, which would get rounded up to a value of 3. Rounding down also occurs but the rate is about 100 times smaller. In a typical simulation run, there are about 12 million spills of which about 8% result in a rounding up of the remainder. The average ratio of the rounding error to the cell's pre-spill value was about 1.4% and the accumulated rounding per spill is about 0.006. Even though these numbers seem to indicate that rounding is of no importance, we decided to try eliminating the energy discrepancy by retaining the actual non-integer value of the remainder. The result was that the web structure didn't happen. We tried a few other combinations of non-integer values for the transfer value and remainder but no matter how we went about it, the changes destroyed the web structure of the result. We then got more systematic and either added or subtracted a small amount to or from the rounded-up integer remainder. What we discovered is that subtracting even as little as 0.01 from the integer value destroys the web structure but as much as 0.4 can be added to the integer value and the web structure still results. Using our earlier example, rounding 2.85 to any value between 3 and 3.4 works but rounding 2.85 to 2.99 does not work.
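For concreteness, here is a sketch of the float-valued spill rule just described, in our own notation: integer shares to the neighbors and a remainder rounded to the nearest integer, reproducing the 22.85 example.

```python
def spill_rounded(value):
    """Spill rule of Section 5 for float-valued cells: an integer share
    goes to each of the 4 neighbors and the remainder is rounded to the
    nearest integer. Returns (share per neighbor, retained value)."""
    share = int(value // 4)             # e.g. 22.85 -> 5 to each neighbor
    remainder = value - 4 * share       # 2.85 in the example
    retained = float(round(remainder))  # rounded up to 3.0; the small energy
    return share, retained              # gain is what preserves the web

print(spill_rounded(22.85))  # (5, 3.0)
```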

The implied lack of energy conservation is easily dismissed: during the Planck era, time, distance, and energy were uncertain, so strict energy conservation would not have had any meaning. Nevertheless, one would like to understand why this happens. In the future, various extensions of the model will be tried. First will be variations of the spill model to better understand the rounding-up problem. Another issue is that sandpile models are sequential, whereas in the actual universe, cells would be spilling and receiving energy simultaneously, to the extent that one can talk about simultaneity. We also need to move to 3 dimensions. We have put some effort into a 3-dimensional version but have not yet succeeded in achieving a web structure result. What adds to the difficulty is that going to 3 dimensions results in a huge increase in the model execution times. Even with the side dimension reduced to 2 × 50 + 1 = 101, there are more than 1 million cells (101^3 = 1,030,301), and a single simulation run jumps from less than 1/2 hour to about 1 1/2 days. The problem is not the increase in the number of cells per se but the huge increase in the number of propagating waves radiating from each of those cells.

We still have far to go, but the class of model we have been describing is of great importance because it allows one to talk about, and make predictions about, the beginnings of the universe without needing a precise notion of time and distance. The uncertainty of time and distance is essential for the formation of structures as large as superclusters; at the same time, it means that no model based on a differentiable manifold is going to work. This model presumes some notion of cause and effect but nothing more. A defining feature of fractal structures is that they do not possess any characteristic scale in either time or distance, and since time and distance were uncertain, neither did the initial universe. That being the case, perhaps we can turn the argument around and say that the only structures possible in the initial universe had to have been fractal, because any other form would have required the existence of characteristic scales.

The model we have presented is not the final answer, but to our knowledge, it is the only existing model that spontaneously generates a fractal web structure from a Poisson beginning.

Appendix

In this appendix, we will describe the method we used to catalog the structures and to identify duplicates. Each cell in the grid is specified in terms of its row and column. The cell at the center has coordinates (0, 0), with both $i_{row}$ and $i_{col}$ ranging from $-N_{1/2}$ to $N_{1/2}$. The full length of each side of the grid is then $l_{side} = 2N_{1/2} + 1$. To improve the efficiency of sorts and searches, we also assigned to each cell an index given by

$$\text{index}_{cell} = ( i_{row} + N_{1/2} )\, l_{side} + ( i_{col} + N_{1/2} )$$

which runs from 0 up to one less than the number of cells. During each run, we sort the cell list of each discovered structure by cell index and then create a hash using the formula,

    hash = 17
    For i = 0 To N_cells - 1
        hash = (hash * 19 + index_i) mod N_prime
    Next

where $N_{cells}$ is the number of cells in the structure and $N_{prime} = 21262211$ is a prime number close to Int32.MaxValue/101, where Int32.MaxValue is the maximum value of a 32-bit signed integer. This hash works quite well: we found only 2 or 3 collisions in many thousands of structures. We then created a list of structure data objects, each containing the hash number, the mass and size of the corresponding structure, and a list of all the cells, for every structure that had the same hash. Because the hash is deterministic, identical structures always return the same hash, so duplicate entries appear under the same hash. Note too that the hash depends on the exact location of each member cell, so otherwise identical structures located at different positions in the grid are considered to be different. Finally, to check that the multiple entries are indeed duplicates, we compare their cell index lists, which is straightforward because the lists have been sorted by cell index. If the lists are identical, the structures are the same. If they are different, then there is a hash collision, but that circumstance almost never happens.
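Putting the appendix together, a sketch of the cataloging procedure follows. The index and hash formulas are as given above; the grid half-width N_HALF and the container types are our choices for illustration.

    N_HALF = 500                      # grid half-width N_1/2 (illustrative value)
    L_SIDE = 2 * N_HALF + 1
    N_PRIME = 21262211                # prime close to Int32.MaxValue / 101

    def cell_index(i_row, i_col):
        """Map (row, col), each in [-N_HALF, N_HALF], to 0 .. L_SIDE**2 - 1."""
        return (i_row + N_HALF) * L_SIDE + (i_col + N_HALF)

    def structure_hash(cells):
        """Hash a structure from its sorted list of cell indices."""
        indices = sorted(cell_index(r, c) for r, c in cells)
        h = 17
        for idx in indices:
            h = (h * 19 + idx) % N_PRIME
        return h, indices

    def catalog(structures):
        """Group structures by hash, then confirm duplicates by comparing
        the sorted index lists; different lists under one hash are a
        (rare) collision."""
        by_hash = {}
        for cells in structures:
            h, indices = structure_hash(cells)
            by_hash.setdefault(h, []).append(tuple(indices))
        duplicates = collisions = 0
        for entries in by_hash.values():
            unique = set(entries)
            duplicates += len(entries) - len(unique)
            collisions += len(unique) - 1
        return duplicates, collisions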

NOTES

1Note that such a collapse results in compaction rather than accretion. The former results from a fixed amount of material getting smaller in volume whereas accretion implies an accumulation over time of material from outside the original structure.

2We will use capital D to denote fractal dimension and a lower case d to denote the spatial dimension following the convention used in [8]. We also arrange the formulas so that the dimensions are always positive numbers.

3A comparison of Figure 2 from [10] and Figure 1 from [11] shows a small difference in normalization. We have taken the SDSS data from [10], which does not exactly match the large-distance value of the ΛCDM curve. In [11], on the other hand, the SDSS curve does match the ΛCDM curve at large distances.

4In BTW, the authors refer to the count of cells in a structure as its “size”. We instead use the term “mass” for the count and use the term “size” to denote the largest distance between any two cells of the structure. In set theory, the latter is known as the “diameter” of the set.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Botke, J.C. (2023) Cosmology with Time-Varying Curvature—A Summary. In: Mosquera Cuesta, H.J. and Yeap, K.H., Eds., Cosmology—The Past, Present and Future of the Universe, IntechOpen.
https://www.intechopen.com/online-first/1167416
[2] Botke, J.C. (2023) Dark Matter, Dark Energy, and Occam’s Razor. Journal of Modern Physics, 14, 1641-1661.
https://doi.org/10.4236/jmp.2023.1412096
[3] Botke, J.C. (2024) The Reality of Baryonic Acoustic Oscillations. Journal of Modern Physics, 15, 377-402.
https://doi.org/10.4236/jmp.2024.153016
[4] Botke, J.C. (2023) The Origin of Cosmic Structures Part 5—Resolution of the Hubble Tension Problem. Journal of High Energy Physics, Gravitation and Cosmology, 9, 60-82.
https://doi.org/10.4236/jhepgc.2023.91007
[5] Botke, J.C. (2022) The Origin of Cosmic Structures Part 4—Nucleosynthesis. Journal of High Energy Physics, Gravitation and Cosmology, 8, 768-799.
https://doi.org/10.4236/jhepgc.2022.83053
[6] Botke, J.C. (2024) The Origin of Cosmic Structures Part 6: CMB Anisotropy. Journal of High Energy Physics, Gravitation and Cosmology, 10, 257-276.
https://doi.org/10.4236/jhepgc.2024.101020
[7] Botke, J.C. (2021) The Origin of Cosmic Structures Part 1—Stars to Superclusters. Journal of High Energy Physics, Gravitation and Cosmology, 7, 1373-1409.
https://doi.org/10.4236/jhepgc.2021.74085
[8] Botke, J.C. (2022) The Origin of Cosmic Structures Part 3—Supermassive Black Holes and Galaxy Cluster Evolution. Journal of High Energy Physics, Gravitation and Cosmology, 8, 345-371.
https://doi.org/10.4236/jhepgc.2022.82028
[9] Gabrielli, A., Sylos Labini, F., Joyce, M. and Pietronero, L. (2005) Statistical Physics for Cosmic Structures. Springer.
[10] Sylos Labini, F. and Pietronero, L. (2008) Statistical Physics for Cosmic Structures. The European Physical Journal B, 64, 615-623.
https://doi.org/10.1140/epjb/e2008-00002-8
[11] Joyce, M., Sylos Labini, F., Gabrielli, A., Montuori, M. and Pietronero, L. (2005) Basic Properties of Galaxy Clustering in the Light of Recent Results from the Sloan Digital Sky Survey. Astronomy & Astrophysics, 443, 11-16.
https://doi.org/10.1051/0004-6361:20053658
[12] Bak, P., Tang, C. and Wiesenfeld, K. (1987) Self-Organized Criticality: An Explanation of the 1/f Noise. Physical Review Letters, 59, 381-384.
https://doi.org/10.1103/physrevlett.59.381
[13] Falconer, K. (2014) Fractal Geometry, Mathematical Foundations and Applications. 3rd Edition, John Wiley and Sons, Ltd.
[14] Jenkins, A., Frenk, C.S., Pearce, F.R., et al. (1998) Evolution of Structure in Cold Dark Matter Universes. The Astrophysical Journal, 499, 20-40.
https://iopscience.iop.org/article/10.1086/305615/pdf
[15] Dhar, D. (1990) Self-Organized Critical State of Sandpile Automaton Models. Physical Review Letters, 64, 1613-1616.
https://doi.org/10.1103/physrevlett.64.1613
