Intrinsic Noise Monitoring of Complex Systems

Abstract

The power-density function of the noise spectrum of open, complex systems changes as a power of the frequency. We show that the fluctuation-based and the noise-driven descriptions of the colored noise power density are equivalent. Based on this, we introduce a scale-independent invariant for monitoring the dynamics of a complex system. Monitoring the noise spectrum of the system supports the forecast of failure, the timing of desired regular corrections and/or the assessment of the remaining operating life of the system, indicating possible faults before they happen and predicting deterioration such as wear, tear and fatigue in a still properly working system. These considerations are highly applicable to living systems and their preventive care.

Share and Cite:

Szasz, O., Szigeti, G. and Szasz, A. (2017) Intrinsic Noise Monitoring of Complex Systems. Open Journal of Biophysics, 7, 197-215. doi: 10.4236/ojbiphy.2017.74015.

1. Introduction

Testing the proper functioning of complex systems during their long-term service, as they age and undergo wear and tear, is of vital importance for preventive maintenance and for the operating life. The problem is complex and complicated, because we must draw conclusions about the properties of a given system from the results of its tests: we must specify the characteristic values of the type in question by using the operational data of several individual parts. The problem can practically be traced back to a single root: the systems in use are open and connected to their environment through a number of interactions; therefore, they cannot be considered closed, even for the duration of a measurement. They are definitely open from an energetic point of view (energy exchange with the environment). Interactions indispensable to the operation (those on which the effect of the system is directed, and retroactive effects), influences of the environment (environmental loads, e.g. temperature, contamination, pressure, rain etc.), as well as the user's habits and conditions (e.g. early morning usage, usual usage order, usual intensity, direction etc.) all affect the ageing of the actual system. Permanent control and maintenance service is needed to keep the complex function active. We will show that the characteristic values of the energy exchange (input and output), as well as the non-deprivable substantial characteristics, can be used to check the overall process.

Measurements of any dynamic effects are always noisy. The desired signal (electrical, mechanical, etc.) and the measured one differ. In these cases, measurement clarity is characterized by the signal-to-noise ratio. Dynamic effects and changes could be noise-free only in very simple and reversible cases (in energetically closed systems). This is practically a theoretical idealization, because in reality noise is always present as the random or systematic fluctuation of the given signal (measured, set, used etc.) [1] [2].

The noise source is composed of many-sided interactions; the continuous energy and entropy/information exchange of open dynamic systems, the mutual dependence of the individual subsystems and the actual noise spectrum are formed in a synergetic way [3]. Consequently, in every real case the desired effect is accompanied by a noise spectrum composed of the specific features of the dynamic system. Thus, the noise is the manifestation, to a certain degree, of parameters, processes, dynamic behaviour etc. that always arise but are not directly involved in the given examination.

In the course of the usual wear tests and quality examinations, each element of the system is examined separately with several sensors, and during these measurements one tries to eliminate or minimize the noise. Consequently, the aim of these measurement procedures is to filter the noise and create the best possible signal-to-noise ratio in order to obtain the most exact information possible regarding the given partial system.

In the case of open, dissipative systems (basically, everything that realizes non-spontaneous thermodynamic changes, e.g. heat engines, biological systems, electromagnetic radiators etc.), the reduction of noise by fixing the interactions is impossible, because the open, dissipative character presumes a definite interaction with the environment. For this reason, in real, irreversible dynamic systems we may consider only the second possibility, namely that we must reckon with noise anyway, and, at most, the chosen dynamical methods may suppress the noise and bring out the "useful" signal as far as possible.

The noise, however, provides information on the interactions (inside and/or outside) of the examined system. In this view, the measured signal is not useless noise, but a set of fluctuation properties that can carry the systemic changes of the complex system.

Our objective is to obtain information on the complete dynamics of complex systems so that it may be used for planning processes and qualitative examinations. Our concept is based on the recognition that all the dynamics are included in the noise: practically all those dynamic variables appear in it whose interactions have a share in the creation of the given (desired/useful) signal. Moreover, the noise spectrum gives an account of the correlations within the system. Therefore, the examination can be carried out on the whole system, and the system's operation can be analysed from its noise spectrum. All the failures arising from wear, tear and fatigue processes (in general, through stochastic changes) result in a continuous change of the noise spectrum. Therefore, measurement of the noise spectrum allows the prediction of wear and tear (fatigue etc.) processes.

2. Simple Derivation and Description of Colored Noises

Although a complex system has a great exchange of information with its environment, it can in general be characterized by a stationary state, that is, by a state of dynamic equilibrium. Therefore, the dynamic equilibrium characterizing the appropriate operation allows the time-dependent effect $H(x,t)$ to be described as a fluctuation around the average, that is:

$H(x,t) = \langle H(x,t)\rangle + \delta(H(x,t))$ (1)

where $\langle H(x,t)\rangle$ denotes the average, and $\delta(H(x,t))$ is the actual deviation from the average (the fluctuation). (Of course, there are also dynamic and non-equilibrium systems (e.g. explosives); however, their effects are measurable as a rough average (e.g. the relative destructive effect, measurable e.g. in dynamite equivalent), which is also specified by a fluctuation around the average.) Later on, we are going to examine the time behavior of the process at a specified (fixed) x; therefore, the variable x will not be indicated hereafter.

The process is random if the variable is stochastic, and in this case, the power density function of process H is

$S_H(f) = \frac{\left| H(f)\, H^{*}(f) \right|}{2\,\Delta f}$ (2)

where $\Delta f$ is the effective bandwidth of the Fourier integral, * denotes the complex conjugate, and

$H(f) = \int_{0}^{\infty} \exp(-2\pi i f t)\, H(t)\, \mathrm{d}t$ (3)

In other words, according to Voss [4]:

$S_H(f) = \operatorname{Re}\left\{ \int_{0}^{\infty} \exp(-2\pi i f t)\, C_H(t)\, \mathrm{d}t \right\}$ (4)

where $C_H(t) = \langle \delta(H(t))\, \delta(H(0)) \rangle$ is the autocorrelation (pair-correlation) function of process H between two points of time, that is:

$C_H(t) = \left\langle \left(H(t) - \langle H(t)\rangle\right)\left(H(t_0) - \langle H(t_0)\rangle\right) \right\rangle = \langle H(t_0)\, H(t_0+t)\rangle - \langle H(t_0)\rangle^{2}$ (5)

The functions $S_H(f)$ and $C_H(t)$ are naturally not independent, as in addition to Equation (4), on the basis of the Wiener-Khintchine relationship [1] [3] the following is valid:

$C_H(t) = \operatorname{Re}\left\{ \int_{0}^{\infty} \exp(2\pi i f t)\, S_H(f)\, \mathrm{d}f \right\}$ (6)

If the correlation function $C_H(t)$ decays with a time constant $\tau$ (this is the case in the majority of real systems), namely:

$C_H(t) = \exp\left(-\frac{t}{\tau}\right)$ (7)

then

$S_H(f) = \frac{\tau}{1 + (\omega\tau)^{2}}, \qquad \omega = 2\pi f$ (8)
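As a minimal numerical illustration (a sketch assuming Python with NumPy; the value of τ, the time grid and the test frequencies are arbitrary choices), the Lorentzian form (8) can be checked directly from the correlation function (7) via Equation (4):

```python
import numpy as np

# Check of Eqs. (4), (7), (8): the cosine transform of an exponentially
# decaying correlation function is a Lorentzian spectrum.
tau = 0.5                                  # illustrative correlation time constant
t = np.linspace(0.0, 50.0, 200001)         # long enough for exp(-t/tau) to decay completely
C = np.exp(-t / tau)                       # Eq. (7)

def S_numeric(f):
    w = 2.0 * np.pi * f
    y = np.cos(w * t) * C                  # real part of exp(-2*pi*i*f*t) * C_H(t), Eq. (4)
    return np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0   # trapezoidal integration

for f in (0.1, 1.0, 10.0):
    w = 2.0 * np.pi * f
    print(f"f = {f:5.1f}:  numeric = {S_numeric(f):.6f}   Eq. (8) = {tau / (1.0 + (w * tau) ** 2):.6f}")
```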

If $(\omega\tau) \ll 1$ (this is valid in the case of fast decay of correlation or at low frequencies), then $S_H(f)$ is constant, the noise is independent of the frequency and we get so-called white noise.

If multiple fluctuations exist with randomly distributed time constants of the correlation in Equation (7), and $D_H(\tau)$ denotes their distribution for the given process H, then

$S_H(\omega) = \operatorname{Re}\left\{ \int_{0}^{\infty}\!\int_{\tau_{\min}}^{\tau_{\max}} \exp(-i\omega t)\, \exp\!\left(-\frac{t}{\tau}\right) D_H(\tau)\, \mathrm{d}\tau\, \mathrm{d}t \right\} = \int_{\tau_{\min}}^{\tau_{\max}} \frac{\tau}{1 + (\omega\tau)^{2}}\, D_H(\tau)\, \mathrm{d}\tau$ (9)

If the $D_H(\tau)$ function is scale invariant, namely if, for example (van der Ziel [5]),

$D_H(\tau)\, \mathrm{d}\tau = \frac{\mathrm{d}\tau}{\tau}$ (10)

then we get so-called pink noise (Flicker noise, 1/f noise, etc.):

$S_H(f) = \frac{1}{f}$ (11)
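The band-limited version of Equations (9)-(11) can also be illustrated numerically; the sketch below (assuming Python with NumPy, with arbitrary cut-offs $\tau_{\min}$ and $\tau_{\max}$) superposes Lorentzians weighted by $D_H(\tau)\,\mathrm{d}\tau = \mathrm{d}\tau/\tau$ and fits the log-log slope, which comes out close to -1 inside the band:

```python
import numpy as np

# Eq. (9) with the scale-invariant weight D_H(tau) d tau = d tau / tau (Eq. (10)):
# superposing Lorentzians between the cut-offs tau_min and tau_max gives a spectrum
# close to 1/f for frequencies well inside the band 1/tau_max << f << 1/tau_min.
tau_min, tau_max = 1e-4, 1e2                              # illustrative cut-off values
u = np.linspace(np.log(tau_min), np.log(tau_max), 20000)
tau, du = np.exp(u), u[1] - u[0]                          # log-spaced grid: d tau / tau = du

def S(f):
    w = 2.0 * np.pi * f
    # integrand of Eq. (9): [tau / (1 + (w tau)^2)] * D_H(tau) d tau, with D_H(tau) d tau = du
    return np.sum(tau / (1.0 + (w * tau) ** 2) * du)

freqs = np.logspace(-1, 2, 7)                             # frequencies inside the band
spec = np.array([S(f) for f in freqs])
slope = np.polyfit(np.log10(freqs), np.log10(spec), 1)[0]
print("log-log slope inside the band:", round(slope, 3))  # close to -1, i.e. pink noise
```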

In this multiple-fluctuation case, the distribution used in Equation (9) cannot be normalized, so

$\int_{0}^{\infty} D_H(\tau)\, \mathrm{d}\tau \neq \mathrm{const.}$ (12)

Therefore, a frequency cut-off must be applied at the high and low boundaries. We can also demonstrate [6] that the lognormal distribution of $D_H(\tau)$:

$D_H(\tau) = \frac{1}{\sqrt{\pi}\,\tau\sigma} \exp\!\left(-\frac{\left[\log\left(\tau/\langle\tau\rangle\right)\right]^{2}}{2\sigma^{2}}\right)$ (13)

results in 1/f noise, as shown in Equation (11). In this case, there is no normalization problem, and no sharp cut-off is needed. We may also demonstrate [4] that the distribution of a product of random variables tends to lognormal, and thus it results in Equation (11) every time. However, if there is no dominant lognormal distribution, but the system can be described by a wide range of distributions, then

$S_H(f) = \frac{1}{f^{\alpha}}$ (14)

where, basically, we cannot fix the value of $\alpha$, because it can also depend considerably on the general parameters of the system (e.g. temperature, pressure etc.) [4]. In this way, from the slope of the log-log representation of Equation (14) we directly obtain the character of the noise of process H:

$\alpha = -\frac{\partial \log\left[S_H(f)\right]}{\partial \log\left[f\right]}$ (15)
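In practice the exponent of Equation (15) can be estimated from a measured record as the negative log-log slope of its power spectral density. The following sketch assumes Python with NumPy/SciPy; the synthetic test signal, window lengths and random seed are illustrative assumptions only:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)

def colored_noise(alpha, n, fs=1.0):
    """Generate 1/f^alpha noise by shaping white Gaussian noise in Fourier space."""
    spec = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    f[0] = f[1]                            # avoid division by zero at the DC bin
    return np.fft.irfft(spec * f ** (-alpha / 2.0), n)

def noise_exponent(x, fs=1.0):
    """Estimate alpha of Eq. (15): minus the log-log slope of the power spectral density."""
    f, S = welch(x, fs=fs, nperseg=4096)
    mask = f > 0
    return -np.polyfit(np.log10(f[mask]), np.log10(S[mask]), 1)[0]

x = colored_noise(alpha=1.0, n=2 ** 18)    # synthetic test record with a known exponent
print("estimated alpha:", round(noise_exponent(x), 2))    # expected to be close to 1.0
```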

The changing noise of the dynamical variables, as a spectrum characterizing the system, presumes the existence of a certain order and self-organization in the examined system. The self-organization comes into existence by mutual determinacy and through the complex operation of partial systems building on one another, requiring the work of the others and determining the dynamics of the others by causality. Complex systems with many-sided connections to their environment are in a non-equilibrium, non-stationary state and have a high-level hierarchical structure. The subsystems forming the structure are connected to each other in many ways through their physical and chemical processes or other information networks. The amplitude of the physical and chemical information signals generated by the various individual subsystems, their characteristic times and other properties can change over a wide range; e.g. even the simplest biological systems show a great variety of processes on the individual characteristic time scales and are connected to each other by scaling [7] [8] [9]. As we have shown, the noises carry dynamical information about the operating systems and may also give information on the wearing phases of the given structure. In general, the noise of any open dynamical system can be scaled by the $1/f^{\alpha}$ spectrum [10].

3. Fluctuation of Diagnostic Quantities

Every complex system can be decomposed into numerous simple subsystems, the state of which can be described by some physical parameters characterizing the subsystems. This means that the state of the whole system is known if we know the state of every one of its subsystems. Let us denote by $\bar{X}$ the vector made of the state parameters of the subsystems, hereinafter called the micro-state of the system. In terms of diagnostics, a selectable and limited number of measurable quantities $F_i$ are characteristic of the macro system, on the basis of which we can judge whether the functioning of the system complies with the requirements or not. These quantities are called macroscopic diagnostic state parameters, and the vector $\bar{F}$ made of them is the so-called diagnostic state vector. As the number of these parameters is significantly smaller than the number of state parameters serving for the description of the micro-state, the description of the system by the diagnostic state parameters is not complete from a microscopic point of view.

Let us assume that the diagnostic state parameters can be described as functions of the micro-state and, since the equipment interacts with its surroundings, of the external parameters denoted by the vector $\bar{Y}$, thus:

$F_i = F_i(\bar{X}, \bar{Y}), \qquad (i = 1, 2, \ldots, n)$ (16)

As the number of micro-state parameters is large ($\dim(\bar{X}) \gg n$), with the knowledge of the diagnostic state parameters we can make no more than statistical statements regarding the micro-state of the system characterized by the diagnostic state vector, since many kinds of micro-states may belong to the same macro-state, and these micro-states can change quickly in time.

This means that we can specify, at most, the probability $w(\bar{X}, t)\,\mathrm{d}\bar{X}$ that the micro-state falls into the interval $(\bar{X}, \bar{X} + \mathrm{d}\bar{X})$ at time t, that is:

$P(\bar{X} < \bar{\xi} \le \bar{X} + \mathrm{d}\bar{X}) = w(\bar{X}, t)\, \mathrm{d}\bar{X}$ (17)

As the micro-state of the system may change rapidly over time, the diagnostic state parameters of (16) fluctuate in time; consequently, they are stochastic variables.

Such variables can be characterized in the simplest way by the

$\langle F_i \rangle = \int_{(X)} F_i(\bar{X}, \bar{Y})\, w(\bar{X}, t)\, \mathrm{d}\bar{X}, \qquad (i = 1, 2, \ldots, n)$ (18)

mean value and the

$\sigma_{F_i} = \sqrt{\left\langle \left(F_i - \langle F_i\rangle\right)^{2} \right\rangle}, \qquad (i = 1, 2, \ldots, n)$ (19)

mean-square deviation of the quantities in Equation (16).

In accordance with the Chebyshev theorem, the probability of $|F_i - \langle F_i\rangle| > a$ is

$P\left(|F_i - \langle F_i\rangle| > a\right) \le \frac{\sigma_{F_i}^{2}}{a^{2}} = \frac{\left\langle (F_i - \langle F_i\rangle)^{2}\right\rangle}{a^{2}}$ (20)

If $\sigma_{F_i}$ is very small, then we may conclude from the above inequality that the probability of a deviation is small; in this way, $F_i$ and the average in Equation (18) practically coincide.

If this is not the case, we may choose to characterize the

$f_i = F_i - \langle F_i\rangle, \qquad (i = 1, 2, \ldots, n)$ (21)

functions expressing the stochastic fluctuations. In engineering practice, these are characterized by the power spectral density. An additional advantage of this description is that from the distortion of the power spectral density we may infer the occurrence of some future error, even if, on the basis of the average of the diagnostic state vector, the system can still be considered adequate.

4. Stochastic Description

Let us suppose that the fluctuation introduced earlier can be divided into the sum of semiperiodic stochastic processes on different time scales that are statistically independent. Clearly, the semiperiodic stochastic processes on different time scales have different frequency scales as well.

We assume that every component process like this is statistically self-similar.

The stochastic process $X(t)$ is memory-less if the increment

$X(t + \mathrm{d}t) - X(t)$ (22)

can be expressed in the form of

$X(t + \mathrm{d}t) - X(t) = \Theta[X(t), t, \mathrm{d}t]$ (23)

In general, this is a Markov process [14].

Let us assume that $\Theta[X(t), t, \mathrm{d}t]$ is a smooth function of the variables $X, t, \mathrm{d}t$ and that $X(t)$ is continuous; then:

$\lim_{\mathrm{d}t \to 0} X(t + \mathrm{d}t) = X(t)$ (24)

The stochastic process is self-similar in the sense of [11] if the difference can be divided into the sum of statistically independent increments, which then have a normal distribution within the interval. Here we may also see the Markov character: memory-less and recursive.

$X(t+\mathrm{d}t) - X(t) = \Theta[X(t), t, \mathrm{d}t] = \sum_{i=1}^{n}\left[ X\!\left(t + i\,\tfrac{\mathrm{d}t}{n}\right) - X\!\left(t + (i-1)\,\tfrac{\mathrm{d}t}{n}\right)\right] = \sum_{i=1}^{n} \Theta\!\left[ X\!\left(t + (i-1)\,\tfrac{\mathrm{d}t}{n}\right),\; t + (i-1)\,\tfrac{\mathrm{d}t}{n},\; \tfrac{\mathrm{d}t}{n} \right]$ (25)

Since $\mathrm{d}t$ can be chosen arbitrarily small, the times $t_{i-1} = t + (i-1)\frac{\mathrm{d}t}{n}$ can approach t arbitrarily closely by choosing a suitably high value of n. Therefore, for adequately high n, by utilizing the continuity we get from the above equation that

$t_{i-1} \to t, \qquad X(t_{i-1}) = X(t), \qquad \Theta[X(t), t, \mathrm{d}t] = \sum_{i=1}^{n} \Theta_i\!\left[X(t), t, \tfrac{\mathrm{d}t}{n}\right]$ (26)

Here, we may consider the expressions $\Theta_i\!\left[X(t), t, \frac{\mathrm{d}t}{n}\right]$ as representations of the variables $\Theta\!\left[X(t), t, \frac{\mathrm{d}t}{n}\right]$. These are statistically independent because the process is memory-less. Since n is arbitrarily high and $\Theta[X(t), t, \mathrm{d}t]$ is the sum of n statistically independent stochastic variables $\Theta_i\!\left[X(t), t, \frac{\mathrm{d}t}{n}\right]$, we may conclude from the central limit theorem that this stochastic variable has a normal distribution. In accordance with the above, this is also true for the $\Theta\!\left[X(t), t, \frac{\mathrm{d}t}{n}\right]$ stochastic variables.

The following relations follow from the properties of normally distributed stochastic variables:

$\langle \Theta[X(t), t, \mathrm{d}t] \rangle = n \left\langle \Theta\!\left[X(t), t, \tfrac{\mathrm{d}t}{n}\right] \right\rangle, \qquad \sigma^{2}\!\left(\Theta[X(t), t, \mathrm{d}t]\right) = n\, \sigma^{2}\!\left(\Theta\!\left[X(t), t, \tfrac{\mathrm{d}t}{n}\right]\right)$ (27)

where $\langle\,\rangle$ denotes the mean value and $\sigma^{2}(\,)$ the mean-square deviation. The solutions of these functional equations are:

$\langle \Theta[X(t), t, \mathrm{d}t] \rangle = A[X(t), t]\, \mathrm{d}t, \qquad \sigma^{2}\!\left(\Theta[X(t), t, \mathrm{d}t]\right) = D[X(t), t]\, \mathrm{d}t$ (28)

where A and D are smooth functions of X and t, and D is positive. By taking into consideration the normality of $X(t+\mathrm{d}t) - X(t) = \Theta[X(t), t, \mathrm{d}t]$ and the above results, we get from Equation (23) that

$X(t+\mathrm{d}t) - X(t) = \Theta[X(t), t, \mathrm{d}t] = N\!\left[A(X,t)\,\mathrm{d}t,\; D(X,t)\,\mathrm{d}t\right] = A(X,t)\,\mathrm{d}t + D^{\frac{1}{2}}\, N(0,1)\, \mathrm{d}t^{\frac{1}{2}}$ (29)

where N ( 0 , 1 ) is a normally distributed stochastic process of zero average and unitary mean-square deviation:

$N(x) = \frac{1}{\sqrt{\pi}\,\sigma x} \exp\!\left(-\frac{\ln^{2}\!\left(x/\langle x\rangle\right)}{2\sigma^{2}}\right)$ (30)

If we change over to the differential equation, we get the

$\frac{\mathrm{d}X}{\mathrm{d}t} = A(X,t) + D^{\frac{1}{2}}(X,t)\, \Gamma(t)$ (31)

non-homogeneous equation, where

$\Gamma(t) = \lim_{\mathrm{d}t \to 0} N\!\left(0, \mathrm{d}t^{-1}\right)$ (32)

is a normally distributed white noise. This is a generalized Langevin equation.

Let us take the simplest one from these stochastic processes:

$\frac{\mathrm{d}X}{\mathrm{d}t} = -\frac{1}{\tau} X + D^{\frac{1}{2}}\, \Gamma(t)$ (33)

Equation (33) describes the Ornstein-Uhlenbeck stochastic process [12]. The mean value decays exponentially, and a white noise drives it. This equation describes the noise of a system comprising an energy accumulator (e.g. a mass, a revolving mass, a capacitor, an inductance) and a linear attenuation (e.g. resistance of the medium, ohmic resistance), excited by white noise.
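A minimal Euler-Maruyama simulation of Equation (33) is sketched below (assuming Python with NumPy/SciPy and illustrative parameter values); the estimated spectrum should agree with the Lorentzian form of Equation (34), up to statistical scatter and the factor required by the one-sided convention of the estimator:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)

# Euler-Maruyama simulation of Eq. (33): dX/dt = -X/tau + sqrt(D) * Gamma(t)
tau, D = 1.0, 1.0                 # illustrative parameters
dt, n = 0.01, 2 ** 20             # time step and number of steps
x = np.empty(n)
x[0] = 0.0
kicks = np.sqrt(D * dt) * rng.standard_normal(n - 1)
for k in range(n - 1):
    x[k + 1] = x[k] - x[k] / tau * dt + kicks[k]

f, S_est = welch(x, fs=1.0 / dt, nperseg=2 ** 14)
# Eq. (34), doubled to match the one-sided PSD convention returned by welch
S_theory = 2.0 * D * tau ** 2 / (1.0 + (2.0 * np.pi * f * tau) ** 2)

for f0 in (0.05, 0.5):            # one point below and one above the corner 1/(2*pi*tau)
    i = np.argmin(np.abs(f - f0))
    print(f"f = {f[i]:.3f}:  simulated = {S_est[i]:.3f}   Eq. (34) = {S_theory[i]:.3f}")
```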

On the basis of simple considerations, the power spectral density of the Ornstein-Uhlenbeck process is:

$S(\omega, \tau) = \frac{D\tau^{2}}{1 + (\omega\tau)^{2}}$ (34)

where τ is the time constant of the system and the spectral density is similar to (8). At the same time, this can be considered as the natural time scale of the stochastic process. Let us introduce a frequency scale by applying the definition:

$\lambda = \frac{1}{\tau}$ (35)

To characterize the stochastic processes, let $G(\lambda)\, \mathrm{d}\lambda$ be the number of stochastic processes falling into the frequency interval $(\lambda, \lambda + \mathrm{d}\lambda)$; then for the energy spectrum of the stochastic processes falling into the $(\lambda_1, \lambda_2)$ interval of frequency scales we have

$S(\omega, \lambda_1, \lambda_2) = \int_{\lambda_1}^{\lambda_2} \frac{D\, G(\lambda)}{\lambda^{2} + \omega^{2}}\, \mathrm{d}\lambda$ (36)

If the distribution is uniform, namely, if

$G(\lambda)\, \mathrm{d}\lambda = \frac{\mathrm{d}\lambda}{\lambda_2 - \lambda_1}$ (37)

we get that

$S(f, \lambda_1, \lambda_2) = \int_{\lambda_1}^{\lambda_2} \frac{D\, G(\lambda)}{\lambda^{2} + \omega^{2}}\, \mathrm{d}\lambda = \begin{cases} D, & \text{if } 0 < \omega \ll \lambda_1 \ll \lambda_2 \\ \dfrac{D\pi}{2\omega(\lambda_2 - \lambda_1)}, & \text{if } \lambda_1 \ll \omega \ll \lambda_2 \\ \dfrac{D}{\omega^{2}}, & \text{if } \lambda_1 \ll \lambda_2 \ll \omega \end{cases}$ (38)

This is a well-known result, with white noise, pink (1/f) noise and Brownian (Wiener) noise in the first, second and third interval, respectively.
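A numerical sketch of Equations (36)-(38) (assuming Python with NumPy; $\lambda_1$, $\lambda_2$ and the test bands are arbitrary illustration values) reproduces the three regimes:

```python
import numpy as np

# Numerical illustration of Eq. (38): a uniform distribution of relaxation
# rates lambda in (lambda1, lambda2) gives white, 1/f and 1/f^2 regimes.
D, lam1, lam2 = 1.0, 1e-2, 1e2                 # illustrative values
lam = np.linspace(lam1, lam2, 2_000_000)
dlam = lam[1] - lam[0]

def S(w):
    # Eq. (36) with the uniform distribution of Eq. (37)
    return np.sum(D / (lam2 - lam1) / (lam ** 2 + w ** 2)) * dlam

def band_slope(w_lo, w_hi):
    w = np.logspace(np.log10(w_lo), np.log10(w_hi), 5)
    s = np.array([S(wi) for wi in w])
    return np.polyfit(np.log10(w), np.log10(s), 1)[0]

print("slope for omega << lambda1           :", round(band_slope(1e-4, 1e-3), 2))  # ~  0 (white)
print("slope for lambda1 << omega << lambda2:", round(band_slope(1e0, 1e1), 2))    # ~ -1 (pink)
print("slope for omega >> lambda2           :", round(band_slope(1e3, 1e4), 2))    # ~ -2 (Wiener)
```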

We can choose a time interval from a representation where the noise is similar to the original one, and within this, we may choose an interval where the noise is similar to the noise of the interval from where we carried out the previous selection. Mathematically, this means that we can carry out the scaling of the frequency of the chosen component noise in such a way that it will be similar to a noise component of other frequency scale. It follows that the distribution function can be scaled in a self-similar way. (Of course, this cannot be applied to every scale, but we can find a scale whereon the distribution function of the system can be scaled.)

$G(\lambda) = \frac{\gamma}{N}\, G\!\left(\frac{\lambda}{N}\right)$ (39)

This means that the distribution function can be overlapped with the distribution function taken on the original scale by a uniform rescaling of the ordinate values on the $\lambda/N$ frequency scale.

It is easy to see that the solution of the above functional equation takes the form

$G(\lambda) = \frac{A(\lambda)}{\lambda^{1+\alpha}}, \qquad \alpha = \frac{\ln\frac{1}{\gamma}}{\ln N}$ (40)

where $A(\lambda)$ is a periodic function of the scale on which the distribution function is self-similar, namely,

$A(\lambda) = A\!\left(\frac{\lambda}{N}\right)$ (41)

For the sake of simplicity, let us suppose that this function is constant, and calculate again the energy spectrum of the stochastic processes falling into the $(\lambda_1, \lambda_2)$ frequency interval. From this we get

$S(f, \lambda_1, \lambda_2) = \int_{\lambda_1}^{\lambda_2} \frac{D\, G(\lambda)}{\lambda^{2} + \omega^{2}}\, \mathrm{d}\lambda = \int_{\lambda_1}^{\lambda_2} \frac{D A}{(\lambda^{2} + \omega^{2})\, \lambda^{1+\alpha}}\, \mathrm{d}\lambda = \frac{D A}{\omega^{2+\alpha}} \int_{\lambda_1/\omega}^{\lambda_2/\omega} \frac{1}{\left[\left(\frac{\lambda}{\omega}\right)^{2} + 1\right] \left(\frac{\lambda}{\omega}\right)^{1+\alpha}}\, \mathrm{d}\!\left(\frac{\lambda}{\omega}\right)$ (42)

The integral can be expressed by using hypergeometric functions; however, it is not easy to give it a descriptive meaning. For this reason, let us perform the integration over the $(0, \infty)$ interval. With the exception of the pink noise case, the result will be a finite constant. We get the expected result by using this approach:

$S(\omega, \lambda_1, \lambda_2) = \frac{D A}{\omega^{2+\alpha}} \int_{\lambda_1/\omega}^{\lambda_2/\omega} \frac{1}{\left[\left(\frac{\lambda}{\omega}\right)^{2} + 1\right] \left(\frac{\lambda}{\omega}\right)^{1+\alpha}}\, \mathrm{d}\!\left(\frac{\lambda}{\omega}\right) \approx \frac{D A}{\omega^{2+\alpha}} \int_{0}^{\infty} \frac{1}{\left[\left(\frac{\lambda}{\omega}\right)^{2} + 1\right] \left(\frac{\lambda}{\omega}\right)^{1+\alpha}}\, \mathrm{d}\!\left(\frac{\lambda}{\omega}\right) \propto \frac{1}{\omega^{2+\alpha}}$ (43)

Consequently, the self-similar distribution function is the condition for obtaining the

$S(\omega) \propto \frac{1}{\omega^{\beta}}, \qquad (\beta = 2 + \alpha)$ (44)

power spectral density.
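The scaling of Equations (40) and (42)-(44) can be checked numerically; the sketch below assumes Python with NumPy, and the value of $\alpha$ is an arbitrary illustration chosen in (-2, 0) so that the integral over (0, ∞) remains finite:

```python
import numpy as np

# Numerical check of Eqs. (40), (42)-(44): a self-similar distribution
# G(lambda) ~ 1/lambda^(1+alpha) yields S(omega) ~ 1/omega^(2+alpha).
alpha = -0.5                               # illustrative value in (-2, 0), keeps the integral finite
u = np.linspace(np.log(1e-8), np.log(1e8), 400000)
lam, du = np.exp(u), u[1] - u[0]           # very wide log-spaced lambda grid (d lambda = lam * du)

def S(w):
    G = lam ** (-(1.0 + alpha))            # Eq. (40) with A(lambda) taken as constant (= 1)
    return np.sum(G / (lam ** 2 + w ** 2) * lam) * du

w = np.logspace(-2, 2, 9)
spec = np.array([S(wi) for wi in w])
slope = np.polyfit(np.log10(w), np.log10(spec), 1)[0]
print("fitted log-log slope:", round(slope, 3), "  expected:", -(2.0 + alpha))
```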

How can these distribution functions be considered universal? In order to prove the universality of this result, we need to carry out a universality test: we are going to show that this form can be derived from any distribution function that converges to zero at high λ frequencies.

Indeed, let $g(\lambda)$ be such a function, namely a fully general, not necessarily lognormal function, for which

$\lim_{\lambda \to \infty} g(\lambda) = 0$ (45)

In the dynamic events of a collective system, the movements by time are built strictly on each other, and the whole course can be derived only in a recursive way (causality principle). Philosophically, this means that the time of a given process or system can be described only by the superimposing order of interactions (not by the order of side-by-side existence). This means that the interactions are built on each other and set off the wearing (progress of time) process. Therefore, the time (on single-variable and causality basis) is composed of superimposed and successive recurrent steps. The progress of time corresponds to the process of wearing. This―at the same time―corresponds to the recursive self-organization resulting in the formation of Mandelbrot set. Consequently, we get a specific self-organization characteristic of the system, which makes the relevant system specific, and in a certain respect, distinguishable from the other ones. In this manner, this self-similar solution corresponds to the thermodynamic notion of entropy.

According to the recursive causality idea [13] [14] , let us generate a distribution function using the recursive method described above:

$g_i(\xi_i) = \frac{\gamma}{N}\, g_{i-1}\!\left(\frac{\xi_{i-1}}{N}\right), \quad (i = 1, 2, \ldots), \qquad g_0(\xi_0) = g(\lambda)$ (46)

By using these functions, let us generate the

$G(\lambda) = (1-\gamma)\sum_{i=0}^{\infty} g_i(\xi_i) = (1-\gamma)\left[ g(\lambda) + \frac{\gamma}{N}\, g\!\left(\frac{\lambda}{N}\right) + \left(\frac{\gamma}{N}\right)^{2} g\!\left(\frac{\lambda}{N^{2}}\right) + \cdots \right]$ (47)

distribution function. It is easy to show that this complies with the

$G(\lambda) = \frac{\gamma}{N}\, G\!\left(\frac{\lambda}{N}\right) + (1-\gamma)\, g(\lambda)$ (48)

functional equation. In accordance with our restriction for high λ values (see (45)), the value of $g(\lambda)$ tends to zero, so we get the functional equation

$G(\lambda) = \frac{\gamma}{N}\, G\!\left(\frac{\lambda}{N}\right)$ (49)

which expresses exactly the self-similar property.
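The recursive construction of Equations (46)-(49) can be illustrated numerically; the sketch below assumes Python with NumPy, and the seed function $g(\lambda) = \exp(-\lambda)$ together with the values of γ and N are arbitrary illustration choices:

```python
import numpy as np

# Recursive construction of Eqs. (46)-(49): starting from an arbitrary seed
# g(lambda) that vanishes for large lambda (here exp(-lambda), an assumed
# example), the sum (47) gives a G(lambda) that satisfies the functional
# equation (48) and decays as ~1/lambda^(1+alpha), with alpha from Eq. (40).
gamma, N = 0.5, 3.0                                # illustrative values
alpha = np.log(1.0 / gamma) / np.log(N)

def g(lam):
    return np.exp(-lam)

def G(lam, terms=80):
    i = np.arange(terms).reshape(-1, 1)
    return (1.0 - gamma) * np.sum((gamma / N) ** i * g(lam / N ** i), axis=0)   # Eq. (47)

lam = np.logspace(1, 6, 11)                        # the large-lambda, self-similar regime
resid = G(lam) - (gamma / N) * G(lam / N) - (1.0 - gamma) * g(lam)              # Eq. (48)
print("max residual of Eq. (48):", float(np.max(np.abs(resid))))
slope = np.polyfit(np.log10(lam), np.log10(G(lam)), 1)[0]
print("fitted slope:", round(slope, 2), "  expected -(1+alpha) =", round(-(1.0 + alpha), 2))
```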

5. The Generation of Colored Noise Is Not Unique

We derived the $1/f^{\alpha}$ colored noise above from the Ornstein-Uhlenbeck process. Now, we are going to show that colored noise can be derived from the Lorentz process as well, except that, in this case, the distribution of the individual time constants will differ.

It follows from this that the lognormal distribution is not a necessary requirement for $1/f^{\alpha}$ noise. In order to prove this, let us take the other simplest process from among the self-similar stochastic ones:

$\frac{\mathrm{d}X}{\mathrm{d}t} = -\frac{1}{\tau} X + D^{\frac{1}{2}}\, \Gamma(t), \qquad D = \frac{D_0}{\tau}$ (50)

This is called the Lorentz stochastic process. Here, as we saw earlier, $\Gamma(t) = \lim_{\mathrm{d}t \to 0} N\!\left(0, \mathrm{d}t^{-1}\right)$ is a white noise of normal distribution. We have seen, on the basis of simple considerations, that the power spectral density of the process is as follows (similar again to (8)):

$S(\omega, \tau) = \frac{D_0 \tau}{1 + (\omega\tau)^{2}}$ (51)

In this case, τ is the time constant of the system generating the stochastic signal. This can be considered as the natural time scale of the stochastic process, and it also gives information on the character of change of the two-point correlation function of the stochastic process.

Indeed, we know that the power spectral density of the signal equals the Fourier transform of its correlation function. We get from this and Equation (51) the two-point correlation function:

$C_{XX}(\vartheta) = F^{-1}\left[S(\omega, \tau)\right] = F^{-1}\left[\frac{D_0 \tau}{1 + (\omega\tau)^{2}}\right] = D_0\, e^{-\vartheta/\tau}$ (52)

where $F^{-1}$ denotes the inverse Fourier transformation. Therefore, the degree of correlation decreases exponentially with the time constant τ. Because of this property, τ is called the time-correlation length. Let us suppose that $G(\tau)\, \mathrm{d}\tau$ is the number of statistically independent stochastic processes falling into the interval $(\tau, \tau + \mathrm{d}\tau)$ of time-correlation length; thus, the resultant energy spectrum over the $(0, \infty)$ interval is:

$S(\omega) = \int_{0}^{\infty} \frac{D_0\, \tau\, G(\tau)}{1 + (\tau\omega)^{2}}\, \mathrm{d}\tau$ (53)

If the distribution is scale-invariant, and if we require only the self-similarity, then the probability (e.g. for the density function) is scale independent:

$G(\tau)\, \mathrm{d}\tau = G(\alpha\tau)\, \mathrm{d}(\alpha\tau), \qquad \frac{\mathrm{d}(\alpha\tau)}{\alpha\tau} = \frac{\mathrm{d}\tau}{\tau}$ (54)

so

$G(\tau)\, \mathrm{d}\tau = \frac{\mathrm{d}\tau}{\tau}$ (55)

Hence, by using the

$\int_{0}^{\infty} \frac{1}{1 + (\tau\omega)^{2}}\, \mathrm{d}\tau = \frac{\pi}{2}\, \frac{1}{\omega}$ (56)

improper integral, from (53) we get the expected result:

$S(\omega) = \int_{0}^{\infty} \frac{D_0\, \tau\, G(\tau)}{1 + (\tau\omega)^{2}}\, \mathrm{d}\tau = D_0 \int_{0}^{\infty} \frac{\tau\,\frac{1}{\tau}}{1 + (\tau\omega)^{2}}\, \mathrm{d}\tau = D_0\, \frac{\pi}{2}\, \frac{1}{\omega} \propto \frac{1}{f}$ (57)

More generally, if we suppose that

$G(\alpha\tau) = \alpha^{\beta}\, G(\tau)$ (58)

then

$G(\tau) = \tau^{\beta}$ (59)

If we require only self-similarity, we get from Equation (53) and Equation (59) that the noise spectrum of signals over the $(0, \infty)$ interval is:

$S(\omega) = \int_{0}^{\infty} \frac{D_0\, \tau\, G(\tau)}{1 + (\tau\omega)^{2}}\, \mathrm{d}\tau = \int_{0}^{\infty} \frac{D_0\, \tau^{\beta+1}}{1 + (\tau\omega)^{2}}\, \mathrm{d}\tau$ (60)

Because of the physical representation, it is advisable to convert the integral into the following form:

$S(\omega) = \int_{0}^{\infty} \frac{D_0\, \tau^{\beta+1}}{1 + (\tau\omega)^{2}}\, \mathrm{d}\tau = \frac{D_0}{\omega^{\beta+2}} \int_{0}^{\infty} \frac{(\omega\tau)^{\beta+1}}{1 + (\tau\omega)^{2}}\, \mathrm{d}(\omega\tau)$ (61)

For this integral we are not able to give a general solution. Fortunately, in our case the improper integral can be expressed in a closed form if $0 < \beta + 2 < 2$, namely:

$\int_{0}^{\infty} \frac{(\omega\tau)^{\beta+1}}{1 + (\tau\omega)^{2}}\, \mathrm{d}(\omega\tau) = \frac{\pi}{2 \sin\!\left(\frac{(\beta+2)\pi}{2}\right)} = A$ (62)
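The closed form (62) is easy to check numerically; a quick sketch assuming Python with SciPy, with β values chosen arbitrarily inside the allowed range:

```python
import numpy as np
from scipy.integrate import quad

# Check of the closed form in Eq. (62), valid for 0 < beta + 2 < 2.
for beta in (-1.5, -1.0, -0.5):                    # illustrative beta values
    numeric, _ = quad(lambda x: x ** (beta + 1) / (1.0 + x ** 2), 0.0, np.inf)
    closed = np.pi / (2.0 * np.sin((beta + 2.0) * np.pi / 2.0))
    print(f"beta = {beta:5.2f}:  numeric = {numeric:.6f}   Eq. (62) = {closed:.6f}")
```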

From Equation (61) and Equation (62) we obtain:

$S(\omega) = \frac{D_0}{\omega^{\beta+2}} \int_{0}^{\infty} \frac{(\omega\tau)^{\beta+1}}{1 + (\tau\omega)^{2}}\, \mathrm{d}(\omega\tau) = \frac{D_0 A}{\omega^{\beta+2}}$ (63)

Consequently, the self-similar distribution function is the condition for obtaining the

$S(\omega) \propto \frac{1}{\omega^{\alpha}}$ (64)

power spectral density. An advantage of the applied Lorentzian process over the Ornstein-Uhlenbeck one is its fixed boundary conditions for the integration: no arbitrary cut-off is necessary to keep the integrals finite.

The foregoing can be generalized. Namely, if we depart from the stochastic process described by the

$\frac{\mathrm{d}X}{\mathrm{d}t} = -\frac{1}{\tau} X + D^{\frac{1}{2}}\, \Gamma(t), \qquad D = \frac{D_0}{\tau^{\gamma}}$ (65)

equation instead of Equation (50), where $\Gamma(t) = \lim_{\mathrm{d}t \to 0} N\!\left(0, \mathrm{d}t^{-1}\right)$ is the normally distributed white noise.

Then, on the basis of simple consideration, we can see that the power spectral density will have the

$S(\omega, \tau) = \frac{D_0\, \tau^{2-\gamma}}{1 + (\omega\tau)^{2}}$ (66)

form. If we require only self-similarity, we get from Equation (66) and Equation (59) that the noise spectrum of signals over the $(0, \infty)$ interval is

$S(\omega) = \int_{0}^{\infty} \frac{D_0\, \tau^{2-\gamma}\, G(\tau)}{1 + (\tau\omega)^{2}}\, \mathrm{d}\tau = \int_{0}^{\infty} \frac{D_0\, \tau^{\beta-\gamma+2}}{1 + (\tau\omega)^{2}}\, \mathrm{d}\tau$ (67)

Because of the physical representation, it is advisable to convert the integral into the

$S(\omega) = \int_{0}^{\infty} \frac{D_0\, \tau^{\beta-\gamma+2}}{1 + (\tau\omega)^{2}}\, \mathrm{d}\tau = \frac{D_0}{\omega^{\beta-\gamma+3}} \int_{0}^{\infty} \frac{(\omega\tau)^{\beta-\gamma+2}}{1 + (\tau\omega)^{2}}\, \mathrm{d}(\omega\tau)$ (68)

form.

In our case, the improper integral can again be expressed in a closed form if $0 < \beta - \gamma + 3 < 2$. That is:

$\int_{0}^{\infty} \frac{(\omega\tau)^{\beta-\gamma+2}}{1 + (\tau\omega)^{2}}\, \mathrm{d}(\omega\tau) = \frac{\pi}{2 \sin\!\left(\frac{(\beta-\gamma+3)\pi}{2}\right)} = A$ (69)

Now, from Equation (68) and Equation (69) we have that

$S(\omega) = \frac{D_0}{\omega^{\beta-\gamma+3}} \int_{0}^{\infty} \frac{(\omega\tau)^{\beta-\gamma+2}}{1 + (\tau\omega)^{2}}\, \mathrm{d}(\omega\tau) = \frac{D_0 A}{\omega^{\beta-\gamma+3}}$ (70)

Therefore, from the self-similarity, we get again the

$S(\omega) \propto \frac{1}{\omega^{\alpha}}$ (71)

power spectral density of the colored noise!

From this, we may draw the conclusion that self-similarity can be considered a fundamental property in the generation of colored noises, and the existence of self-similarity alone is a sufficient condition for it; neither the particular underlying stochastic process nor the particular distribution plays a role in generating this phenomenon.

6. Connection between the Fluctuation and the Induced Noise Theory

Above, we derived the $1/f^{\alpha}$ noise from the noise spectrum of a system driven by white noise, while van der Ziel and others [14] [15] [16] derived the colored noise from fluctuations. Next, we are going to show the equivalence of these two approaches, namely that the white-noise-driven and the fluctuation-based descriptions both yield colored noise spectra.

Thermodynamic fluctuations can be characterized by macroscopic fluctuation quantities. The field range in which a fluctuation is generated is not uniform regarding the fluctuation quantity; however, it is in a state of equilibrium at every point. The latter means that, among the field ranges, the exchange of the extensive quantities characteristic of the fluctuation can be neglected during the relaxation time of equilibration. An additional characteristic of thermodynamic fluctuations is that a fluctuation lasts for a finite duration, and the rate of change of the individual extensive parameters $a_i, (i = 1, 2, \ldots, n)$ during the fluctuation can be expressed by the extensive quantities participating in the fluctuation:

d a i d t = f ( a 1 , a 2 , , a n ) , ( i = 1 , 2 , , n ) (72)

If there is an extensive parameter whose relaxation time is much longer than that of the others, then the fluctuation can be described by this sole extensive parameter. Let us suppose that Equation (72) is linear and the system returns to its equilibrium; then the equation describes a completely deterministic and noiseless fluctuation process of one variable:

$\frac{\mathrm{d}a}{\mathrm{d}t} = -\lambda a$ (73)

The solution of this equation will be as follows:

$a(t) = a(0)\, e^{-\lambda t}$ (74)

Then, the correlation function is

$f_{aa}(\tau) = \langle a(\tau)\, a(0) \rangle = [a(0)]^{2}\, e^{-\lambda|\tau|}$ (75)

and the power spectral density of this:

$S(i\omega) = \int_{-\infty}^{\infty} f_{aa}(\tau)\, e^{-i\omega\tau}\, \mathrm{d}\tau = [a(0)]^{2}\, \frac{\lambda}{\lambda^{2} + \omega^{2}}$ (76)

How do we obtain noise from deterministic conditions? A deterministic process generates a fluctuation, and its spectrum is deterministic. The expected noise is neither deterministic nor a single fluctuation; it is a noise. To make the comparison, we suppose that the deterministic fluctuation signal repeats itself randomly. The constructed noise is the series of randomly repeating deterministic signals. If we introduce a white noise function into the deterministic equation (the result is a Langevin equation), then the amplitude of the white noise spectrum can be chosen in such a way that it corresponds to the noise spectrum generated by the deterministic signal and its random repetition frequency. The same is true for the correlation function as well. This is white noise for small ω values and 1/ω² Brownian noise for high ω values.
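This construction can be sketched numerically: a Poisson-random series of identical deterministic decays, Equation (74), is summed into one record and its spectrum is estimated. The sketch below assumes Python with NumPy/SciPy; all parameter values are illustrative:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(2)

# A Poisson-random series of identical deterministic decays a(0)*exp(-lambda*t),
# Eq. (74), summed into one record: its spectrum is Lorentzian, i.e. white for
# omega << lambda and ~1/omega^2 for omega >> lambda, as stated in the text.
lam, a0 = 1.0, 1.0                     # decay rate and initial amplitude (illustrative)
fs, T, rate = 200.0, 5000.0, 2.0       # sampling rate, record length, mean pulse rate
t = np.arange(0.0, T, 1.0 / fs)
signal = np.zeros_like(t)
events = np.cumsum(rng.exponential(1.0 / rate, size=int(rate * T * 1.2)))
for t0 in events[events < T]:
    i0 = int(np.ceil(t0 * fs))
    i1 = min(i0 + int(20.0 / lam * fs), len(t))     # truncate each decay after ~20 time constants
    signal[i0:i1] += a0 * np.exp(-lam * (t[i0:i1] - t0))

f, S = welch(signal - signal.mean(), fs=fs, nperseg=2 ** 16)

def band_slope(f_lo, f_hi):
    m = (f >= f_lo) & (f <= f_hi)
    return np.polyfit(np.log10(f[m]), np.log10(S[m]), 1)[0]

print("slope well below lambda/(2*pi):", round(band_slope(0.003, 0.03), 2))   # expected ~  0
print("slope well above lambda/(2*pi):", round(band_slope(2.0, 20.0), 2))     # expected ~ -2
```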

The power spectral density of a random series of such fluctuations differs only in a proportionality coefficient, as explained in [2]. If the distribution of the λ frequencies is uniform, then the resultant spectrum shows white noise, 1/f noise and Brownian noise at the low-frequency part, in the middle and at the tail, respectively. Shlesinger also departs from this type of fluctuation [6], but he writes Equation (73) in the equivalent form:

$\frac{\mathrm{d}a}{\mathrm{d}t} = -\lambda a = -\frac{1}{\tau} a$ (77)

In this case, instead of (76), the spectrum is:

$S(i\omega) = \int_{-\infty}^{\infty} f_{aa}(\tau)\, e^{-i\omega\tau}\, \mathrm{d}\tau = [a(0)]^{2}\, \frac{\tau}{1 + (\tau\omega)^{2}}$ (78)

Supposing (as we did earlier) that the probability density function of the time-correlation length is lognormal, we get a resultant noise spectrum of $1/f^{\alpha}$. We may conclude that the deterministic nature of this process is not an essential requirement for obtaining a colored noise spectrum, provided we suppose a random series of such fluctuations, in the same way as in Equation (51).

Let us introduce, for example, a stochastic exciting term into Equation (77):

$\frac{\mathrm{d}a}{\mathrm{d}t} = -\frac{1}{\tau} a + q(t)$ (79)

We state that the spectrum of this signal corresponds to the power spectral density of the fluctuation in Equation (78), and this condition can always be fulfilled. In order to prove this, let us calculate the Fourier transform of Equation (79). Then we get:

$\left(i\omega + \frac{1}{\tau}\right) a(\omega) = q(\omega) \;\;\Rightarrow\;\; a(\omega) = \frac{\tau}{1 + i\omega\tau}\, q(\omega)$ (80)

From this, we have the power spectral density:

$S(\omega) = \frac{\tau^{2}}{1 + (\omega\tau)^{2}}\, |q(\omega)|^{2}$ (81)

We may see that if

$q(\omega) = \frac{a(0)}{\sqrt{\tau}}$ (82)

is chosen, it leads us to the expected result. Consequently, if $q(t)$ is a white noise of amplitude $a(0)/\sqrt{\tau}$, then the noise spectrum of the signal is equivalent to the noise spectrum of the fluctuation. Moreover, two stochastic processes are equivalent if their noise spectra are the same.
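A frequency-domain sketch of this equivalence (assuming Python with NumPy/SciPy): white noise of amplitude $a(0)/\sqrt{\tau}$ is passed through the transfer function of Equation (80), and the resulting spectrum is compared with Equation (78); the parameter values are illustrative, and the factor 2 only adapts Equation (78) to the one-sided convention of the estimator:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(3)

# White noise of amplitude a(0)/sqrt(tau), Eq. (82), filtered by the transfer
# function of Eq. (80); the spectrum of the output reproduces Eq. (78).
tau, a0, fs, n = 1.0, 1.0, 100.0, 2 ** 20          # illustrative values
q = a0 / np.sqrt(tau) * np.sqrt(fs) * rng.standard_normal(n)   # white noise, flat two-sided PSD = a0^2/tau
Q = np.fft.rfft(q)
w = 2.0 * np.pi * np.fft.rfftfreq(n, d=1.0 / fs)
a = np.fft.irfft(tau / (1.0 + 1j * w * tau) * Q, n)            # Eq. (80)

f, S_est = welch(a, fs=fs, nperseg=2 ** 14)
# Eq. (78), doubled to match the one-sided PSD convention returned by welch
S_fluct = 2.0 * a0 ** 2 * tau / (1.0 + (2.0 * np.pi * f * tau) ** 2)
for f0 in (0.05, 0.5):
    i = np.argmin(np.abs(f - f0))
    print(f"f = {f[i]:.3f}:  filtered noise = {S_est[i]:.3f}   Eq. (78) = {S_fluct[i]:.3f}")
```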

The above monitoring could be suitable for numerous technical solutions for operating and controlling complex systems, such as:

・ by using the invariant parameter predetermined on the properly operating system, we can observe the state of system integrity,

・ to replace a complicated system of multi-sensor observation (at the same time, for localizing the place of a fault we still have to use local sensors, but in more integrated measurement groups than would be needed without this procedure),

・ to forecast the trends indicating the possible faults (a minimal monitoring sketch is shown after this list),

・ to observe the trend of system wearing-out (lifetime),

・ to measure, during the planning process, the degree of implementation of the uniform dynamical load by using the exponent approximating (-1),

・ to explore unusual, suddenly occurring changes, usage faults and unauthorized usage (e.g. a non-qualified person intervenes and modifies the invariant quantity, even if this does not result in an operational fault, e.g. the manual gearbox of a car).
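The monitoring sketch referred to above (assuming Python with NumPy/SciPy and synthetic data): the exponent α of Equation (15) is tracked over successive windows and compared against the reference value of the properly working system; the threshold, the synthetic drift and all names are illustrative assumptions:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(4)

def colored_noise(alpha, n, fs=1.0):
    """Synthetic 1/f^alpha record (stand-in for a measured monitoring signal)."""
    spec = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    f[0] = f[1]
    return np.fft.irfft(spec * f ** (-alpha / 2.0), n)

def alpha_of(x, fs=1.0):
    """Exponent of Eq. (15) from the log-log slope of the power spectral density."""
    f, S = welch(x, fs=fs, nperseg=2048)
    m = f > 0
    return -np.polyfit(np.log10(f[m]), np.log10(S[m]), 1)[0]

# synthetic record whose exponent slowly drifts away from the healthy value
windows = [colored_noise(1.0 + 0.25 * k, 2 ** 15) for k in range(5)]

alpha_ref, tolerance = 1.0, 0.3      # reference exponent and alarm band (assumed values)
for k, win in enumerate(windows):
    a = alpha_of(win)
    status = "OK" if abs(a - alpha_ref) <= tolerance else "ALARM"
    print(f"window {k}: alpha = {a:.2f}  {status}")
```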

The present results are applicable to complex systems such as living organisms. Fractal physiology describes the control of a living system by time-fractal analysis [17] [18] [19] , which is equivalent to the above noise/fluctuation approach. The analysis of a normally functioning living organism shows that these noises are self-similar on their time scale. As has been shown [20] [21] , such analysis can discover abnormalities very early, and/or it is able to check the ageing status of the human body [22] . Earlier, we investigated the method theoretically [23] [24] .

The future of the present investigation could lead to an "inverse" treatment, in which the colored-noise signal recorded from the initially properly working system is forced on it during functioning, which could be useful for extending the faultless operating lifetime. This idea is applied in the modulated electro-hyperthermia treatment process, where the modulation mimics the healthy homeostatic noise [25] .

7. Conclusion

Our present study shows the possibility of measuring a system-dependent, scale-independent invariant parameter which characterizes the actual status of the whole complex system, informs about the interactive "harmony" of the system and makes it possible to check the proper function of the system as a complex unit. We observed that the noise contains the entire dynamics and practically every dynamical variable of the whole system whose interactions contribute to the generation of the given (desired/useful) signal. Therefore, we may examine the system as a whole and analyze the operation of the system on the basis of its noise spectrum. All the faults arising from wear, tear and fatigue processes (in general, through stochastic changes) will result in a continuous change of the noise spectrum. Therefore, measurement of the noise spectrum allows the prediction of wear/tear (fatigue etc.) processes. This information facilitates control of the given system under its concrete functioning conditions, including its evolutionary trend, predicting the possible failures or lifetime thresholds in time, during proper function, without statistically determined, system-independent data.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Robinson, F.N.H. (1974) Noise and Fluctuations. Clarendon Press, Oxford.
[2] Freeman, J.J. (1958) Principles of Noise. John Wiley & Sons, Inc.
[3] Reif, F. (1965) Statistical and Thermal Physics. McGraw Hill, New York.
[4] Voss, R.F. (1993) 1/f Noise and Fractals in DNA-Base Sequences. In: Crilly, A.J., Earnshaw, R.A., Jones, H., Eds., Application of Fractals and Chaos, The Shape of Things, Springer-Verlag, Berlin, Heidelberg, 7-20.
https://doi.org/10.1007/978-3-642-78097-4_2
[5] van der Ziel, A. (1950) On the Noise Spectra of Semi-Conductor Noise and of Flicker Effect. Physica, 16, 359-372.
https://doi.org/10.1016/0031-8914(50)90078-4
[6] Shlesinger, M.F. (1987) Fractal Time and 1/f Noise in Complex Systems. Annals of the New York Academy of Sciences, 504, 214-225.
https://doi.org/10.1111/j.1749-6632.1987.tb48734.x
[7] Vicsek, T. (2001) Fluctuations and Scaling in Biology. Oxford University Press.
[8] Brown, J.H. and West, G.B. (2000) Scaling in Biology. Oxford University Press.
[9] Calder III, W.A. (1996) Size, Function and Life History. Dover Publications Inc. Mineola, New York.
[10] Li, W. (1989) Spatial 1/f Spectra in Open Dynamical Systems. Europhysics Letters, 10, 395-400.
https://doi.org/10.1209/0295-5075/10/5/001
[11] Gillespie, D.T. (1996) The Mathematics of Brownian Motion and Johnson Noise. American Journal of Physics, 64, 225-240.
https://doi.org/10.1119/1.18210
[12] Uhlenbeck, G.E. and Ornstein, L.S. (1930) On the Theory of Brownian Motion. Physical Review, 36, 823-841.
https://doi.org/10.1103/PhysRev.36.823
[13] Harney, H.L. (2003) Bayesian Inference. Springer Verlag, Berlin, Heidelberg, New York.
https://doi.org/10.1007/978-3-662-06006-3
[14] Jaynes, E.T. (2003) Probability Theory, the Logic of Science. Cambridge University Press, Cambridge.
https://doi.org/10.1017/CBO9780511790423
[15] Shlesinger, M.F. and West, B.J. (1988) 1/f versus 1/fα Noise. In: Stanley, H.E. and Ostrowsky, N., Eds., Random Fluctuations and Pattern Growth, Experiments and Models, Kluwer Academic Publishers, Dordrecht Boston London.
https://doi.org/10.1007/978-94-009-2653-0_45
[16] Milotti, E. (2002) 1/fα Noise: A Pedagogical Review.
[17] West, B.J. (1990) Fractal Physiology and Chaos in Medicine. World Scientific, Singapore, London.
https://doi.org/10.1142/1025
[18] Bassingthwaighte, J.B., Liebovitch, L.S. and West, B.J. (1994) Fractal Physiology. Oxford University Press, New York, Oxford.
https://doi.org/10.1007/978-1-4614-7572-9
[19] Musha, T. and Sawada, Y. (1994) Physics of the Living State. IOS Press, Amsterdam.
[20] Wagner, C.D., Mrowka, R., Nafz, B. and Persson, P.B. (1995) Complexity and “Chaos” in Blood Pressure after Baroreceptor Denervation of Conscious Dogs. American Journal of Physiology, 269, H1760-H1766.
[21] Butler, G.C., Yamamoto, Y., Xing, H.C., Northey, D.R. and Hughson, R.L. (1993) Heart Rate Variability and Fractal Dimension during Orthostatic Challenges. Journal of Applied Physiology, 75, 2602-2612.
[22] Walleczek, J. (2000) Self-Organized Biological Dynamics and Non-Linear Control. Cambridge University Press, Cambridge.
https://doi.org/10.1017/CBO9780511535338
[23] Szendro, P., Vincze, G. and Szasz, A. (2001) Pink-Noise Behaviour of Biosystems. European Biophysics Journal, 30, 227-231.
[24] Szendro, P., Vincze, G. and Szasz, A. (2001) Bio-Response on White-Noise Excitation. Electro and Magnetobiology, 20, 215-229.
https://doi.org/10.1081/JBC-100104145
[25] Szasz, O., Andocs, G. and Meggyeshazi, N. (2013) Modulation Effect in Oncothermia. Conference Papers in Medicine, 2013, Article ID: 395678.
http://www.hindawi.com/archive/2013/398678/
