
**Intrinsic Noise Monitoring of Complex Systems**


*Open Journal of Biophysics*, **7**, 197-215. doi: 10.4236/ojbiphy.2017.74015.

1. Introduction

Testing the proper functioning of complex systems during their long-term service, as they age and undergo wear and tear, is of vital importance for preventive maintenance and operating life. The problem is complicated, because we must draw conclusions about the properties of a given system from the results of its test: we must specify the characteristic values of the type in question using the operating data of several individual parts. The difficulty can be traced back to a single root: the systems in use are open and connected to their environment through a number of interactions, so they cannot be considered closed, even for the duration of a measurement. They are definitely open from an energetic point of view (energy exchange with the environment). Interactions indispensable to the operation (those on which the effect of the system is directed, and their retroactive effects), influences of the environment (environmental loads, e.g. temperature, contamination, pressure, rain), and the user's habits and conditions (e.g. early-morning usage, the usual usage order, effects of the usual intensity and direction) all affect the ageing of the actual system. Permanent control and maintenance are needed to keep the complex function active. We will show that the characteristic values of the energy input and output, as well as the non-deprivable substantial characteristics, can be used to check the overall process.

Measurements of any dynamic effect are always noisy. The desired signal (electrical, mechanical, etc.) and the measured one differ. Measurement clarity is characterised in these cases by the signal-to-noise ratio. Dynamic effects and changes could be noise-free only in very simple and reversible cases (in energetically closed systems). This is practically a theoretical idealization, because in reality, noise is always present as the random or systematic fluctuation of the given signal (measured, set, used etc.) [1] [2] .

The noise source is composed of many-sided interactions: the continuous energy and entropy/information exchange of open dynamic systems and the mutual dependence of the single subsystems shape the actual noise spectrum in a synergetic way [3] . Consequently, in every real case the desired effect is accompanied by a noise spectrum composed of the specific features of the dynamic system. Thus, the noise is, to a certain degree, the appearance of parameters, processes and dynamic behaviour that always arise but are not directly involved in the given examination.

In the course of the usual wearing tests and quality examinations, each element of the system is examined separately by using several sensors, and during this measurement, one tries to eliminate or minimize the noise. Consequently, the aim of these measurement procedures is to filter the noises and create the best possible signal-to-noise ratio in order to obtain the most exact information possible regarding the given partial system.

In the case of open, dissipative systems (basically, everything realizing non-spontaneous thermodynamic changes, e.g. heat engines, biological systems, electromagnetic radiators), the reduction of noise by fixing the interactions is impossible, because the open, dissipative character presumes a definite interaction with the environment. For this reason, in real, irreversible dynamic systems, only the second possibility remains: we must reckon with noise anyway, and, at most, the chosen dynamical methods may suppress the noise and bring out the "useful" signal as far as possible.

The noise, however, provides information on the interactions (inside and/or outside the system) of the examined system. In this case, the measured signal is not useless noise but the set of fluctuation properties that carry the systemic changes of the complex system.

Our objective is to obtain information on the complete dynamics of complex systems so that it may be used for planning processes and qualitative examinations. Our concept is based on the recognition that all the dynamics are included in the noise: practically all those dynamic variables appear therein whose interactions have a share in the creation of the given (desired/useful) signal. Moreover, the noise spectrum gives an account of the correlations within the system. Therefore, the examination can be carried out on the whole system, and the system's operation can be analysed from its noise spectrum. All the failures arising from wear, tear and fatigue processes (in general, from stochastic changes) result in a continuous change of the noise spectrum. Therefore, monitoring the noise spectrum allows the prediction of the wear and tear (fatigue etc.) processes.


2. Simple Derivation and Description of Colored Noises

Although a complex system has a great exchange of information with its environment, it can be, in general, characterized by a stationary state, that is, by a state in dynamic equilibrium. Therefore, the dynamic equilibrium characterizing the appropriate operation can describe the time-dependent effect $\left(H\left(x,t\right)\right)$ as a fluctuation around the average, that is:

$H\left(x,t\right)=\langle H\left(x,t\right)\rangle +\delta \left(H\left(x,t\right)\right)$ (1)

where $\langle H\left(x,t\right)\rangle $ denotes the averaging, and $\delta \left(H\left(x,t\right)\right)$ is the actual deviation from the average (fluctuation). (Of course, there are also dynamic, non-equilibrium systems (e.g. explosives); however, their effects are measurable as a rough average (e.g. the relative destructive effect measured in dynamite equivalent), which is also specified by a fluctuation around the average.) Later on, we are going to examine the time behavior of the process at a specified (fixed) x; therefore, the variable x will not be indicated hereafter.

The process is random if the variable is stochastic; in this case, the power spectral density of process H is

${S}_{H}\left(f\right)=\frac{H\left(f\right){H}^{*}\left(f\right)}{\Delta f}=\frac{{\left|H\left(f\right)\right|}^{2}}{\Delta f}$ (2)

where $\Delta f$ is the effective band-width of the Fourier integral, * denotes the complex conjugate, and

$H\left(f\right)={\displaystyle \underset{0}{\overset{\infty}{\int}}\mathrm{exp}\left(-2i\text{\pi}ft\right)H\left(t\right)\text{d}t}$ (3)

In other words, according to Voss [4] :

${S}_{H}\left(f\right)=\mathrm{Re}\left\{{\displaystyle \underset{0}{\overset{\infty}{\int}}\mathrm{exp}\left(-2i\text{\pi}ft\right){C}_{H}\left(t\right)\text{d}t}\right\}$ (4)

where ${C}_{H}\left(t\right)=\langle \delta \left(H\left(t\right)\right)\cdot \delta \left(H\left(0\right)\right)\rangle $ is the autocorrelation (pair-correlation) function of process H between two points of time, that is:

$\begin{array}{c}{C}_{H}\left(t\right)=\langle \left(H\left(t\right)-\langle H\left(t\right)\rangle \right)\left(H\left({t}_{0}\right)-\langle H\left({t}_{0}\right)\rangle \right)\rangle \\ =\langle H\left({t}_{0}\right)H\left({t}_{0}+t\right)\rangle -{\langle H\left({t}_{0}\right)\rangle}^{2}\end{array}$ (5)

The functions ${S}_{H}\left(f\right)$ and ${C}_{H}\left(t\right)$ are naturally not independent, as in addition to Equation (4), on the basis of the Wiener-Khintchine relationship [1] [3] the following is valid:

${C}_{H}\left(t\right)=\mathrm{Re}\left\{{\displaystyle \underset{0}{\overset{\infty}{\int}}\mathrm{exp}\left(-2i\text{\pi}ft\right){S}_{H}\left(f\right)\text{d}f}\right\}$ (6)

If the ${C}_{H}\left(t\right)$ correlation function decays with the time constant $\tau $ (as is the case in the majority of real situations), namely:

${C}_{H}\left(t\right)=\mathrm{exp}\left(-\frac{t}{\tau}\right)$ (7)

then

${S}_{H}\left(f\right)=\frac{\tau}{1+{\left(\omega \tau \right)}^{2}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left[\omega =2\text{\pi}f\right]$ (8)

If $\left(\omega \tau \right)\ll \text{1}$ (this is valid in the case of fast decay of correlation or at low frequencies), then $S\left(f\right)$ is constant, the noise is independent of the frequency and we get so-called white noise.
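As a quick numerical sanity check (a NumPy sketch; the values of τ and f are arbitrary illustrative choices, not from the paper), the Fourier integral of the exponential correlation of Equation (7) can be compared against the Lorentzian form of Equation (8):

```python
import numpy as np

# Check of Eqs. (4), (7), (8): the real part of the Fourier integral of
# C_H(t) = exp(-t/tau) should reproduce S_H(f) = tau / (1 + (2*pi*f*tau)^2).
tau, f = 0.5, 2.0                                            # arbitrary values
t, dt = np.linspace(0.0, 50.0 * tau, 400_000, retstep=True)  # truncated (0, inf)
y = np.exp(-2j * np.pi * f * t) * np.exp(-t / tau)
s_numeric = ((y.sum() - 0.5 * (y[0] + y[-1])) * dt).real     # trapezoidal rule
s_formula = tau / (1.0 + (2.0 * np.pi * f * tau) ** 2)
print(s_numeric, s_formula)                                  # the two agree
```

The truncation at 50τ is harmless because the correlation has decayed to numerical zero there.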

If multiple fluctuations exist with randomly fluctuating time constants of the correlation Equation (7), and ${D}_{H}\left(\tau \right)$ denotes their distribution for the given process H, then

$\begin{array}{c}{S}_{H}\left(\omega \right)=\mathrm{Re}\left\{{\displaystyle \underset{0}{\overset{\infty}{\int}}{\displaystyle \underset{{\tau}_{\mathrm{min}}}{\overset{{\tau}_{\mathrm{max}}}{\int}}\mathrm{exp}\left(-i\omega t\right)\mathrm{exp}\left(-\frac{t}{\tau}\right){D}_{H}\left(\tau \right)\text{d}\tau \text{d}t}}\right\}\\ ={\displaystyle \underset{{\tau}_{\mathrm{min}}}{\overset{{\tau}_{\mathrm{max}}}{\int}}\frac{\tau}{1+{\left(\omega \tau \right)}^{2}}{D}_{H}\left(\tau \right)\text{d}\tau}\end{array}$ (9)

If the ${D}_{H}\left(\tau \right)$ function is scale invariant, namely if, for example [5] ,

${D}_{H}\left(\tau \right)\text{d}\tau =\frac{\text{d}\tau}{\tau}$ (10)

then we get so-called pink noise (Flicker noise, 1/f noise, etc.):

${S}_{H}\left(f\right)=\frac{1}{f}$ (11)
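This superposition argument is easy to probe numerically. The sketch below (the cut-offs and grids are illustrative assumptions) integrates Equation (9) with the scale-invariant weight of Equation (10) over a wide band of time constants and fits the log-log slope, which should come out close to -1:

```python
import numpy as np

def trap(y, x):
    # trapezoidal rule on a (possibly non-uniform) grid
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

tau = np.logspace(-4, 4, 300_000)                   # arbitrary tau_min .. tau_max
freqs = np.logspace(-1, 1, 9)                       # probe well inside the cut-offs
spectrum = [trap((tau / (1.0 + (2 * np.pi * f * tau) ** 2)) * (1.0 / tau), tau)
            for f in freqs]                         # Eq. (9) with D_H(tau) = 1/tau
slope = np.polyfit(np.log10(freqs), np.log10(spectrum), 1)[0]
print(slope)                                        # close to -1: pink (1/f) noise
```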

In the given multiple fluctuation case, the relationship Equation (9) cannot be normalized, so

${\displaystyle \underset{0}{\overset{\infty}{\int}}{D}_{H}\left(\tau \right)\text{d}\tau}\ne \text{const}.$ (12)

Therefore, a frequency cut-off must be applied at the high and low boundaries. We can also demonstrate [6] that the lognormal distribution of ${D}_{H}\left(\tau \right)$ :

${D}_{H}\left(\tau \right)=\frac{1}{\text{\pi}\tau \sigma}\mathrm{exp}\left(\frac{-{\left[\mathrm{log}\left(\frac{\tau}{\langle \tau \rangle}\right)\right]}^{2}}{2{\sigma}^{2}}\right)$ (13)

results in 1/f noise, as shown in Equation (11). In this case, there is no normalization problem, and the cut-off is unnecessary. We may also demonstrate [4] that the distribution of the product of random variables is always lognormal; thus, it results every time in Equation (11). However, if there is no dominant lognormal distribution, but the system can be described by a wide range of distributions, then

${S}_{H}\left(f\right)=\frac{1}{{f}^{\alpha}}$ (14)

where, basically, we cannot fix the value of α, because it can also depend considerably on the general parameters of the system (e.g. temperature, pressure etc.) [4] . In this way, from the slope of the log-log representation of Equation (14), we directly get the character of the noise of process H:

$\alpha =-\frac{\mathrm{log}\left[{S}_{H}\left(f\right)\right]}{\mathrm{log}\left[f\right]}$ (15)
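In practice this amounts to reading α off as minus the slope of a least-squares line fitted to the spectrum on a log-log plot. A minimal sketch (the helper name and the synthetic spectrum are illustrative assumptions):

```python
import numpy as np

def noise_exponent(freqs, psd):
    # fit log S = -alpha * log f + const and return alpha (hypothetical helper)
    slope, _ = np.polyfit(np.log(freqs), np.log(psd), 1)
    return -slope

f = np.logspace(0, 3, 100)
alpha_true = 1.7
psd = 3.0 / f ** alpha_true          # synthetic 1/f^alpha spectrum, illustrative only
print(noise_exponent(f, psd))        # recovers alpha = 1.7
```

A real measured spectrum would of course scatter around the fitted line; the regression then gives the average exponent over the chosen band.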

The changing noise of the dynamical variables (as the spectrum characterizes the system) presumes the existence of a certain order and self-organization in the examined system. The self-organization comes into existence through mutual determinacy, during the complex operation of partial systems that build on one another, require the work of the others and determine the dynamics of the others by causality. Complex systems with many-sided connections to their environment are in a non-equilibrium, non-stationary state and have a high-level hierarchical structure. The subsystems forming the structure are connected to each other in many ways through their physical and chemical processes or other information networks. The amplitude of the physical and chemical information signals generated by the various individual subsystems, their characteristic times and other properties can change over a wide range; e.g. even the simplest biological systems show a great variety of processes on the individual characteristic time scales and are connected to each other by scaling [7] [8] [9] . As we have shown, the noises carry dynamical information about the operating systems and may also give information on the wearing phases of the given structure. In general, the noise of any open dynamical system can be scaled by the $1/{f}^{\alpha}$ spectrum [10] .

3. Fluctuation of Diagnostic Quantities

Every complex system can be decomposed into numerous simple subsystems, whose state can be described by some physical parameters characterizing the subsystems. This means that the state of the whole system is known if we know the state of its every subsystem. Let us denote by $\stackrel{\xaf}{X}$ the vector made of the state parameters of the subsystems, hereinafter called the micro-state of the system. In terms of diagnostics, a selectable and limited number of measurable quantities ${F}_{i}$ are characteristic of the macro system, on the basis of which we can judge whether the functioning of the system complies with the requirements or not. These quantities are called macroscopic diagnostic state parameters, and the vector $\stackrel{\xaf}{F}$ made of them is the so-called diagnostic state vector. As the number of these parameters is significantly smaller than the number of state parameters describing the micro-state, the description of the system by the diagnostic state parameters is, from a microscopic point of view, not complete.

Let us assume that the diagnostic state parameters can be written as functions of the micro-state and, since the equipment interacts with its surroundings, of the external parameters denoted by the vector $\stackrel{\xaf}{Y}$ , thus:

${F}_{i}={F}_{i}\left(\stackrel{\xaf}{X},\stackrel{\xaf}{Y}\right),\left(i=1,2,\cdots ,n\right)$ (16)

As the number of micro-states is large ( $\mathrm{dim}\left(\stackrel{\xaf}{X}\right)\gg n$ ), with the knowledge of the diagnostic state parameters we can make no more than statistical statements regarding the micro-state of the system characterized by the diagnostic state vector, since many kinds of micro-states may belong to the same macro-state, and these micro-states can change quickly in time.

This means that we can specify, at most, the probability that the micro-state falls into the $\left(\stackrel{\xaf}{X},\stackrel{\xaf}{X}+\text{d}\stackrel{\xaf}{X}\right)$ interval at time t with a probability of $w\left(\stackrel{\xaf}{X},t\right)$ , that is:

$P\left(\stackrel{\xaf}{X}<\stackrel{\xaf}{\xi}\le \stackrel{\xaf}{X}+\text{d}\stackrel{\xaf}{X}\right)=w\left(\stackrel{\xaf}{X},t\right)\text{d}X$ (17)

As the micro-state of the system may change rapidly over time, the diagnostic state parameters of (16) fluctuate in time; consequently, they are stochastic variables.

Such variables can be characterized in the simplest way by the

$\langle {F}_{i}\rangle ={\displaystyle \underset{\left(X\right)}{\int}{F}_{i}\left(\stackrel{\xaf}{X},\stackrel{\xaf}{Y}\right)w\left(X,t\right)\text{d}X},\text{\hspace{1em}}\left(i=1,2,\cdots ,n\right)$ (18)

mean value and the

${\sigma}_{{F}_{i}}=\sqrt{\langle {\left({F}_{i}-\langle {F}_{i}\rangle \right)}^{2}\rangle},\text{\hspace{1em}}\left(i=1,2,\cdots ,n\right)$ (19)

mean-square deviation.

In accordance with the Chebyshev theorem, the probability of $\left|{F}_{i}-\langle {F}_{i}\rangle \right|>a$ is

$P\left(\left|{F}_{i}-\langle {F}_{i}\rangle \right|>a\right)\le \frac{{\sigma}_{{F}_{i}}^{2}}{{a}^{2}}=\frac{\langle {\left({F}_{i}-\langle {F}_{i}\rangle \right)}^{2}\rangle}{{a}^{2}}$ (20)

If ${\sigma}_{{F}_{i}}$ is very small, then we may conclude from the above inequality that the probability of the deviation is small; in this way, ${F}_{i}$ and the average of Equation (18) coincide in practice.
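The Chebyshev bound can be illustrated with a short simulation (the Gaussian diagnostic parameter, its parameters, the seed and the threshold are all assumptions of this sketch):

```python
import numpy as np

# Empirical check of the Chebyshev inequality of Eq. (20) on a simulated
# fluctuating diagnostic parameter F_i (hypothetical Gaussian signal).
rng = np.random.default_rng(0)                      # fixed seed for the sketch
F = rng.normal(loc=5.0, scale=0.2, size=100_000)    # hypothetical F_i samples
a = 0.5
p_empirical = float(np.mean(np.abs(F - F.mean()) > a))
bound = float(F.var()) / a ** 2                     # Chebyshev bound sigma^2/a^2
print(p_empirical, bound)                           # p_empirical never exceeds bound
```

For well-behaved (here Gaussian) fluctuations the observed exceedance probability is far below the bound, which is exactly why a small ${\sigma}_{{F}_{i}}$ lets the average stand in for the fluctuating parameter.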

If the above is not the case, we may instead characterize the

${f}_{i}={F}_{i}-\langle {F}_{i}\rangle ,\text{\hspace{1em}}\left(i=1,2,\cdots ,n\right)$ (21)

functions expressing the stochastic fluctuations. In engineering practice, these are characterized by the power density spectrum. An additional advantage of this description is that, from the distortion of the power spectral density, we may infer the occurrence of some future error, even if, on the basis of the average of the diagnostic state vector, the system can be considered adequate.

4. Stochastic Description

Let us suppose that the fluctuation introduced earlier can be divided into the sum of semiperiodic stochastic processes on different time scales that are statistically independent. Clearly, the semiperiodic stochastic processes on different time scales have different frequency scales as well.

We assume that every component process like this is statistically self-similar.

The $X\left(t\right)$ stochastic process is memory-less if the increment of

$X\left(t+\text{d}t\right)-X\left(t\right)$ (22)

can be expressed in the form of

$X\left(t+\text{d}t\right)-X\left(t\right)=\Theta \left[X\left(t\right),t,\text{d}t\right]$ (23)

In general, this is a Markov process (Jaynes, 2003).

Let us assume that $\Theta \left[X\left(t\right),t,\text{d}t\right]$ is a smooth function of the $X,t,\text{d}t$ variables and $X\left(t\right)$ is continuous, then:

$\underset{\text{d}t\to 0}{\mathrm{lim}}X\left(t+\text{d}t\right)=X\left(t\right)$ (24)

The stochastic process is self-similar in the sense of [11] if the difference can be divided into the sum of statistically independent increments; these then have a normal distribution within the interval. Here we may also see the Markov character: memory-less and recursive.

$\begin{array}{l}X\left(t+\text{d}t\right)-X\left(t\right)=\Theta \left[X\left(t\right),t,\text{d}t\right]\\ ={\displaystyle \underset{i=1}{\overset{n}{\sum}}X\left(t+i\frac{\text{d}t}{n}\right)-X\left(t+\left(i-1\right)\frac{\text{d}t}{n}\right)}\\ ={\displaystyle \underset{i=1}{\overset{n}{\sum}}\Theta \left[X\left(t+\left(i-1\right)\frac{\text{d}t}{n}\right),\left(i-1\right)\frac{\text{d}t}{n},\frac{\text{d}t}{n}\right]}\end{array}$ (25)

Since $\text{d}t$ can be chosen as arbitrarily small, the ${t}_{i-1}=t+\left(i-1\right)\frac{\text{d}t}{n}$ times can

approach $t$ arbitrarily closely by choosing a suitably high value of n. Therefore, by utilizing continuity, we get from the above equation, for adequately high n, that

$\begin{array}{l}{t}_{i-1}\to t,\text{\hspace{1em}}X\left({t}_{i-1}\right)=X\left(t\right),\\ \Theta \left[X\left(t\right),t,\text{d}t\right]={\displaystyle \underset{i=1}{\overset{n}{\sum}}{\Theta}_{i}\left[X\left(t\right),t,\frac{\text{d}t}{n}\right]}\end{array}$ (26)

Here, we may consider the ${\Theta}_{i}\left[X\left(t\right),t,\frac{\text{d}t}{n}\right]$ expressions as representations of the $\Theta \left[X\left(t\right),t,\frac{\text{d}t}{n}\right]$ variables. These are statistically independent because the process is memory-less. Since n is arbitrarily high, we may conclude from the central limit theorem that $\Theta \left[X\left(t\right),t,\text{d}t\right]$ , being the sum of n statistically independent ${\Theta}_{i}\left[X\left(t\right),t,\frac{\text{d}t}{n}\right]$ stochastic variables, has a normal distribution. In accordance with the above, this is also true for the $\Theta \left[X\left(t\right),t,\frac{\text{d}t}{n}\right]$ stochastic variables.

We may conclude the following properties from the properties of stochastic variables with normal distributions.

$\begin{array}{l}\langle \Theta \left[X\left(t\right),t,\text{d}t\right]\rangle =n\langle \Theta \left[X\left(t\right),t,\frac{\text{d}t}{n}\right]\rangle \\ {\sigma}^{2}\left(\Theta \left[X\left(t\right),t,\text{d}t\right]\right)=n{\sigma}^{2}\left(\Theta \left[X\left(t\right),t,\frac{\text{d}t}{n}\right]\right)\end{array}$ (27)

where $\langle \cdot \rangle $ denotes the mean value and ${\sigma}^{2}\left(\cdot \right)$ the mean-square deviation. The solutions of these functional equations are:

$\begin{array}{l}\langle \Theta \left[X\left(t\right),t,\text{d}t\right]\rangle =A\left[X\left(t\right),t\right]\text{d}t\\ {\sigma}^{2}\left(\Theta \left[X\left(t\right),t,\text{d}t\right]\right)=D\left[X\left(t\right),t\right]\text{d}t\end{array}$ (28)

where $A$ and $D$ are smooth functions of $X$ and $t$ ; and $D$ is positive. By taking into consideration the normality of $X\left(t+\text{d}t\right)-X\left(t\right)=\Theta \left[X\left(t\right),t,\text{d}t\right]$ and the above results, we get from Equation (23) that

$\begin{array}{c}X\left(t+\text{d}t\right)-X\left(t\right)=\Theta \left[X\left(t\right),t,\text{d}t\right]=N\left[A\left(X,t\right)\text{d}t,D\left(X,t\right)\text{d}t\right]\\ =A\left(X,t\right)\text{d}t+{D}^{\frac{1}{2}}N\left(0,1\right)\text{d}{t}^{\frac{1}{2}}\end{array}$ (29)

where $N\left(0,1\right)$ is a normally distributed stochastic process of zero average and unitary mean-square deviation:

$N\left(x\right)=\frac{1}{\sqrt{2\text{\pi}}}\mathrm{exp}\left(-\frac{{x}^{2}}{2}\right)$ (30)

If we change over to the differential equation, we get the

$\frac{\text{d}X}{\text{d}t}=A\left(X,t\right)+{D}^{\frac{1}{2}}\left(X,t\right)\Gamma \left(t\right)$ (31)

non-homogeneous equation, where

$\Gamma \left(t\right)=\underset{dt\to 0}{\mathrm{lim}}N\left(0,\text{d}{t}^{-1}\right)$ (32)

is a normally distributed white noise. This is a generalized Langevin equation.

Let us take the simplest one from these stochastic processes:

$\frac{\text{d}X}{\text{d}t}=-\frac{1}{\tau}X+{D}^{\frac{1}{2}}\Gamma \left(t\right)$ (33)

Equation (33) describes the Ornstein-Uhlenbeck stochastic process [12] . The mean value decays exponentially, and the process is driven by white noise. This equation describes the noise of a system comprising an energy accumulator (e.g. a mass, a revolving mass, a capacitor, an inductance) and a linear attenuation (e.g. resistance of a medium, ohmic resistance), excited by white noise.
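A standard way to make this concrete is an Euler-Maruyama discretization of Equation (33); for the Ornstein-Uhlenbeck process the sample variance should approach the stationary value Dτ/2. The parameters, step size and seed below are illustrative assumptions:

```python
import numpy as np

# Euler-Maruyama sketch of the Ornstein-Uhlenbeck process of Eq. (33):
# dX = -(X/tau) dt + D^(1/2) dW, with stationary variance D*tau/2.
rng = np.random.default_rng(1)
tau, D = 1.0, 2.0                       # illustrative parameters
dt, n_steps, burn = 5e-3, 400_000, 40_000
xi = np.sqrt(D * dt) * rng.standard_normal(n_steps)   # D^(1/2) dW increments
x = 0.0
samples = np.empty(n_steps - burn)
for k in range(n_steps):
    x += (-x / tau) * dt + xi[k]        # Euler-Maruyama step
    if k >= burn:
        samples[k - burn] = x           # keep only the stationary part
print(samples.var(), D * tau / 2.0)     # sample variance close to D*tau/2
```

The burn-in discards the transient decay of the deterministic mean before the stationary statistics are collected.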

On the basis of a simple consideration, the power spectral density of the Ornstein-Uhlenbeck process is:

$S\left(\omega ,\tau \right)=\frac{D{\tau}^{2}}{1+{\left(\omega \tau \right)}^{2}}$ (34)

where $\tau $ is the time constant of the system; the spectral density is similar to (8). At the same time, $\tau $ can be considered as the natural time scale of the stochastic process. Let us introduce a frequency scale by the definition:

$\lambda =\frac{1}{\tau}$ (35)

To characterize the stochastic processes, let $G\left(\lambda \right)\text{d}\lambda $ be the number of stochastic processes falling into the frequency interval $\left(\lambda ,\lambda +\text{d}\lambda \right)$ ; then, for the energy spectrum of the stochastic processes falling into the $\left({\lambda}_{1},{\lambda}_{2}\right)$ interval of frequency scales, we have

$S\left(\omega ,{\lambda}_{1},{\lambda}_{2}\right)={\displaystyle \underset{{\lambda}_{1}}{\overset{{\lambda}_{2}}{\int}}\frac{DG\left(\lambda \right)}{{\lambda}^{2}+{\left(\omega \right)}^{2}}\text{d}\lambda}$ (36)

If the distribution is uniform, namely, if

$G\left(\lambda \right)\text{d}\lambda =\frac{\text{d}\lambda}{{\lambda}_{2}-{\lambda}_{1}}$ (37)

we get that

$S\left(f,{\lambda}_{1},{\lambda}_{2}\right)={\displaystyle \underset{{\lambda}_{1}}{\overset{{\lambda}_{2}}{\int}}\frac{DG\left(\lambda \right)}{{\lambda}^{2}+{\omega}^{2}}\text{d}\lambda}=\{\begin{array}{l}D,\text{if}\text{\hspace{1em}}0<\omega \ll {\lambda}_{1}\ll {\lambda}_{2}\\ \frac{D\text{\pi}}{2\omega \left({\lambda}_{2}-{\lambda}_{1}\right)},\text{if}\text{\hspace{1em}}{\lambda}_{1}\ll \omega \ll {\lambda}_{2}\\ \frac{D}{{\omega}^{2}},\text{if}\text{\hspace{1em}}{\lambda}_{1}\ll {\lambda}_{2}\ll \omega \end{array}$ (38)

This is a well-known result, with white, pink and Wiener (Brownian) noise in the first, second and third intervals, respectively.
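The middle and high-frequency regimes of Equation (38) can be checked against the closed form of the integral (the cut-off values below are arbitrary illustrative choices):

```python
import numpy as np

# For uniform G(lambda) = 1/(lam2 - lam1), the integral in Eq. (38) evaluates to
# D * (arctan(lam2/w) - arctan(lam1/w)) / (w * (lam2 - lam1)); compare the regimes.
D, lam1, lam2 = 1.0, 1e-3, 1e3             # arbitrary cut-offs with lam1 << lam2

def S(omega):
    return D * (np.arctan(lam2 / omega) - np.arctan(lam1 / omega)) / (omega * (lam2 - lam1))

pink = D * np.pi / (2 * 1.0 * (lam2 - lam1))   # pink regime prediction at omega = 1
wiener = D / 1e5 ** 2                          # Wiener regime prediction at omega = 1e5
print(S(1.0) / pink, S(1e5) / wiener)          # both ratios close to 1
```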

We can choose a time interval of a record where the noise is similar to the original one, and within this, we may choose an interval where the noise is similar to the noise of the interval from which we carried out the previous selection. Mathematically, this means that the frequency of the chosen noise component can be scaled in such a way that it becomes similar to a noise component on another frequency scale. It follows that the distribution function can be scaled in a self-similar way. (Of course, this cannot be applied on every scale, but we can find a scale on which the distribution function of the system can be scaled.)

$G\left(\lambda \right)=\frac{\gamma}{N}G\left(\frac{\lambda}{N}\right)$ (39)

This means that, by a uniform enlargement of the ordinate values, the distribution function taken on the $\frac{\lambda}{N}$ frequency scale can be overlapped with the distribution function taken on the $\lambda $ scale.

We may easily see that the solution of the above functional equation takes the form of

$G\left(\lambda \right)=\frac{A\left(\lambda \right)}{{\lambda}^{1+\alpha}},\text{\hspace{1em}}\alpha =\frac{\mathrm{ln}\frac{1}{\gamma}}{\mathrm{ln}N}$ (40)

where $A\left(\lambda \right)$ is a periodic function of the scale on which the distribution function is self-similar. Namely,

$A\left(\lambda \right)=A\left(\frac{\lambda}{N}\right)$ (41)

For the sake of simplicity, let us suppose that this function is constant, and calculate again the energy spectrum of the stochastic processes falling into the $\left({\lambda}_{1},{\lambda}_{2}\right)$ frequency interval. From this we get

$\begin{array}{l}S\left(f,{\lambda}_{1},{\lambda}_{2}\right)={\displaystyle \underset{{\lambda}_{1}}{\overset{{\lambda}_{2}}{\int}}\frac{DG\left(\lambda \right)}{{\lambda}^{2}+{\omega}^{2}}\text{d}\lambda}={\displaystyle \underset{{\lambda}_{1}}{\overset{{\lambda}_{2}}{\int}}\frac{DA}{\left({\lambda}^{2}+{\omega}^{2}\right){\lambda}^{1+\alpha}}\text{d}\lambda}\\ =\frac{DA}{{\omega}^{2+\alpha}}{\displaystyle \underset{{\lambda}_{1}/\omega}{\overset{{\lambda}_{2}/\omega}{\int}}\frac{1}{\left[{\left(\frac{\lambda}{\omega}\right)}^{2}+1\right]{\left(\frac{\lambda}{\omega}\right)}^{1+\alpha}}\text{d}\frac{\lambda}{\omega}}\end{array}$ (42)

The integral can be expressed using hypergeometric functions; however, it is not easy to give it a descriptive meaning. For this reason, let us perform the integration over the $\left(0,\infty \right)$ interval. With the exception of the pink-noise case, the result will be a finite constant. With this approach we get the expected result.

$\begin{array}{c}S\left(\omega ,{\lambda}_{1},{\lambda}_{2}\right)=\frac{DA}{{\omega}^{2+\alpha}}{\displaystyle \underset{{\lambda}_{1}/\omega}{\overset{{\lambda}_{2}/\omega}{\int}}\frac{1}{\left[{\left(\frac{\lambda}{\omega}\right)}^{2}+1\right]{\left(\frac{\lambda}{\omega}\right)}^{1+\alpha}}\text{d}\frac{\lambda}{\omega}}\\ \approx \frac{DA}{{\omega}^{2+\alpha}}{\displaystyle \underset{0}{\overset{\infty}{\int}}\frac{1}{\left[{\left(\frac{\lambda}{\omega}\right)}^{2}+1\right]{\left(\frac{\lambda}{\omega}\right)}^{1+\alpha}}\text{d}\frac{\lambda}{\omega}}\\ \propto \frac{1}{{\omega}^{2+\alpha}}\end{array}$ (43)

Consequently, the self-similar distribution function is the condition for getting the

$S\left(\omega \right)\propto \frac{1}{{\omega}^{\beta}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left(\beta =2+\alpha \right)$ (44)

power spectral density.
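The β = 2 + α relationship of Equations (43)-(44) can be verified numerically; the sketch below (illustrative cut-offs, and α chosen inside the range where the $\left(0,\infty \right)$ integral converges) fits the log-log slope of the integral in Equation (42):

```python
import numpy as np

def trap(y, x):
    # trapezoidal rule on a non-uniform grid
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

D, alpha = 1.0, -0.5                      # alpha in (-2, 0) keeps the integral finite
lam = np.logspace(-4, 4, 300_000)         # wide, arbitrary cut-offs
omegas = np.logspace(-1, 1, 9)            # probe far from both cut-offs
S = [trap(D / ((lam ** 2 + w ** 2) * lam ** (1 + alpha)), lam) for w in omegas]
slope = np.polyfit(np.log(omegas), np.log(S), 1)[0]
print(slope)                              # close to -(2 + alpha) = -1.5
```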

How can these distribution functions be considered universal? In order to prove the universality of this result, we need to carry out a universality test. We are going to show that such a distribution can be derived from any distribution function converging to zero at high frequencies $\lambda $ .

Let us indeed take $g\left(\lambda \right)$ to be such a function, namely a fully general, not necessarily lognormal function, for which

$\underset{\lambda \to \infty}{\mathrm{lim}}g\left(\lambda \right)=0$ (45)

In the dynamic events of a collective system, the movements in time are built strictly on each other, and the whole course can be derived only in a recursive way (causality principle). Philosophically, this means that the time of a given process or system can be described only by the superimposing order of interactions (not by the order of side-by-side existence). This means that the interactions are built on each other and set off the wearing process (the progress of time). Therefore, time (on a single-variable and causality basis) is composed of superimposed and successive recurrent steps. The progress of time corresponds to the process of wearing. This, at the same time, corresponds to the recursive self-organization resulting in the formation of the Mandelbrot set. Consequently, we get a specific self-organization characteristic of the system, which makes the relevant system specific and, in a certain respect, distinguishable from the others. In this manner, this self-similar solution corresponds to the thermodynamic notion of entropy.

According to the recursive causality idea [13] [14] , let us generate a distribution function using the recursive method, as the process was described above:

$\begin{array}{l}{g}_{i}\left({\xi}_{i}\right)=\frac{\gamma}{N}{g}_{i-1}\left(\frac{{\xi}_{i-1}}{N}\right),\text{\hspace{1em}}\left(i=1,2,\cdots \right)\\ {g}_{0}\left({\xi}_{0}\right)=g\left(\lambda \right)\end{array}$ (46)

By using these functions, let us generate the

$\begin{array}{c}G\left(\lambda \right)=\left(1-\gamma \right){\displaystyle \underset{i=1}{\overset{\infty}{\sum}}{g}_{i}\left({\xi}_{i}\right)}\\ =\left(1-\gamma \right)\left[g\left(\lambda \right)+\frac{\gamma}{N}g\left(\frac{\lambda}{N}\right)+{\left(\frac{\gamma}{N}\right)}^{2}g\left(\frac{\lambda}{{N}^{2}}\right)+\cdots \right]\end{array}$ (47)

distribution function. It is easy to show that this complies with the

$G\left(\lambda \right)=\frac{\gamma}{N}G\left(\frac{\lambda}{N}\right)+\left(1-\gamma \right)g\left(\lambda \right)$ (48)

functional equation. In accordance with our limitation for high λ-values (see (45)), the value of $g\left(\lambda \right)$ tends to zero, so we get the functional equation

$G\left(\lambda \right)=\frac{\gamma}{N}G\left(\frac{\lambda}{N}\right)$ (49)

which expresses exactly the self-similar property.
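The construction of Equations (46)-(49) is easy to replay numerically; in the sketch below the Gaussian seed g(λ) and the values of γ and N are arbitrary illustrative choices:

```python
import numpy as np

gamma, N = 0.5, 2.0                       # arbitrary: 0 < gamma < 1, N > 1
g = lambda lam: np.exp(-lam ** 2)         # hypothetical seed, g(lam) -> 0 for lam -> inf

def G(lam, terms=200):
    # truncated series of Eq. (47): G(lam) = (1-gamma) * sum_i (gamma/N)^i g(lam/N^i)
    return (1 - gamma) * sum((gamma / N) ** i * g(lam / N ** i) for i in range(terms))

lam = 50.0                                # large lam, where g(lam) is negligible
lhs = G(lam)
rhs = (gamma / N) * G(lam / N) + (1 - gamma) * g(lam)   # Eq. (48)
print(lhs, rhs)                           # equal; with g(50) ~ 0 this is Eq. (49)
```

Because the seed vanishes at large λ, the constructed G indeed satisfies the pure self-similarity relation of Equation (49) there.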

5. The Generation of Colored Noise Is Not Unique

We derived above the $1/{f}^{\alpha}$ colored noise from the Ornstein-Uhlenbeck process. Now we are going to show that colored noise can be derived from the Lorentz process as well, except that, in this case, the distributions of the individual time constants will differ.

It follows from this that the lognormal distribution is not a necessary requirement for $1/{f}^{\alpha}$ noise. In order to prove this, let us take the other simplest of the self-similar stochastic processes:

$\begin{array}{l}\frac{\text{d}X}{\text{d}t}=-\frac{1}{\tau}X+{D}^{\frac{1}{2}}\Gamma \left(t\right),\\ D=\frac{{D}_{0}}{\sqrt{\tau}}\end{array}$ (50)

This is called the Lorentz stochastic process. Here, as we saw earlier, $\Gamma \left(t\right)=\underset{\text{d}t\to 0}{\mathrm{lim}}N\left(0,\text{d}{t}^{-1}\right)$ is a white noise of normal distribution. We have seen on the basis of simple consideration that the power spectral density of the process is as follows (similar again to (8)):

$S\left(\omega ,\tau \right)=\frac{{D}_{0}\tau}{1+{\left(\omega \tau \right)}^{2}}$ (51)

In this case, $\tau $ is the time constant of the system generating the stochastic signal. It can be considered the natural time scale of the stochastic process, since it characterizes how the two-point correlation function of the process changes.

Indeed, we know that the power spectral density of the signal equals the Fourier transform of its correlation function. We get from this and Equation (51) the two-point correlation function:

${C}_{XX}\left(\vartheta \right)={F}^{-1}\left[S\left(\omega ,\tau \right)\right]={F}^{-1}\left[\frac{{D}_{0}\tau}{1+{\left(\omega \tau \right)}^{2}}\right]={D}_{0}{\text{e}}^{-\frac{\left|\vartheta \right|}{\tau}}$ (52)

where ${F}^{-1}$ denotes the inverse Fourier transformation. Therefore, the degree of correlation decreases exponentially with the time constant $\tau $. Because of this property, $\tau $ is called the time-correlation length. Let us suppose that $G\left(\tau \right)\text{d}\tau $ is the number of statistically independent stochastic processes falling into the interval $\left(\tau ,\tau +\text{d}\tau \right)$ of time-correlation length; thus, the resultant energy spectrum falling into the $\left(0,\infty \right)$ interval is:

$S\left(\omega \right)={\displaystyle \underset{0}{\overset{\infty}{\int}}\frac{{D}_{0}\tau G\left(\tau \right)}{1+{\left(\tau \omega \right)}^{2}}\text{d}\tau}$ (53)
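Before superposing processes, the single-process statements (50)-(52) can be checked numerically. The sketch below (an illustration only; the step size, seed and run length are arbitrary choices, not from the paper) integrates the Lorentz process by the Euler-Maruyama scheme and verifies that the autocorrelation at lag $\tau $ has fallen to roughly ${\text{e}}^{-1}$ of the variance, as Equation (52) predicts.

```python
import numpy as np

rng = np.random.default_rng(0)
tau, D0 = 1.0, 2.0
D = D0 / np.sqrt(tau)                 # Lorentz scaling of Eq. (50)
dt, n = 0.01, 1_000_000

# Euler-Maruyama integration of dX/dt = -X/tau + sqrt(D) * Gamma(t)
x = np.empty(n)
x[0] = 0.0
kicks = rng.standard_normal(n - 1) * np.sqrt(D * dt)
for i in range(n - 1):
    x[i + 1] = x[i] * (1.0 - dt / tau) + kicks[i]

x = x[n // 10:]                       # discard the initial transient
var = x.var()                         # stationary variance, analytically D*tau/2 = 1
lag = int(tau / dt)                   # lag of one correlation time
corr = np.mean(x[:-lag] * x[lag:]) / var
print(round(corr, 2))                 # close to e^{-1}, per Eq. (52)
```

The exponential correlation decay is what makes $\tau $ a genuine time-correlation length for this process.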

If the distribution is scale-invariant, i.e. if we require only self-similarity, then the probability $G\left(\tau \right)\text{d}\tau $ must be scale-independent:

$G\left(\tau \right)\text{d}\tau =G\left(\alpha \tau \right)\text{d}\alpha \tau \Rightarrow \frac{\text{d}\alpha \tau}{\alpha \tau}=\frac{\text{d}\tau}{\tau}$ (54)

so

$G\left(\tau \right)\text{d}\tau =\frac{\text{d}\tau}{\tau}$ (55)

Hence, by using the

$\underset{0}{\overset{\infty}{\int}}\frac{1}{1+{\left(\tau \omega \right)}^{2}}\text{d}\tau =\frac{\text{\pi}}{2}\frac{1}{\omega}$ (56)

improper integral, from (53) we get the expected result:

$S\left(\omega \right)={\displaystyle \underset{0}{\overset{\infty}{\int}}\frac{{D}_{0}\tau G\left(\tau \right)}{1+{\left(\tau \omega \right)}^{2}}\text{d}\tau}={D}_{0}{\displaystyle \underset{0}{\overset{\infty}{\int}}\frac{\tau \frac{1}{\tau}}{1+{\left(\tau \omega \right)}^{2}}\text{d}\tau}=\frac{{D}_{0}\text{\pi}}{2}\frac{1}{\omega}\propto \frac{1}{f}$ (57)
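The superposition integral (57) can also be evaluated numerically. The sketch below (integration limits and grid are arbitrary numerical choices) sums Lorentzians weighted by the scale-invariant density $G\left(\tau \right)=1/\tau $ and confirms that the resulting spectrum scales as $1/\omega $ inside the band of available correlation times.

```python
import numpy as np

D0 = 1.0
tau = np.logspace(-6, 6, 400_000)   # wide band of time-correlation lengths

def S(omega):
    """Trapezoid-rule evaluation of Eq. (53) with G(tau) = 1/tau."""
    f = D0 * tau * (1.0 / tau) / (1.0 + (tau * omega) ** 2)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(tau))

w1, w2 = 0.1, 10.0
ratio = S(w1) / S(w2)
print(round(ratio, 1))              # close to w2/w1 = 100, i.e. S(omega) ∝ 1/omega
```

The check works at any pair of frequencies well inside $\left({\tau}_{\mathrm{max}}^{-1},{\tau}_{\mathrm{min}}^{-1}\right)$; outside that band the truncated integral departs from the pure $1/\omega $ law.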

More generally, if we suppose that

$G\left(\alpha \tau \right)={\alpha}^{\beta}G\left(\tau \right)$ (58)

then

$G\left(\tau \right)={\tau}^{\beta}$ (59)

If we require only the self-similarity, we get from Equation (53) and Equation (59) that the noise spectrum of signals falling into the $\left(0,\infty \right)$ interval is:

$S\left(\omega \right)={\displaystyle \underset{0}{\overset{\infty}{\int}}\frac{{D}_{0}\tau G\left(\tau \right)}{1+{\left(\tau \omega \right)}^{2}}\text{d}\tau}={\displaystyle \underset{0}{\overset{\infty}{\int}}\frac{{D}_{0}{\tau}^{\beta +1}}{1+{\left(\tau \omega \right)}^{2}}\text{d}\tau}$ (60)

Because of the physical representation, it is advisable to convert the integral into the following form:

$S\left(\omega \right)={\displaystyle \underset{0}{\overset{\infty}{\int}}\frac{{D}_{0}{\tau}^{\beta +1}}{1+{\left(\tau \omega \right)}^{2}}\text{d}\tau}=\frac{{D}_{0}}{{\omega}^{\beta +2}}{\displaystyle \underset{0}{\overset{\infty}{\int}}\frac{{\left(\omega \tau \right)}^{\beta +1}}{1+{\left(\tau \omega \right)}^{2}}\text{d}\left(\omega \tau \right)}$ (61)

For this integral, we are not able to give a general solution. Fortunately, in our case, the improper integral can be expressed in a closed form if $0<\beta +2<2$ . Namely,

$\underset{0}{\overset{\infty}{\int}}\frac{{\left(\omega \tau \right)}^{\beta +1}}{1+{\left(\tau \omega \right)}^{2}}\text{d}\left(\omega \tau \right)=\frac{\text{\pi}}{2\mathrm{sin}\left(\frac{\left(\beta +2\right)\text{\pi}}{2}\right)}=A$ (62)

From Equation (61) and Equation (62) we obtain:

$S\left(\omega \right)=\frac{{D}_{0}}{{\omega}^{\beta +2}}{\displaystyle \underset{0}{\overset{\infty}{\int}}\frac{{\left(\omega \tau \right)}^{\beta +1}}{1+{\left(\tau \omega \right)}^{2}}\text{d}\left(\omega \tau \right)}=\frac{{D}_{0}A}{{\omega}^{\beta +2}}$ (63)

Consequently, a self-similar distribution function is the condition for obtaining the

$S\left(\omega \right)\propto \frac{1}{{\omega}^{\alpha}}$ (64)

power spectral density. An advantage of the Lorentzian process over the Ornstein-Uhlenbeck one is its fixed integration boundaries: no arbitrary cut-off is needed to keep the integrals finite.
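The closed form (62) is easy to verify numerically. The sketch below (the particular value of $\beta $ and the integration grid are arbitrary choices) compares a trapezoid-rule evaluation of the improper integral against the $\text{\pi}/\left(2\mathrm{sin}\left(\left(\beta +2\right)\text{\pi}/2\right)\right)$ expression.

```python
import numpy as np

beta = -1.5                        # any beta with 0 < beta + 2 < 2
u = np.logspace(-8, 8, 400_000)    # substitution u = omega * tau
f = u ** (beta + 1) / (1.0 + u ** 2)
numeric = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(u))   # trapezoid rule
analytic = np.pi / (2.0 * np.sin((beta + 2) * np.pi / 2.0))  # Eq. (62)
print(round(numeric, 3), round(analytic, 3))  # the two values agree
```

For $\beta =-1.5$ both sides equal $\text{\pi}/\sqrt{2}\approx 2.221$; outside the window $0<\beta +2<2$ the integral diverges at one of its limits, which is exactly why the restriction in the text is needed.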

The foregoing can be generalized. Namely, let us start from the stochastic process described by the

$\begin{array}{l}\frac{\text{d}X}{\text{d}t}=-\frac{1}{\tau}X+{D}^{\frac{1}{2}}\Gamma \left(t\right),\\ D=\frac{{D}_{0}}{{\tau}^{\gamma}}\end{array}$ (65)

equation instead of Equation (50), where $\Gamma \left(t\right)=\underset{\text{d}t\to 0}{\mathrm{lim}}N\left(0,\text{d}{t}^{-1}\right)$ is the normally distributed white noise.

Then, on the basis of simple consideration, we can see that the power spectral density will have the

$S\left(\omega ,\tau \right)=\frac{{D}_{0}{\tau}^{2-\gamma}}{1+{\left(\omega \tau \right)}^{2}}$ (66)

form. If we require only self-similarity, we get from Equation (66) and Equation (59) that the noise spectrum of signals falling into the $\left(0,\infty \right)$ interval is

$S\left(\omega \right)={\displaystyle \underset{0}{\overset{\infty}{\int}}\frac{{D}_{0}{\tau}^{2-\gamma}G\left(\tau \right)}{1+{\left(\tau \omega \right)}^{2}}\text{d}\tau}={\displaystyle \underset{0}{\overset{\infty}{\int}}\frac{{D}_{0}{\tau}^{\beta -\gamma +2}}{1+{\left(\tau \omega \right)}^{2}}\text{d}\tau}$ (67)

Because of the physical representation, it is advisable to convert the integral into the

$S\left(\omega \right)={\displaystyle \underset{0}{\overset{\infty}{\int}}\frac{{D}_{0}{\tau}^{\beta -\gamma +2}}{1+{\left(\tau \omega \right)}^{2}}\text{d}\tau}=\frac{{D}_{0}}{{\omega}^{\beta -\gamma +3}}{\displaystyle \underset{0}{\overset{\infty}{\int}}\frac{{\left(\omega \tau \right)}^{\beta -\gamma +2}}{1+{\left(\tau \omega \right)}^{2}}\text{d}\left(\omega \tau \right)}$ (68)

form.

In our case, the improper integral can be expressed again in a closed form if $0<\beta -\gamma +3<2$ . That is:

$\underset{0}{\overset{\infty}{\int}}\frac{{\left(\omega \tau \right)}^{\beta -\gamma +2}}{1+{\left(\tau \omega \right)}^{2}}\text{d}\left(\omega \tau \right)=\frac{\text{\pi}}{2\mathrm{sin}\left(\frac{\left(\beta -\gamma +3\right)\text{\pi}}{2}\right)}=A$ (69)

Now, from Equation (68) and Equation (69) we have that

$S\left(\omega \right)=\frac{{D}_{0}}{{\omega}^{\beta -\gamma +3}}{\displaystyle \underset{0}{\overset{\infty}{\int}}\frac{{\left(\omega \tau \right)}^{\beta -\gamma +2}}{1+{\left(\tau \omega \right)}^{2}}\text{d}\left(\omega \tau \right)}=\frac{{D}_{0}A}{{\omega}^{\beta -\gamma +3}}$ (70)

Therefore, from the self-similarity, we get again the

$S\left(\omega \right)\propto \frac{1}{{\omega}^{\alpha}}$ (71)

power spectral density of the colored noise!

From this, we may draw the conclusion that self-similarity can be considered a fundamental property in the generation of colored noises, and the existence of self-similarity alone is a sufficient condition for their presence; neither the underlying stochastic processes nor the particular distributions have a role in generating this phenomenon.

6. Connection between the Fluctuation and the Induced Noise Theory

Above, we derived $1/{f}^{\alpha}$ noise from the noise spectrum of a system driven by white noise, while van der Ziel [14] [15] [16] derived the colored noise from fluctuations. Next, we show the equivalence of these two methods, namely that white-noise-driven systems and fluctuation-based systems both have colored noise spectra.

Thermodynamic fluctuations can be characterized by a macroscopic fluctuation quantity. The field region in which the fluctuation is generated is not uniform with regard to the fluctuation quantity; however, it is in a state of equilibrium at every point. The latter means that, among the field regions, the exchange of the extensive quantities characteristic of the fluctuation can be neglected during the relaxation time of equilibration. An additional characteristic of thermodynamic fluctuations is that each fluctuation lasts for a finite duration, and the rate of change of the individual extensive parameters ${a}_{i},\left(i=1,2,\cdots ,n\right)$ during the fluctuation can be expressed by the extensive quantities participating in it:

$\frac{\text{d}{a}_{i}}{\text{d}t}=f\left({a}_{1},{a}_{2},\cdots ,{a}_{n}\right),\text{\hspace{1em}}\left(i=1,2,\cdots ,n\right)$ (72)

If one of the extensive parameters has a relaxation time much longer than that of the others, then the fluctuation can be described by a single extensive parameter. Let us suppose that Equation (72) is linear and that the system returns to its equilibrium; then the equation describes a completely deterministic and noiseless fluctuation process of one variable:

$\frac{\text{d}a}{\text{d}t}=-\lambda a$ (73)

The solution of this equation will be as follows:

$a\left(t\right)=a\left(0\right){\text{e}}^{-\lambda t}$ (74)

Then, the correlation function is

${f}_{aa}\left(\tau \right)=\langle a\left(\tau \right)a\left(0\right)\rangle ={\left[a\left(0\right)\right]}^{2}{\text{e}}^{-\lambda \left|\tau \right|}$ (75)

and the power spectral density of this:

$S\left(i\omega \right)={\displaystyle \underset{-\infty}{\overset{\infty}{\int}}{f}_{aa}\left(\tau \right){\text{e}}^{-i\omega \tau}\text{d}\tau ={\left[a\left(0\right)\right]}^{2}\frac{\lambda}{{\lambda}^{2}+{\omega}^{2}}}$ (76)

How do we obtain noise from deterministic conditions? A deterministic process generates a fluctuation, and its spectrum is deterministic; the expected noise, by contrast, is neither deterministic nor a single fluctuation. To make the comparison, we suppose that the deterministic fluctuation signal repeats itself at random times. The constructed noise is then a series of randomly repeating deterministic signals. If we introduce a white noise term into the deterministic equation (the result is a Langevin equation), then the amplitude of the white noise spectrum can be chosen so that it corresponds to the noise spectrum generated by the deterministic signal with random repetition. The same is true for the correlation function. This gives white noise for small ω values and $1/{\omega}^{2}$ Brownian noise for high ω values.
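The random-repetition construction above can be illustrated with a short simulation (a sketch only; the restart rate, step size and seed are assumed values, not from the paper): deterministic exponential decays, Equation (74), are restarted at Poisson-distributed random times, and the autocorrelation of the resulting pulse train decays with the same $\lambda $ as the single deterministic pulse.

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 1.0                  # decay rate of each deterministic relaxation, Eq. (73)
rate = 5.0                 # mean number of random restarts per unit time (assumed)
dt, T = 0.01, 10_000.0
n = int(T / dt)

# Poisson-restarted deterministic decays: each event adds a pulse a(0) e^{-lam t}
events = rng.random(n) < rate * dt
x = np.zeros(n)
amp = 0.0
for i in range(n):
    amp *= (1.0 - lam * dt)       # deterministic relaxation da/dt = -lam * a
    if events[i]:
        amp += 1.0                # restart with a(0) = 1
    x[i] = amp

x = x - x.mean()
lag = int(1.0 / (lam * dt))       # lag of one relaxation time
corr = np.mean(x[:-lag] * x[lag:]) / x.var()
print(round(corr, 2))             # close to e^{-1}: the train inherits the pulse correlation
```

This is the content of the argument in the text: randomness of the repetition times turns a deterministic fluctuation into a stationary noise whose correlation function, and hence spectrum, matches Equation (75)-(76) up to a proportionality coefficient.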

The power spectral density of the random series of such fluctuations differs only in a proportionality coefficient, as explained in [2] . If the distribution of the λ frequencies is uniform, then the resultant spectrum shows white noise at low frequencies, 1/f noise in the middle, and Brownian noise at the tail. Shlesinger also departs from this type of fluctuation [5] , but writes Equation (73) in the equivalent form:

$\frac{\text{d}a}{\text{d}t}=-\lambda a=-\frac{1}{\tau}a$ (77)

In this case, instead of (76), the spectrum is:

$S\left(i\omega \right)={\displaystyle \underset{-\infty}{\overset{\infty}{\int}}{f}_{aa}\left(\tau \right){\text{e}}^{-i\omega \tau}\text{d}\tau}={\left[a\left(0\right)\right]}^{2}\frac{\tau}{1+{\left(\tau \omega \right)}^{2}}$ (78)

Supposing (as we did before in Part 1) that the probability density function of the time-correlation length is lognormal, we get the resultant noise spectrum of $1/{f}^{\alpha}$ . We may conclude that the deterministic nature of this process is not an essential requirement for a colored noise spectrum, provided we suppose a random series of such fluctuations, in the same way as in Equation (51).

Let us introduce, for example, a stochastic exciting term into Equation (77):

$\frac{\text{d}a}{\text{d}t}=-\frac{1}{\tau}a+q\left(t\right)$ (79)

We state that the spectrum of this signal can be made to correspond to the power spectral density of the fluctuation, Equation (78), and that this condition can always be fulfilled. To prove this, let us calculate the Fourier transform of Equation (79). We get:

$\left(i\omega +\frac{1}{\tau}\right)a=q\left(\omega \right)\to a\left(\omega \right)=\frac{\tau}{1+\left(i\omega \tau \right)}q\left(\omega \right)$ (80)

From this, we have the power spectral density:

$S\left(\omega \right)=\frac{{\tau}^{2}}{1+{\left(\omega \tau \right)}^{2}}{\left|q\left(\omega \right)\right|}^{2}$ (81)

We may see that if

$q\left(\omega \right)=\frac{a\left(0\right)}{\sqrt{\tau}}$ (82)

is chosen, it leads us to the expected result. Consequently, if $q\left(t\right)$ is a white noise of amplitude $\frac{a\left(0\right)}{\sqrt{\tau}}$ , then the noise spectrum of the signal is equivalent to the noise spectrum of the fluctuation! Moreover, two stochastic processes are equivalent if their noise spectra are the same.
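The equivalence can be checked through the stationary variance (a sketch with arbitrary parameters, not from the paper): integrating the fluctuation spectrum (78) over all frequencies gives ${\left[a\left(0\right)\right]}^{2}/2$, and a simulated Langevin process (79) driven by white noise of the amplitude (82) reproduces this value.

```python
import numpy as np

rng = np.random.default_rng(2)
tau, a0 = 0.5, 2.0
q = a0 / np.sqrt(tau)        # white-noise amplitude of Eq. (82)
dt, n = 0.005, 1_000_000

# Euler-Maruyama integration of Eq. (79): da/dt = -a/tau + q(t)
x = np.zeros(n)
kicks = q * np.sqrt(dt) * rng.standard_normal(n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] * (1.0 - dt / tau) + kicks[i]

var = x[n // 10:].var()
# (1/2pi) * Integral of a0^2 * tau / (1 + (omega*tau)^2) over all omega = a0^2 / 2
print(round(var, 2))         # close to a0^2/2 = 2.0
```

Matching the variance alongside the identical Lorentzian shape of Equation (81) is what licenses the statement in the text that the white-noise-driven signal and the fluctuation have the same noise spectrum.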

The above monitoring could be suitable for numerous technical solutions for operating and controlling complex systems, for example:

・ by using a parameter predetermined on the properly operating system, we can observe the state of system completeness,

・ it can replace a complicated multi-sensor observation system (for localizing the place of a fault we still have to use local sensors, but in more integrated measurement groups than without this procedure),

・ it can forecast the trends indicating possible faults,

・ it can observe the trend of system wear-out (lifetime),

・ it can measure, during the planning process, the degree to which a uniform dynamical load is implemented, using how closely the exponent approximates (-1),

・ it can expose unusual, suddenly occurring changes, usage faults and unauthorized usage (e.g. a non-qualified person intervenes and modifies the invariant quantity, even if this does not result in an operational fault; e.g. the manual gearbox of cars).

The present results are applicable to such complex systems as living organisms. Fractal physiology characterizes a living system by time-fractal analysis [17] [18] [19] , which is equivalent to the above noise/fluctuation approach. The analysis of a normally functioning living organism shows that these noises are self-similar on their time scale. As has been shown [20] [21] , such analysis can discover abnormalities very early and/or can check the ageing status of the human body [22] . Earlier, we investigated the method theoretically [23] [24] .

The present investigation could in the future lead to an “inverse” treatment, in which the colored-noise signal recorded from the initially properly working system is fed back during functioning, helping to extend the faultless operating lifetime. This idea is applied in the modulated electro-hyperthermia treatment process, where the modulation mimics the healthy homeostatic noise [25] .

7. Conclusion

Our present study shows the possibility of measuring a system-dependent invariant (scale-independent) parameter, which characterizes the actual status of the whole complex system, informs about the interactive “harmony” of the system and makes it possible to check the proper function of the system as a complex unit. We observed that the noise contains the entire dynamics and practically every dynamical variable of the whole system, the interactions of which contribute to the generation of the given (desired/useful) signal. Therefore, we may examine the system as a whole and analyze its operation on the basis of its noise spectrum. All the faults arising from wear, tear and fatigue processes (in general, from stochastic changes) result in a continuous change of the noise spectrum. Therefore, measurement of the noise spectrum allows the prediction of wear/tear (fatigue, etc.) processes. This information facilitates control of the given system under its concrete functioning conditions, including its evolutionary trend, predicting possible failures or lifetime thresholds during proper function, without statistically determined, system-independent data.

Conflicts of Interest

The authors declare no conflicts of interest.

[1] Robinson, F.N.H. (1974) Noise and Fluctuations. Clarendon Press, Oxford.

[2] Freeman, J.J. (1958) Principles of Noise. John Wiley & Sons, Inc.

[3] Reif, F. (1965) Statistical and Thermal Physics. McGraw Hill, New York.

[4] Voss, R.F. (1993) 1/f Noise and Fractals in DNA-Base Sequences. In: Crilly, A.J., Earnshaw, R.A. and Jones, H., Eds., Application of Fractals and Chaos, The Shape of Things, Springer-Verlag, Berlin, Heidelberg, 7-20. https://doi.org/10.1007/978-3-642-78097-4_2

[5] van der Ziel, A. (1950) On the Noise Spectra of Semi-Conductor Noise and of Flicker Effect. Physica, 16, 359-372. https://doi.org/10.1016/0031-8914(50)90078-4

[6] Shlesinger, M.F. (1987) Fractal Time and 1/f Noise in Complex Systems. Annals of the New York Academy of Sciences, 504, 214-225. https://doi.org/10.1111/j.1749-6632.1987.tb48734.x

[7] Vicsek, T. (2001) Fluctuations and Scaling in Biology. Oxford University Press.

[8] Brown, J.H. and West, G.B. (2000) Scaling in Biology. Oxford University Press.

[9] Calder III, W.A. (1996) Size, Function and Life History. Dover Publications Inc., Mineola, New York.

[10] Li, W. (1989) Spatial 1/f Spectra in Open Dynamical Systems. Europhysics Letters, 10, 395-400. https://doi.org/10.1209/0295-5075/10/5/001

[11] Gillespie, D.T. (1996) The Mathematics of Brownian Motion and Johnson Noise. American Journal of Physics, 64, 225-240. https://doi.org/10.1119/1.18210

[12] Uhlenbeck, G.E. and Ornstein, L.S. (1930) On the Theory of Brownian Motion. Physical Review, 36, 823-841. https://doi.org/10.1103/PhysRev.36.823

[13] Harney, H.L. (2003) Bayesian Inference. Springer-Verlag, Berlin, Heidelberg, New York. https://doi.org/10.1007/978-3-662-06006-3

[14] Jaynes, E.T. (2003) Probability Theory: The Logic of Science. Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9780511790423

[15] Shlesinger, M.F. and West, B.J. (1988) 1/f versus 1/fα Noise. In: Stanley, H.E. and Ostrowsky, N., Eds., Random Fluctuations and Pattern Growth: Experiments and Models, Kluwer Academic Publishers, Dordrecht, Boston, London. https://doi.org/10.1007/978-94-009-2653-0_45

[16] Milotti, E. (2002) 1/fα Noise: A Pedagogical Review.

[17] West, B.J. (1990) Fractal Physiology and Chaos in Medicine. World Scientific, Singapore, London. https://doi.org/10.1142/1025

[18] Bassingthwaighte, J.B., Liebovitch, L.S. and West, B.J. (1994) Fractal Physiology. Oxford University Press, New York, Oxford. https://doi.org/10.1007/978-1-4614-7572-9

[19] Musha, T. and Sawada, Y. (1994) Physics of the Living State. IOS Press, Amsterdam.

[20] Wagner, C.D., Mrowka, R., Nafz, B. and Persson, P.B. (1995) Complexity and “Chaos” in Blood Pressure after Baroreceptor Denervation of Conscious Dogs. American Journal of Physiology, 269, H1760-H1766.

[21] Butler, G.C., Yamamoto, Y., Xing, H.C., Northey, D.R. and Hughson, R.L. (1993) Heart Rate Variability and Fractal Dimension during Orthostatic Challenges. Journal of Applied Physiology, 75, 2602-2612.

[22] Walleczek, J. (2000) Self-Organized Biological Dynamics and Non-Linear Control. Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9780511535338

[23] Szendro, P., Vincze, G. and Szasz, A. (2001) Pink-Noise Behaviour of Biosystems. European Biophysics Journal, 30, 227-231.

[24] Szendro, P., Vincze, G. and Szasz, A. (2001) Bio-Response on White-Noise Excitation. Electro- and Magnetobiology, 20, 215-229. https://doi.org/10.1081/JBC-100104145

[25] Szasz, O., Andocs, G. and Meggyeshazi, N. (2013) Modulation Effect in Oncothermia. Conference Papers in Medicine, 2013, Article ID: 395678. http://www.hindawi.com/archive/2013/398678/

Copyright © 2020 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.