Persistence Exponent for the Simple Diffusion Equation: The Exact Solution for Any Integer Dimension

Abstract

The persistence exponent for the simple diffusion equation ${\varphi }_{t}\left(x,t\right)=\Delta \varphi \left(x,t\right)$, with random Gaussian initial condition, has been calculated exactly using a method known as selective averaging. The probability that the value of the field at a specified spatial coordinate remains positive throughout a time t behaves as ${t}^{-{\theta }_{o}}$ for asymptotically large t. The value of ${\theta }_{o}$, calculated here for any integer dimension d, is $d/4$ for $d\le 4$ and 1 otherwise. This exact theoretical result, reported here possibly for the first time, is not in agreement with the accepted values reported in the literature for $d=1,2,3$.


Sanyal, D. (2021) Persistence Exponent for the Simple Diffusion Equation: The Exact Solution for Any Integer Dimension. Journal of Modern Physics, 12, 1401-1408. doi: 10.4236/jmp.2021.1210083.

1. Introduction

The problem addressed in the present paper is to find the persistence exponent for the simple diffusion equation ${\varphi }_{t}\left(x,t\right)=\Delta \varphi \left(x,t\right)$. The diffusion equation itself has no stochasticity; in the present problem, stochasticity enters through the random initial condition. The problem is to evaluate the probability of a certain event: that $\varphi$ at a specified location remains positive throughout the time evolution up to a certain time t, i.e. that $\varphi$ at that location does not change sign even once. For asymptotically large time this probability is characterised by an exponent ${\theta }_{o}$ called the persistence exponent. The persistence exponent for the diffusion equation has been a subject of interest to physicists, to researchers in mathematics and statistics, and to experimentalists. Interest in persistence exponents is not confined to the diffusion equation; they arise in other areas of non-equilibrium physics as well, among them the random walk, walks in a random environment with or without bias, surface growth, a diffusing particle in a random potential with a small concentration of absorbers, and the behaviour of financial markets. There are few exact calculations of persistence exponents in the literature. The case of a simple random walk in one dimension gives the exponent ${\theta }_{o}=\frac{1}{2}$. Even the calculation of persistence exponents for Gaussian processes may not be straightforward.
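The one-dimensional random-walk result can be illustrated with a short Monte Carlo estimate (a sketch; the walker counts, horizons, and the convention that the walk "persists" while it stays non-negative are our choices):

```python
import math
import random

def survival_probability(n_steps, n_walkers, seed=0):
    """Fraction of simple random walks (steps of +/-1 from the origin)
    that never go negative within n_steps steps."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_walkers):
        x = 0
        for _ in range(n_steps):
            x += 1 if rng.random() < 0.5 else -1
            if x < 0:          # the walk has changed sign
                break
        else:
            survived += 1
    return survived / n_walkers

# P(t) ~ t^{-1/2}: compare two horizons and extract the exponent
p1 = survival_probability(200, 20000)
p2 = survival_probability(800, 20000)
theta_est = math.log(p1 / p2) / math.log(800 / 200)
```

With these sample sizes the estimate fluctuates by a few percent around ${\theta }_{o}=1/2$.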

We revisit the problem of simple diffusion, which is strongly non-Markovian in nature. The problem involves the partial differential equation ${\varphi }_{t}=\Delta \varphi$ with random Gaussian initial conditions. It appears to remain an unsolved problem even though several results have been reported. The problem of diffusion may require a better understanding in the context of persistence. This article attempts an exact solution of the problem.

2. Simple Diffusion Equation, Random Initial Conditions and Persistence Exponent

The diffusion equation ${\varphi }_{t}=\Delta \varphi$ is a coarse-grained differential equation whose solution is uniquely determined by the initial condition. In the present problem the initial condition is not fixed but is drawn from a distribution: the initial value of $\varphi$ at every coordinate is chosen from a Gaussian distribution with mean 0 and variance k, and the initial values of $\varphi$ at any two coordinates are statistically independent.

In order to calculate the persistence exponent ${\theta }_{o}$ we have to calculate the probability that the field $\varphi$ at a specified coordinate does not flip sign even once throughout a time t. This probability ${\mathcal{P}}^{+}\left(t\right)$ of $\varphi$ always remaining positive behaves in the limit of asymptotically large time as ${\mathcal{P}}^{+}\left(t\right)\sim {t}^{-{\theta }_{o}}$. This is true for a non-stationary process such as the present one. Throughout this article the position coordinate x is a vector in d-dimensional space. The moments of the initial-condition distribution described above are given by

$〈\varphi \left(x,0\right)〉=0$ (1-a)

$〈\varphi \left({x}_{1},0\right)\varphi \left({x}_{2},0\right)〉=k{\delta }^{\left(d\right)}\left({x}_{1}-{x}_{2}\right)$ (1-b)

where k is the variance of the distribution. The solution for the diffusion equation may be written in terms of the initial condition as

$\varphi \left(x,t\right)=\int {\text{d}}^{d}{x}^{\prime }G\left(x-{x}^{\prime },t\right)\varphi \left({x}^{\prime },0\right)$ (2)

where $G\left(x,t\right)={\left(4\pi t\right)}^{-d/2}\mathrm{exp}\left(-{x}^{2}/4t\right)$. The plan for the evaluation of the exponent is as follows. First, we calculate the probability of $\varphi$ attaining a specific final value $\beta$ at a certain $x={x}_{o}$ starting from a definite initial value $\alpha$ of $\varphi$ at $x={x}_{o}$; to evaluate it we use the method of selective averaging. The paths that take the initial value $\alpha$ to the final value $\beta$ include those along which $\varphi \left({x}_{o}\right)$ flips sign at least once during the time evolution, and the probability of such paths has to be subtracted out. Finally, there is an integration over the final value $\beta$ from 0 to $\infty$, followed by an integration over $\alpha$ from 0 to $\infty$.
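The solution formula (2) can be exercised numerically. The sketch below (d = 1; the grid spacing and cutoff are arbitrary choices) implements the kernel sum and checks two known properties of $G$: it integrates to 1, and evolving $G\left(\cdot ,{t}_{0}\right)$ for a further time t reproduces $G\left(\cdot ,{t}_{0}+t\right)$ (the semigroup property):

```python
import math

def heat_kernel(x, t):
    """G(x, t) = (4 pi t)^(-1/2) exp(-x^2 / 4t), the d = 1 kernel in (2)."""
    return math.exp(-x * x / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

def evolve(phi0, xs, h, t):
    """phi(x, t) = integral dx' G(x - x', t) phi(x', 0), as a lattice sum."""
    return [h * sum(heat_kernel(x - xp, t) * p0 for xp, p0 in zip(xs, phi0))
            for x in xs]

h = 0.1
xs = [-20.0 + h * i for i in range(401)]       # grid on [-20, 20]

# the kernel integrates to 1 (conservation of the initial "mass")
t = 2.0
mass = h * sum(heat_kernel(x, t) for x in xs)

# semigroup check: evolving G(., t0) for a time t yields G(., t0 + t)
t0 = 1.0
phi0 = [heat_kernel(x, t0) for x in xs]
phi_t = evolve(phi0, xs, h, t)
center = 200                                   # xs[200] = 0.0
semigroup_err = abs(phi_t[center] - heat_kernel(xs[center], t0 + t))
```

Both checks hold to well below the quadrature tolerance on this grid.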

Selective averaging means averaging over the initial field $\varphi \left(x,0\right)$ everywhere except at $x={x}_{o}$. In other words, the averaging is done over all initial configurations such that $\varphi$ at $x={x}_{o}$ is kept fixed at $\alpha$ (say), i.e. $\varphi \left({x}_{o},0\right)=\alpha$, while for $x\ne {x}_{o}$ the field varies according to the Gaussian distribution. In this paper the selective distribution, denoted by the subscript s, is characterized by the moments

${〈\varphi \left(x,0\right)〉}_{s}=\alpha {\delta }^{\left(d\right)}\left(x-{x}_{o}\right)$ (3-a)

${〈\varphi \left({x}_{1},0\right)\varphi \left({x}_{2},0\right)〉}_{s}=\left\{k+\left[{\alpha }^{2}-k\right]{\delta }^{\left(d\right)}\left({x}_{1}-{x}_{o}\right)\right\}{\delta }^{\left(d\right)}\left({x}_{1}-{x}_{2}\right)$ (3-b)

It may be verified from (3-a), (3-b) that if $x\ne {x}_{o}$ , ${x}_{1}\ne {x}_{o}$ , ${x}_{2}\ne {x}_{o}$ , we get (1-a), (1-b) and for $x={x}_{1}={x}_{2}={x}_{o}$ , (3-a), (3-b) give $\alpha$ , ${\alpha }^{2}$ as expected. Using (3-a), (3-b), we can calculate the moments of the random variable $\varphi \left({x}_{o},t\right)$ ,

${〈\varphi \left({x}_{o},t\right)〉}_{s}={\left(4\pi t\right)}^{-d/2}\alpha$ (4)

$\begin{array}{c}{〈{\varphi }^{2}\left({x}_{o},t\right)〉}_{s}=\int {\text{d}}^{d}{{x}^{\prime }}_{1}{\text{d}}^{d}{{x}^{\prime }}_{2}{\left(4\pi t\right)}^{-d}\mathrm{exp}\left[-\frac{{\left({x}_{o}-{{x}^{\prime }}_{1}\right)}^{2}}{4t}\right]\\ \text{\hspace{0.17em}}\text{ }×\mathrm{exp}\left[-\frac{{\left({x}_{o}-{{x}^{\prime }}_{2}\right)}^{2}}{4t}\right]{〈\varphi \left({{x}^{\prime }}_{1},0\right)\varphi \left({{x}^{\prime }}_{2},0\right)〉}_{s}\\ =k\int {\text{d}}^{d}{{x}^{\prime }}_{1}{\left(4\pi t\right)}^{-d}\mathrm{exp}\left[-\frac{{\left({x}_{o}-{{x}^{\prime }}_{1}\right)}^{2}}{2t}\right]-\frac{k}{{\left(4\pi t\right)}^{d}}+\frac{{\alpha }^{2}}{{\left(4\pi t\right)}^{d}}\end{array}$ (5)

While evaluating the second order moment, we have used the relation in (3-b). Hence the mean and the variance of the distribution for $\varphi \left({x}_{o},t\right)$ , represented by $\mu$ and ${\sigma }^{2}$ respectively, are

$\mu ={〈\varphi \left({x}_{o},t\right)〉}_{s}={\left(4\pi t\right)}^{-d/2}\alpha$ (6-a)

$\begin{array}{c}{\sigma }^{2}={〈{\varphi }^{2}\left({x}_{o},t\right)〉}_{s}-{〈\varphi \left({x}_{o},t\right)〉}_{s}^{2}\\ =k{\left(4\pi \right)}^{-d}{2}^{\left(d/2-1\right)}{k}_{d}\Gamma \left(d/2\right){t}^{-d/2}-k{\left(4\pi t\right)}^{-d}\end{array}$ (6-b)

In the above equation ${k}_{d}$ denotes the angular integration in d dimensional space while $\Gamma$ represents the usual Gamma function. It may be mentioned that $\varphi \left(x,t\right)$ in (2) is Gaussian irrespective of whether $\varphi \left({x}^{\prime },0\right)$ , the initial Gaussian field, is correlated or not. In the present case, though, the initial field is uncorrelated and $\varphi \left(x,t\right)$ can be proved to be Gaussian using characteristic functions in probability theory  . It may be noted that the $\delta$ function distribution is the limiting case of a Gaussian distribution. The expression for the conditional probability for starting at $\alpha$ and being between $\beta$ and $\beta +\text{d}\beta$ at time ${t}_{1}$ is

$P\left(\beta |\alpha \right)\text{d}\beta =\frac{1}{\sqrt{2\pi }\sigma }\mathrm{exp}\left[\frac{-{\left(\beta -\mu \right)}^{2}}{2{\sigma }^{2}}\right]\text{d}\beta$ (7)

where $\mu =\mu \left(\alpha ,{t}_{1}\right)$ and $\sigma =\sigma \left({t}_{1}\right)$. This probability includes all the paths that start from $\alpha$ and end between $\beta$ and $\beta +\text{d}\beta$ at time ${t}_{1}$, including ones that flip en route, as depicted in Figure 1. Figure 1 is the projection of the trajectory of the system in the infinite-dimensional $\Phi -t$ space onto the $\varphi \left({x}_{o}\right)-t$ plane.
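The leading term of the variance (6-b) is $k\int {G}^{2}\left(x,t\right){\text{d}}^{d}x$; for d = 1, with ${k}_{1}=2$ and $\Gamma \left(1/2\right)=\sqrt{\pi }$, it reduces to $k{\left(8\pi t\right)}^{-1/2}$. This reduction can be checked numerically (a sketch; grid spacing and cutoff are arbitrary choices):

```python
import math

def heat_kernel(x, t):
    """G(x, t) = (4 pi t)^(-1/2) exp(-x^2 / 4t) for d = 1."""
    return math.exp(-x * x / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

# Leading term of (6-b): sigma^2 -> k * integral G(x, t)^2 dx, which for
# d = 1 equals k * (8 pi t)^(-1/2); the lattice sum below is its Riemann
# approximation.
k, h, t = 2.0, 0.05, 3.0
xs = [h * j for j in range(-1200, 1201)]       # grid on [-60, 60]
variance_lattice = k * h * sum(heat_kernel(x, t) ** 2 for x in xs)
variance_pred = k / math.sqrt(8.0 * math.pi * t)
```

The lattice sum and the closed form agree to quadrature accuracy.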

$A\left(0,\alpha \right)$ represents the starting point and $B\left({t}_{1},\beta \right)$ the destination. AB represents a path along which $\varphi \left({x}_{o}\right)$ does not flip, and ADB (blue curve) is a typical path along which $\varphi \left({x}_{o}\right)$ flips. Such paths have to be excluded. The probability of reaching from A to the neighborhood of B at asymptotically large time ${t}_{1}$ without flipping is given by

${P}^{+}\left(\beta |\alpha \right)\text{d}\beta =P\left(\beta |\alpha \right)\text{d}\beta -P\left(\beta |-\alpha \right)\left(1+O\left({t}^{-1}\right)\right)\text{d}\beta$ (8)

The second term represents the probability of paths such as $\stackrel{¯}{A}DB$ originating from $\stackrel{¯}{A}\left(0,-\alpha \right)$ and terminating in the neighborhood of B at ${t}_{1}$. (8) is not to be confused with the method of images; it follows a very different logic in the present case and holds good asymptotically. To prove (8) we will show that there is a one-to-one mapping from a path $A\to B$ to a path $\stackrel{¯}{A}\to B$ and that the probabilities of two such paths converge asymptotically. This is explained in 1) below. Further, to justify (8) we have to show that the "number" of paths $A\to B$ that flip and the "number" of paths $\stackrel{¯}{A}\to B$ converge asymptotically. This is done in 2). In the subsequent analysis we will consider a d-dimensional lattice (the lattice spacing being infinitesimally small) instead of the continuum, for the sake of notational convenience only. The reason for (8) follows.

Figure 1. Projection of $\Phi -t$ trajectory onto the $\varphi \left({x}_{o}\right)-t$ plane.

1) Consider an initial configuration at A of Figure 1 given by ${X}_{AB}=\left\{\cdots ,{\alpha }_{1},\alpha ,{\alpha }_{2},\cdots \right\}$, where ${\alpha }_{1},{\alpha }_{2},\cdots$ are the initial values of $\varphi$ at coordinates $x\ne {x}_{o}$, and whose path takes the initial $\varphi \left({x}_{o}\right)=\alpha$ to B. It may then be concluded from (2) that ${X}_{\stackrel{¯}{A}B}=\left\{\cdots ,f{\alpha }_{1},-\alpha ,f{\alpha }_{2},\cdots \right\}$, with $f=\frac{\beta +{\left(4\pi t\right)}^{-d/2}\alpha }{\beta -{\left(4\pi t\right)}^{-d/2}\alpha }$, is a configuration at $\stackrel{¯}{A}$ which takes the initial $\varphi \left({x}_{o}\right)=-\alpha$ to B. Hence there is a one-to-one mapping of paths from $A\to B$ to those from $\stackrel{¯}{A}\to B$. It may be underlined that $f\to 1$ as $t\to \infty$, which implies that the probabilities of the two paths approach each other asymptotically.
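The mapping in 1) can be verified directly from (2) on a one-dimensional lattice. In the sketch below, the fixed site carries the $\delta$-function weight $\alpha /h$ (our assumed discretization of (3-a)) and the off-site values are an arbitrary deterministic sample:

```python
import math

def heat_kernel(x, t):
    return math.exp(-x * x / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

def phi_at_origin(center_value, others, h, t):
    """phi(x_o, t) from (2) on a 1-d lattice: the site x_o = 0 carries the
    delta-function weight center_value / h; `others` holds (x, phi(x, 0))."""
    tail = h * sum(heat_kernel(x, t) * g for x, g in others)
    return heat_kernel(0.0, t) * center_value + tail

h, t, alpha = 0.1, 50.0, 1.0
# any off-site configuration works; this one is just a deterministic sample
others = [(h * j, 1.0 + 0.5 * math.cos(j)) for j in range(-300, 301) if j != 0]

beta = phi_at_origin(alpha, others, h, t)          # path A -> B ends at beta
a = alpha / math.sqrt(4.0 * math.pi * t)           # (4 pi t)^{-d/2} alpha
f = (beta + a) / (beta - a)                        # the mapping factor of 1)
mapped = [(x, f * g) for x, g in others]
beta_mapped = phi_at_origin(-alpha, mapped, h, t)  # path Abar -> B
```

Rescaling the off-site values by f and flipping $\alpha \to -\alpha$ reproduces the same final value $\beta$, and f is already close to 1 at this t.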

2) In this part we address the fact that in the asymptotically large time limit it is a very good approximation to say that there is a one-to-one correspondence between the paths from A that flip and the paths $\stackrel{¯}{A}\to B$. This approximation is controlled, for it improves with increasing t. To see this, consider a point $C\left({t}_{2},\beta \right)$ (not shown in Figure 1) with ${t}_{2}>{t}_{1}$. Let ${Y}_{AB}=\left\{\cdots ,{\gamma }_{1},\alpha ,{\gamma }_{2},\cdots \right\}$ be the initial configuration corresponding to the path ADB (the blue path in Figure 1), where ${\gamma }_{1},{\gamma }_{2},\cdots$ are the initial values of $\varphi$ at coordinates $x\ne {x}_{o}$; this path crosses zero while reaching B. It can be shown that ${Y}_{AC}=\left\{\cdots ,{f}_{1}{\gamma }_{1},\alpha ,{f}_{1}{\gamma }_{2},\cdots \right\}$, with ${f}_{1}={\left(\frac{{t}_{2}}{{t}_{1}}\right)}^{d/2}\frac{\beta -{\left(4\pi {t}_{2}\right)}^{-d/2}\alpha }{\beta -{\left(4\pi {t}_{1}\right)}^{-d/2}\alpha }$, is the corresponding initial configuration for a path $A\to C$. The exact expression for ${f}_{1}$ contains a coordinate-dependent term whose leading-order behavior for large t is 1. Since ${t}_{2}>{t}_{1}$, we have ${f}_{1}>1$ for sufficiently large ${t}_{1}$. Let the time coordinate at D be ${t}_{D}$, so that $\varphi \left({x}_{o},{t}_{D}\right)=0$ for the path ADB. One may then conclude from (2) that $\varphi \left({x}_{o},{t}_{D}\right)<0$ for the initial configuration ${Y}_{AC}$, and hence that the path corresponding to ${Y}_{AC}$ must have flipped at a time earlier than ${t}_{D}$. Therefore, if a path from $A\to B$ flips, the corresponding path from $A\to C$ flips at an earlier time. Since ${t}_{2}>{t}_{1}$, the "number" of paths flipping while going from $A\to C$ is greater than the number flipping from $A\to B$. Thus the "number" of paths from $A\to B$ that flip is a fraction ${f}_{2}$ of those from $\stackrel{¯}{A}\to B$, where ${f}_{2}=1-O\left({t}_{1}^{-a}\right)$ for large ${t}_{1}$, a being some positive number.

On account of 1) and 2) we say that the probability of the paths (like ADB in Figure 1) that flip while reaching B is, in the large time limit, given by $P\left(\beta |-\alpha \right){h}_{\text{correction}}$, where ${h}_{\text{correction}}=1+O\left({t}^{-b}\right)$ with $b=1$, obtained from a Taylor expansion in ${t}^{-1}$. In principle the coefficient of ${t}^{-b}$ may be a function of $\beta$. When integrating over $\beta$ (as will be done later) the contribution to the integral comes from the vicinity of $\beta =-\mu \sim {t}^{-d/2}$, which is vanishingly small in the asymptotic limit; also $\text{d}\beta \sim \sigma \sim {t}^{-d/4}$. The coefficient is therefore Taylor expanded about $\beta =0$ and only the zeroth-order term, i.e. the term independent of $\beta$, is retained. So the probability of $\varphi \left({x}_{o}\right)$ not changing sign while reaching the neighborhood ($\text{d}\beta$) of B is, for asymptotically large time,

$\begin{array}{l}\underset{t\to \infty }{lim}\left[P\left(\beta |\alpha \right)\text{d}\beta -P\left(\beta |-\alpha \right){h}_{\text{correction}}\text{d}\beta \right]\\ =P\left(\beta |\alpha \right)\text{d}\beta -P\left(\beta |-\alpha \right)\left(1+O\left({t}^{-1}\right)\right)\text{d}\beta \end{array}$ (9)

This leads us to (8). The final $\beta$ may have any value as long as it remains positive. The probability of $\varphi \left({x}_{o}\right)$ starting from $\alpha$ and reaching a final positive value without ever changing sign is

${ℙ}^{+}\left(\alpha \right)={\int }_{0}^{\infty }\text{d}\beta {P}^{+}\left(\beta |\alpha \right)$ (10)

We now calculate (10) for asymptotically large t. In this limit the second term on the R.H.S. of (6-b) can be neglected. Further, $\frac{{\mu }^{2}}{{\sigma }^{2}}\sim {\alpha }^{2}{t}^{-d/2}$, so for ${\alpha }^{2}\ll {t}^{d/2}$ we have ${\mu }^{2}/{\sigma }^{2}\ll 1$. The expression (10) is evaluated using the identity

${\int }_{0}^{\infty }\text{d}x\,\mathrm{exp}\left(\frac{-{x}^{2}}{4\beta }-\gamma x\right)=\sqrt{\pi \beta }\,\mathrm{exp}\left(\beta {\gamma }^{2}\right)\left[1-\mathrm{erf}\left(\gamma \sqrt{\beta }\right)\right]$ (11)
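Identity (11) can be spot-checked numerically with stdlib tools (a sketch using midpoint quadrature; the parameter values are arbitrary):

```python
import math

def rhs(beta, gamma):
    """Closed form on the right-hand side of (11)."""
    return (math.sqrt(math.pi * beta) * math.exp(beta * gamma ** 2)
            * (1.0 - math.erf(gamma * math.sqrt(beta))))

def lhs(beta, gamma, upper=50.0, n=100000):
    """Midpoint-rule evaluation of the integral on the left of (11)."""
    dx = upper / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        total += math.exp(-x * x / (4.0 * beta) - gamma * x)
    return total * dx

# check a few (beta, gamma) pairs, including a negative gamma
err = max(abs(lhs(b, g) - rhs(b, g))
          for b, g in [(0.5, 1.0), (2.0, 0.3), (1.0, -0.2)])
```

The quadrature reproduces the closed form to well below the stated tolerance.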

Evaluation leads to a sum of two terms—one is proportional to ${t}^{-d/4}$ and the other is proportional to ${t}^{-1}$ . Hence we obtain

${ℙ}^{+}\left(\alpha \right)\sim \alpha {t}^{-d/4}$ (12)

for $d\le 4$ . In arriving at the above result the asymptotic expansion of “error function” erf has been used for small argument. Finally, the expression for ${\mathcal{P}}^{+}\left(t\right)$ is obtained by integrating $\alpha$ over a Gaussian distribution.

${\mathcal{P}}^{+}\left(t\right)={\int }_{0}^{\infty }\text{d}\alpha {ℙ}^{+}\left(\alpha \right)Q\left(\alpha \right)$ (13)

where $Q\left(\alpha \right)$ is the Gaussian distribution for the initial value $\varphi \left({x}_{o},0\right)=\alpha$ with variance k, as mentioned at the beginning. If $k\ll {t}^{d/2}$, it may be concluded from (12) and (13) that ${\mathcal{P}}^{+}\left(t\right)\sim {t}^{-d/4}$ or ${t}^{-1}$ depending on whether $d\le 4$ or not. This gives ${\theta }_{o}=d/4$ or 1 respectively.
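The $\alpha$-integration in (13) contributes only a constant prefactor: with $Q\left(\alpha \right)$ a mean-zero Gaussian of variance k, ${\int }_{0}^{\infty }\alpha Q\left(\alpha \right)\text{d}\alpha =\sqrt{k/2\pi }$ is finite, so ${\mathcal{P}}^{+}\left(t\right)$ inherits the ${t}^{-d/4}$ decay of (12). A quick quadrature check (the standard Gaussian normalization of Q is our assumption):

```python
import math

def q_gauss(alpha, k):
    """Mean-zero Gaussian density of variance k (assumed form of Q)."""
    return math.exp(-alpha * alpha / (2.0 * k)) / math.sqrt(2.0 * math.pi * k)

def first_moment(k, n=100000):
    """integral_0^infinity alpha Q(alpha) d alpha by the midpoint rule."""
    upper = 12.0 * math.sqrt(k)          # 12 standard deviations is plenty
    da = upper / n
    return sum((i + 0.5) * da * q_gauss((i + 0.5) * da, k)
               for i in range(n)) * da

k = 1.7
numeric = first_moment(k)
exact = math.sqrt(k / (2.0 * math.pi))   # closed form of the alpha-integral
```

The numerical moment matches the closed form to quadrature accuracy.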

3. Result and Conclusion

In the previous section an exact calculation has been carried out to determine the probability ${\mathcal{P}}^{+}\left(t\right)$ of the sign of the field $\varphi$ remaining positive throughout an asymptotically large time t. The probability is ${\mathcal{P}}^{+}\left(t\right)\sim {t}^{-d/4}$. Hence the persistence exponent is ${\theta }_{o}=d/4$, valid for any integer dimension $d\le 4$. The exponents for $d=1,2,3$ are 0.25, 0.50, 0.75 respectively.

The result may be experimentally verified for a system initially at thermal equilibrium at a temperature T. The equilibrium is then disturbed in a suitable manner. The time evolution of the coarse-grained temperature at any point satisfies the simple diffusion equation; hence this time evolution can be studied to find the persistence exponent.

The answer for the exponent ${\theta }_{o}$ obtained in this paper is in disagreement with all the papers cited in the beginning. The first results for the persistence exponent for the diffusion problem were published back to back. Those papers used a two-time correlation function and explicitly applied the Independent Interval Approximation (IIA) to get to the answer. The two-time correlation function is not suitable here, and neither is the IIA, which is a Markovian approximation. Further, the papers used Monte Carlo simulation to confirm the result, and the Monte Carlo method appears to be unsuitable for this problem. Hence all the papers that reproduce those results are not expected to give the correct answer. Subsequent authors have likewise defined a correlation function $C\left(T\right)$ to carry the calculation forward. Consider also the work in which Kac polynomials were used to obtain the "exact exponent" in 2d, in perfect agreement with the earlier papers; in the course of that calculation it is used that the zero-crossing property is governed by the covariance $c\left(T\right)=\text{sech}\left(\frac{T}{2}\right)$ of the stationary Gaussian process, i.e. the diffusion equation with time redefined. Similarly, a correlator in time ${F}_{\epsilon }\left(\tau -{\tau }^{\prime }\right)$ has been used elsewhere.

The point is that the covariance/correlator/correlation function is a misleading quantity for this problem, for the following reasons. The model presented in this paper has randomness only in the initial condition; once the system starts evolving there is no further randomness, and the system evolves in accordance with the kernel in (2). When and where $\varphi$ will flip is encoded in the initial condition, and the probability of each path is uniquely determined by the probability of the initial condition; herein lies the problem with the covariance/correlator. The covariance function imposes stochasticity on the problem throughout the entire time evolution. One then has a different model with the same correlation function but with no unique dependence of the probability of a path on the initial condition; it also makes the problem Markovian. Hence all the previous results are in perfect agreement with one another, even though the calculated exponent differs from the actual value: the value of the exponent does not depend on the correlation function alone but on other details of the model too. Further, there even appears to be experimental support for the earlier results. However, the experimental setup concerned does not represent the diffusion model described in this paper; it satisfies the approximations of the previous papers, hence the agreement with their result.

Acknowledgements

The author would like to thank CSIR, India for Fellowship during the course of the work (2004) at IACS, India.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Majumdar, S.N., Sire, C., Bray, A.J. and Cornell, S.J. (1996) Physical Review Letters, 77, 2867-2870. https://doi.org/10.1103/PhysRevLett.77.2867
[2] Derrida, B., Hakim, V. and Zeitak, R. (1996) Physical Review Letters, 77, 2871-2874. https://doi.org/10.1103/PhysRevLett.77.2871
[3] Ehrhardt, G.C.M.A. and Bray, A.J. (2002) Physical Review Letters, 88, Article ID: 070601. https://doi.org/10.1103/PhysRevLett.88.070601
[4] Hilhorst, H.J. (2000) Physica A: Statistical Mechanics and Its Applications, 277, 124-126. https://doi.org/10.1016/S0378-4371(99)00509-9
[5] Schehr, G. and Majumdar, S.N. (2008) Journal of Statistical Physics, 132, 235-273. https://doi.org/10.1007/s10955-008-9574-3
[6] Poplavskyi, M. and Schehr, G. (2018) Physical Review Letters, 121, Article ID: 150601. https://doi.org/10.1103/PhysRevLett.121.150601
[7] Barbier-Chebbah, A., Benichou, O. and Voituriez, R. (2020) Physical Review E, 102, Article ID: 062115. https://doi.org/10.1103/PhysRevE.102.062115
[8] Aurzada, F. and Simon, T. (2015) Persistence Probabilities and Exponents. In: Lévy Matters V, Springer, Berlin, 183-224. https://doi.org/10.1007/978-3-319-23138-9_3
[9] Dembo, A. and Mukherjee, S. (2015) The Annals of Probability, 43, 85-118. https://doi.org/10.1214/13-AOP852
[10] Wong, G.P., Mair, R.W., Walsworth, R.L. and Cory, D.G. (2001) Physical Review Letters, 86, 4156-4159. https://doi.org/10.1103/PhysRevLett.86.4156
[11] Schwarz, J.M. and Maimon, R. (2001) Physical Review E, 64, Article ID: 016120. https://doi.org/10.1103/PhysRevE.64.016120
[12] Le Doussal, P., Monthus, C. and Fisher, D.S. (1999) Physical Review E, 59, 4795-4840. https://doi.org/10.1103/PhysRevE.59.4795
[13] Krug, J., Kallabis, H., Majumdar, S.N., Cornell, S.J., Bray, A.J. and Sire, C. (1997) Physical Review E, 56, 2702-2712. https://doi.org/10.1103/PhysRevE.56.2702
[14] Le Doussal, P. (2009) Journal of Statistical Mechanics: Theory and Experiment, No. 7, P07032. https://doi.org/10.1088/1742-5468/2009/07/P07032
[15] Constantin, M. and Das Sarma, S. (2005) Physical Review E, 72, Article ID: 051106. https://doi.org/10.1103/PhysRevE.72.051106
[16] Mathews, J. and Walker, R.L. (1970) Mathematical Methods of Physics. W.A. Benjamin, New York.
[17] Chandrasekhar, S. (1989) Stochastic, Statistical and Hydromagnetic Problems in Physics and Astronomy. Selected Papers, Vol. 3.
[18] Gradshteyn, I.S. and Ryzhik, I.M. (2014) Table of Integrals, Series, and Products. Academic Press, Cambridge.
[19] Kac, M. (1943) Bulletin of the American Mathematical Society, 49, 314-320. https://doi.org/10.1090/S0002-9904-1943-07912-8