Heat Kernel Estimates on Simple Random Walks and On-Diagonal Upper Bounds

Abstract

We provide several estimates for the heat kernel of the two-dimensional simple random walk. Additionally, we offer an estimate for the heat kernel of high-dimensional random walks, showing that the heat kernel decays rapidly in higher dimensions. We also compute the constants involved in the estimate for the one-dimensional heat kernel. Furthermore, we discuss the general case of on-diagonal upper bounds for the heat kernel.

Zuo, R., Yan, Y., Zhu, Z., Yao, L. and Han, Q. (2024) Heat Kernel Estimates on Simple Random Walks and On-Diagonal Upper Bounds. Journal of Applied Mathematics and Physics, 12, 3613-3625. doi: 10.4236/jamp.2024.1210216.

1. Introduction

The study of heat kernel estimates on graphs and manifolds is fundamental in geometric analysis, with crucial implications for random walks and diffusion processes. This paper presents precise estimates for the heat kernel associated with random walks on graphs of varying dimensions. We focus particularly on the 2-dimensional case, establishing key insights into the long-term behavior of these stochastic processes.

Extending our analysis to higher-dimensional random walks, we find that the heat kernel converges rapidly, marking a significant departure from lower-dimensional dynamics. This observation builds on established results from Barlow et al. [1] [2] and Grigor’yan [3] [4], which address the complex behavior of heat kernels across different geometries.

Additionally, we compute constants related to the 1-dimensional heat kernel estimates, enhancing our understanding of how dimension affects kernel decay rates. This work complements the findings of Hambly and Kumagai [5], who examined the influence of space complexity on heat kernel behavior.

We also explore on-diagonal estimates, which are essential for characterizing the asymptotics of heat kernels. These estimates, informed by results from Coulhon and Grigor'yan [6] [7], provide a cohesive framework for analyzing random walks on diverse graph structures.

An outline of this paper is as follows. In Section 2, we collect the basic definitions of weighted graphs and some lemmas that play an important role in the proofs of Section 3; we also list some estimates that have been proved in [3] [8] and will be improved in Section 3. In Section 3.1, we first determine precise constants in the estimate for a one-dimensional simple random walk; we then give several estimates for a two-dimensional simple random walk, by a method that generalizes to higher-dimensional cases. In Section 3.2, we provide an upper-bound estimate for the heat kernel under certain restrictions on the graph.

2. Preliminaries

We introduce some basic definitions and lemmas that will be used later; see [3] for the details of this section.

Definition 2.1 (Weight)

Let $(V,E)$ be a countable, undirected graph. A collection of non-negative numbers $\{\mu_{xy}\}_{x,y\in V}$, defined on $V\times V$, is called a weight on $(V,E)$ if it satisfies $\mu_{xy}=\mu_{yx}$, and $\mu_{xy}>0$ if and only if $x\sim y$. The pair $(V,\mu)$ is then referred to as a weighted graph. We define the weight of any vertex $x$ as $\mu(x)=\sum_{y\in V,\,y\sim x}\mu_{xy}$.

Definition 2.2 (Transition function)

Let $(V,\mu)$ be a weighted graph, and let $x,y$ be two arbitrary vertices in $V$. Define
\[ P(x,y) = \frac{\mu_{xy}}{\mu(x)}, \]
which forms a Markov kernel. The transition function is then defined by $P_0(x,y) = \mathbf{1}_{\{x=y\}}$ and
\[ P_n(x,y) = \sum_{z\in V} P_{n-1}(x,z)\,P(z,y). \]

From this definition, it follows directly that for any weighted graph $(V,\mu)$, we have
\[ P(x,y)\,\mu(x) = P(y,x)\,\mu(y), \]

where $x,y\in V$. Consequently, for any positive integer $n$, we also have
\[ P_n(x,y)\,\mu(x) = P_n(y,x)\,\mu(y). \]
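To make the definitions concrete, here is a minimal numeric sketch (the 4-vertex graph and its weights are our own illustrative choices, not taken from the paper) that builds the Markov kernel from symmetric edge weights and verifies the reversibility identity:

import numpy as np

# Symmetric edge weights mu_xy on an illustrative 4-vertex graph (0 = no edge).
mu = np.array([[0.0, 2.0, 1.0, 0.0],
               [2.0, 0.0, 3.0, 1.0],
               [1.0, 3.0, 0.0, 2.0],
               [0.0, 1.0, 2.0, 0.0]])

w = mu.sum(axis=1)                 # vertex weights mu(x) = sum_{y ~ x} mu_xy
P = mu / w[:, None]                # Markov kernel P(x, y) = mu_xy / mu(x)

Pn = np.linalg.matrix_power(P, 5)  # transition function P_5
# Reversibility: P_n(x, y) mu(x) = P_n(y, x) mu(y), i.e. the matrix
# of entries mu(x) P_n(x, y) is symmetric.
assert np.allclose(Pn * w[:, None], (Pn * w[:, None]).T)
print("P_5(x, y) mu(x) = P_5(y, x) mu(y) verified.")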

Definition 2.3 (Laplace operator)

Let $(V,\mu)$ be a weighted graph. The Markov operator is defined by
\[ Pf(x) = \sum_{y} P(x,y)\,f(y), \]
where $f$ is a function on $V$ and $x$ is any vertex.

Given a finite subset $\Omega\subset V$, the Dirichlet Laplace operator on $\Omega$ is defined as
\[ L_\Omega f(x) = f(x) - \sum_{y} P(x,y)\,f(y), \]
where $f$ is a function on $\Omega$, and $f(y)$ is set to zero whenever $y\notin\Omega$.

Definition 2.4 (Heat kernel)

Let $(V,\mu)$ be a weighted graph. The heat kernel of $(V,\mu)$ is defined as
\[ p_n(x,y) = \frac{P_n(x,y)}{\mu(y)}, \]

where $x,y\in V$. Furthermore, by the identity $P_n(x,y)\,\mu(x) = P_n(y,x)\,\mu(y)$, the heat kernel is symmetric:
\[ p_n(x,y) = p_n(y,x). \]

Lemma 2.5 (Stirling’s formula)

For any positive integer $n$, the factorial of $n$ is given by

\[ n! = \sqrt{2\pi n}\; n^{n}\, e^{-n+\xi_n}, \]

where $\xi_n$ is some positive number such that $\frac{1}{12n+1} \le \xi_n \le \frac{1}{12n}$. Indeed, this implies the famous estimate of $n!$ as $n\to\infty$:

\[ n! \sim \sqrt{2\pi n}\,\left(\frac{n}{e}\right)^{n}. \]
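As a quick sanity check of the bounds on $\xi_n$ (an illustrative sketch; the tested range of $n$ is arbitrary):

import math

# xi_n = log n! - log( sqrt(2 pi n) (n/e)^n ) must lie in [1/(12n+1), 1/(12n)].
for n in range(1, 200):
    xi = math.lgamma(n + 1) - (0.5 * math.log(2 * math.pi * n)
                               + n * math.log(n) - n)
    assert 1 / (12 * n + 1) <= xi <= 1 / (12 * n), n
print("Stirling bounds verified for n = 1..199.")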

Theorem 2.6 (One-dimensional random walk)

Let $(V,\mu)$ be $\mathbb{Z}$ equipped with the simple weight (i.e., $\mu_{xy}=1$ whenever $|x-y|=1$). We have $p_n(x,y) = p_n(x-z,\,y-z)$ for all integers $x,y$ and $z$. Also, we have

\[ p_n(0,x) = \begin{cases} \dfrac{1}{2^{n+1}}\dbinom{n}{\frac{n+x}{2}}, & \text{if } |x|\le n \text{ and } x\equiv n \pmod 2,\\[6pt] 0, & \text{otherwise}. \end{cases} \]
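The closed form can be cross-checked against a direct convolution of the walk. The following sketch (our own illustrative check; recall $\mu(y) = 2$ at every vertex, so $p_n = P_n/2$) does so for a small $n$:

from math import comb

def p_closed(n, x):
    """Heat kernel p_n(0, x) of Theorem 2.6."""
    if abs(x) > n or (n - x) % 2:
        return 0.0
    return comb(n, (n + x) // 2) / 2 ** (n + 1)

def p_convolved(n):
    """p_n(0, .) by convolving n single steps of the walk."""
    dist = {0: 1.0}                                  # P_0, a point mass at 0
    for _ in range(n):
        nxt = {}
        for x, q in dist.items():                    # step left/right w.p. 1/2
            for y in (x - 1, x + 1):
                nxt[y] = nxt.get(y, 0.0) + q / 2
        dist = nxt
    return {x: q / 2 for x, q in dist.items()}       # divide by mu(x) = 2

n = 12
assert all(abs(p_closed(n, x) - q) < 1e-12 for x, q in p_convolved(n).items())
print("Closed form matches convolution for n =", n)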

Theorems 2.7 and 2.8 have been established in the literature ([3]), demonstrating the behavior of the heat kernel under specific conditions.

Theorem 2.7 (Estimate of $p_n(0,x)$, [3])

For every positive integer $n$ and for all $x$ satisfying $|x|\le n$ and $x\equiv n\pmod 2$, the following estimate holds:

\[ \frac{C_2}{\sqrt{n}}\, e^{-(\log 2)\frac{x^{2}}{n}} \;\le\; p_n(0,x) \;\le\; \frac{C_1}{\sqrt{n}}\, e^{-\frac{x^{2}}{2n}}, \]

where $C_1, C_2$ are some positive constants.

Theorem 2.8 (On-diagonal upper bounds for the heat kernel)

Let $(V,\mu)$ be an infinite, connected weighted graph that satisfies the following conditions:

\[ 1 \le \mu_{xy} \le M \ \text{ for all } x\sim y, \qquad \deg(x) \le D \ \text{ for all } x\in V, \quad (1) \]

for some constants $M$ and $D$. Furthermore, if $(V,\mu)$ satisfies the Faber-Krahn inequality with the function

\[ \Lambda(s) = c\, s^{-\frac{1}{\alpha}} \]

for constants $\alpha, c > 0$, then the following estimate holds:

\[ p_n(x,y) \le C\, n^{-\alpha} \]

for all $x,y\in V$ and $n\ge 1$, where $C$ is a constant depending on $\alpha, c, M, D$.

3. Some Estimates for Heat Kernel

3.1. Heat Kernel Estimates on Simple Random Walk

First, we compute explicit values of the constants $C_1$ and $C_2$ in Theorem 2.7, providing a more accurate estimate of $p_n(0,x)$ for the case of a one-dimensional simple random walk.

Theorem 3.1 (Better estimate of $p_n(0,x)$)

For every positive integer $n$ and for all $x$ such that $|x|\le n$ and $x\equiv n\pmod 2$, the following estimate holds:

\[ \frac{e^{-\frac{7}{78}}}{\sqrt{2\pi(n+1)}}\; e^{-\frac{(\log 2)\,x^{2}}{n+1}} \;\le\; p_n(0,x) \;\le\; \frac{e}{\sqrt{8\pi(n+1)}}\; e^{-\frac{x^{2}}{2(n+1)}}. \]

Proof. From Lemma 2.5, we have

\[ n! = \frac{(n+1)!}{n+1} = \frac{\sqrt{2\pi}}{e}\, e^{\xi_{n+1}}\, (n+1)^{n+\frac{1}{2}}\, e^{-n}. \]

Assume that $m$ is even and set $n = \frac{m}{2}$; we get

\[ \Big(\frac{m}{2}\Big)! = \frac{\sqrt{2\pi}}{e}\, e^{\xi_{\frac{m}{2}+1}} \Big(\frac{m}{2}+1\Big)^{\frac{m+1}{2}} e^{-\frac{m}{2}} = \frac{\sqrt{\pi}}{e}\, e^{\xi_{\frac{m}{2}+1}}\, \frac{(m+2)^{\frac{m+1}{2}}}{(2e)^{\frac{m}{2}}}. \]

Note that

\[ 2\,(m+1)^{m+1} \le (m+2)^{m+1} \le e\,(m+1)^{m+1}, \quad (2) \]

indicating that

\[ \frac{\sqrt{2\pi}}{e}\, e^{\xi_{\frac{m}{2}+1}}\, \frac{(m+1)^{\frac{m+1}{2}}}{(2e)^{\frac{m}{2}}} \;\le\; \Big(\frac{m}{2}\Big)! \;\le\; \sqrt{\frac{\pi}{e}}\, e^{\xi_{\frac{m}{2}+1}}\, \frac{(m+1)^{\frac{m+1}{2}}}{(2e)^{\frac{m}{2}}}. \quad (3) \]

Recall that under the given condition we have $p_n(0,x) = \frac{1}{2^{n+1}}\binom{n}{\frac{n+x}{2}}$; then from the first inequality in Equation (3) we obtain

\[ p_n(0,x) = \frac{1}{2^{n+1}}\,\frac{n!}{\left(\frac{n+x}{2}\right)!\left(\frac{n-x}{2}\right)!} \le \delta\,\frac{e}{\sqrt{8\pi}}\, \frac{(n+1)^{n+\frac{1}{2}}}{(n+x+1)^{\frac{n+x+1}{2}}\,(n-x+1)^{\frac{n-x+1}{2}}}, \]

where $\delta = e^{\xi_{n+1} - \xi_{\frac{n+x}{2}+1} - \xi_{\frac{n-x}{2}+1}}$. First, we estimate the upper bound of $p_n(0,x)$. Note that $|x|\le n$; then we have

\[ p_n(0,x) \le \delta\,\frac{e}{\sqrt{8\pi}}\,\frac{1}{\sqrt{n+1}}\, \frac{1}{\left(1+\frac{x}{n+1}\right)^{\frac{n+1+x}{2}}\left(1-\frac{x}{n+1}\right)^{\frac{n+1-x}{2}}} = \delta\,\frac{e}{\sqrt{8\pi}}\,\frac{1}{\sqrt{N}}\, \frac{1}{\left(1+\frac{x}{N}\right)^{\frac{N+x}{2}}\left(1-\frac{x}{N}\right)^{\frac{N-x}{2}}}, \]

where $N = n+1$. Applying the Taylor expansion of $\log(1\pm u)$, we get

\[ \log\!\Big( \big(1+\tfrac{x}{N}\big)^{\frac{N+x}{2}} \big(1-\tfrac{x}{N}\big)^{\frac{N-x}{2}} \Big) = \frac{1}{2}\Big( (N+x)\log\big(1+\tfrac{x}{N}\big) + (N-x)\log\big(1-\tfrac{x}{N}\big) \Big) = \frac{1}{1\cdot 2}\frac{x^{2}}{N} + \frac{1}{3\cdot 4}\frac{x^{4}}{N^{3}} + \frac{1}{5\cdot 6}\frac{x^{6}}{N^{5}} + \cdots = \frac{x^{2}}{N}\Big( \frac{1}{2} + \frac{1}{12}\big(\tfrac{x}{N}\big)^{2} + \frac{1}{30}\big(\tfrac{x}{N}\big)^{4} + \cdots \Big). \]

Thus, since the expression in parentheses is at least $\frac{1}{2}$, we have the estimate of the upper bound

\[ p_n(0,x) \le \delta\,\frac{e}{\sqrt{8\pi}}\,\frac{1}{\sqrt{n+1}}\, e^{-\frac{x^{2}}{2(n+1)}}. \quad (4) \]

We have

\[ \delta \le e^{\frac{1}{12(n+1)} - \frac{1}{12\left(\frac{n+x}{2}+1\right)+1} - \frac{1}{12\left(\frac{n-x}{2}+1\right)+1}}. \]

A simple computation (the exponent is maximal at $x=0$) verifies that

\[ \delta \le e^{\frac{1}{12(n+1)} - \frac{1}{12\left(\frac{n+0}{2}+1\right)+1} - \frac{1}{12\left(\frac{n-0}{2}+1\right)+1}} = e^{\frac{1}{12n+12} - \frac{2}{6n+13}} \le 1. \]

Consequently, we deduce

\[ p_n(0,x) \le \frac{e}{\sqrt{8\pi}}\,\frac{1}{\sqrt{n+1}}\, e^{-\frac{x^{2}}{2(n+1)}}. \]

Now we estimate the lower bound of $p_n(0,x)$. From the second inequality in Equation (3), we have

\[ p_n(0,x) = \frac{1}{2^{n+1}}\,\frac{n!}{\left(\frac{n+x}{2}\right)!\left(\frac{n-x}{2}\right)!} \ge \delta\,\frac{1}{\sqrt{2\pi}}\, \frac{(n+1)^{n+\frac{1}{2}}}{(n+x+1)^{\frac{n+x+1}{2}}\,(n-x+1)^{\frac{n-x+1}{2}}} = \delta\,\frac{1}{\sqrt{2\pi}}\,\frac{1}{\sqrt{N}}\, \frac{1}{\left(1+\frac{x}{N}\right)^{\frac{N+x}{2}}\left(1-\frac{x}{N}\right)^{\frac{N-x}{2}}}. \]

Note that, since $|x| \le n < N$,

\[ \frac{1}{2} + \frac{1}{12}\Big(\frac{x}{N}\Big)^{2} + \frac{1}{30}\Big(\frac{x}{N}\Big)^{4} + \cdots < \frac{1}{1\cdot 2} + \frac{1}{3\cdot 4} + \frac{1}{5\cdot 6} + \cdots = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \frac{1}{6} + \cdots = \log 2. \]

In addition, we can find a lower bound for $\delta$:

\[ \delta \ge e^{\frac{1}{12(n+1)+1} - \frac{1}{12\left(\frac{n+x}{2}+1\right)} - \frac{1}{12\left(\frac{n-x}{2}+1\right)}} \ge e^{\frac{1}{12(n+1)+1} - \frac{1}{12\left(\frac{n+n}{2}+1\right)} - \frac{1}{12\left(\frac{n-n}{2}+1\right)}} = e^{\frac{1}{12n+13} - \frac{1}{12n+12} - \frac{1}{12}} \ge e^{-\frac{7}{78}}. \]

Thus we obtain

\[ p_n(0,x) \ge \frac{e^{-\frac{7}{78}}}{\sqrt{2\pi}}\,\frac{1}{\sqrt{n+1}}\, e^{-\frac{(\log 2)\,x^{2}}{n+1}}. \qquad\square \]
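The constants in Theorem 3.1 can be tested numerically. The following sketch (the tested range is an arbitrary choice) checks the two-sided bound against the exact binomial expression:

from math import comb, exp, sqrt, pi, log, e

for n in range(1, 200):
    for x in range(-n, n + 1):
        if (n - x) % 2:
            continue                                 # need x = n mod 2
        p = comb(n, (n + x) // 2) / 2 ** (n + 1)     # exact p_n(0, x)
        lower = exp(-7 / 78) / sqrt(2 * pi * (n + 1)) * exp(-log(2) * x * x / (n + 1))
        upper = e / sqrt(8 * pi * (n + 1)) * exp(-x * x / (2 * (n + 1)))
        assert lower <= p <= upper, (n, x)
print("Theorem 3.1 verified for n = 1..199.")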

Next, we consider the case of a two-dimensional simple random walk and give the corresponding estimate of $p_n((0,0),(x,y))$. We omit the derivation of its expression, which is

\[ p_n((0,0),(x,y)) = \begin{cases} \dfrac{1}{4^{n+1}} \displaystyle\sum_{j=0}^{k} \dfrac{n!}{(|x|+j)!\,j!\,(|y|+k-j)!\,(k-j)!}, & \text{if } n \ge |x|+|y| \text{ and } |x|+|y| \equiv n \pmod 2,\\[8pt] 0, & \text{otherwise}, \end{cases} \]

where $k = \dfrac{n-|x|-|y|}{2}$.
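As in the one-dimensional case, this expression can be cross-checked by direct convolution (an illustrative sketch; here $\mu(v) = 4$ at every vertex of $\mathbb{Z}^2$, so $p_n = P_n/4$):

from math import factorial

def p2d_closed(n, x, y):
    """Two-dimensional heat kernel via the double-factorial sum."""
    s = abs(x) + abs(y)
    if n < s or (n - s) % 2:
        return 0.0
    k = (n - s) // 2
    total = sum(factorial(n) // (factorial(abs(x) + j) * factorial(j)
                                 * factorial(abs(y) + k - j) * factorial(k - j))
                for j in range(k + 1))
    return total / 4 ** (n + 1)

def p2d_convolved(n):
    dist = {(0, 0): 1.0}
    for _ in range(n):
        nxt = {}
        for (x, y), q in dist.items():
            for v in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                nxt[v] = nxt.get(v, 0.0) + q / 4
        dist = nxt
    return {v: q / 4 for v, q in dist.items()}

n = 8
assert all(abs(p2d_closed(n, x, y) - q) < 1e-12
           for (x, y), q in p2d_convolved(n).items())
print("2-D closed form matches convolution for n =", n)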

From Theorem 3.1, we have the following derivation:

\[ \begin{aligned} p_n((0,0),(x,y)) &= \frac{1}{4^{n+1}} \sum_{j=0}^{k} \frac{(|x|+2j)!}{(|x|+j)!\,j!}\cdot\frac{(|y|+2k-2j)!}{(|y|+k-j)!\,(k-j)!}\cdot\frac{n!}{(|x|+2j)!\,(|y|+2k-2j)!} \\ &= \frac{1}{2^{n}} \sum_{j=0}^{k} \frac{1}{2^{|x|+2j+1}}\binom{|x|+2j}{|x|+j}\cdot \frac{1}{2^{|y|+2k-2j+1}}\binom{|y|+2k-2j}{|y|+k-j}\cdot \binom{n}{|x|+2j} \\ &\le \frac{e^{2}}{8\pi}\cdot \frac{1}{2^{n}} \sum_{j=0}^{k} \frac{e^{-\frac{x^{2}}{2(|x|+2j+1)}}}{\sqrt{|x|+2j+1}}\cdot \frac{e^{-\frac{y^{2}}{2(|y|+2k-2j+1)}}}{\sqrt{|y|+2k-2j+1}}\cdot \binom{n}{|x|+2j}; \end{aligned} \]

here the two binomial factors in the middle line are precisely the one-dimensional heat kernels $p_{|x|+2j}(0,x)$ and $p_{|y|+2k-2j}(0,y)$, to which the upper bound of Theorem 3.1 applies.

We denote $\beta = \min\{|x|,|y|\}$.

Theorem 3.2

If $\beta \ge C\sqrt{n}$ for some positive constant $C$, then we have

\[ p_n((0,0),(x,y)) \le \frac{e^{2}}{8\pi C\sqrt{n}}\, e^{-\frac{x^{2}+y^{2}}{2(n+1)}}. \]

Proof. Indeed, since $|x|+2j+1 \le n+1$, $|y|+2k-2j+1 \le n+1$, and $(|x|+2j+1)(|y|+2k-2j+1) \ge |x|\,|y| \ge \beta^{2} \ge C^{2}n$, we have

\[ \begin{aligned} p_n((0,0),(x,y)) &\le \frac{e^{2}}{8\pi}\cdot\frac{1}{2^{n}}\cdot\frac{1}{\sqrt{|xy|}}\; e^{-\frac{x^{2}+y^{2}}{2(n+1)}} \sum_{j=0}^{k}\binom{n}{|x|+2j} \\ &\le \frac{e^{2}}{8\pi}\cdot\frac{1}{C\sqrt{n}}\; e^{-\frac{x^{2}+y^{2}}{2(n+1)}}\cdot\frac{1}{2^{n}}\sum_{i=0}^{n}\binom{n}{i} = \frac{e^{2}}{8\pi C\sqrt{n}}\; e^{-\frac{x^{2}+y^{2}}{2(n+1)}}. \qquad\square \end{aligned} \]

This method is easy to generalize to the higher-dimensional case. We analogously define the heat kernel in dimension $m\ge 2$ as follows:

\[ p_n((0,\ldots,0),(x_1,\ldots,x_m)) = \begin{cases} \dfrac{1}{(2m)^{n+1}} \displaystyle\sum_{\substack{j_1,\ldots,j_m\ge 0 \\ j_1+\cdots+j_m=k}} \dfrac{n!}{(|x_1|+j_1)!\,j_1!\cdots(|x_m|+j_m)!\,j_m!}, & \text{if } n \ge |x_1|+\cdots+|x_m| \text{ and } |x_1|+\cdots+|x_m| \equiv n \pmod 2,\\[8pt] 0, & \text{otherwise}, \end{cases} \]

where $k = \dfrac{n-|x_1|-\cdots-|x_m|}{2}$. Let $\alpha = \min\{|x_1|,\ldots,|x_m|\}$.
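For instance, for $m = 3$ the formula can be checked against a direct convolution on $\mathbb{Z}^3$ (an illustrative sketch, with $\mu(v) = 2m = 6$ so that $p_n = P_n/6$):

from math import factorial

def p3d_closed(n, x):
    s = sum(abs(c) for c in x)
    if n < s or (n - s) % 2:
        return 0.0
    k = (n - s) // 2
    total = 0
    for j1 in range(k + 1):                 # enumerate j1 + j2 + j3 = k
        for j2 in range(k - j1 + 1):
            j3 = k - j1 - j2
            total += factorial(n) // (
                factorial(abs(x[0]) + j1) * factorial(j1)
                * factorial(abs(x[1]) + j2) * factorial(j2)
                * factorial(abs(x[2]) + j3) * factorial(j3))
    return total / 6 ** (n + 1)

def p3d_convolved(n):
    dist = {(0, 0, 0): 1.0}
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for _ in range(n):
        nxt = {}
        for v, q in dist.items():
            for s in steps:
                u = (v[0] + s[0], v[1] + s[1], v[2] + s[2])
                nxt[u] = nxt.get(u, 0.0) + q / 6
        dist = nxt
    return {v: q / 6 for v, q in dist.items()}

n = 6
assert all(abs(p3d_closed(n, v) - q) < 1e-12 for v, q in p3d_convolved(n).items())
print("3-D closed form matches convolution for n =", n)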

Theorem 3.3

We have

\[ p_n((0,\ldots,0),(x_1,\ldots,x_m)) \le \frac{e^{m}}{(8\pi)^{\frac{m}{2}}}\cdot\frac{m^{2}}{(m-1)\sqrt{n}}\; e^{-\frac{x_1^{2}+\cdots+x_m^{2}}{2(n+1)}}. \]

Proof. The argument parallels the proofs of Theorems 3.1 and 3.2: the multinomial sum factors into products of one-dimensional heat kernels, the upper bound of Theorem 3.1 applies to each factor, and the remaining multinomial sum is bounded as before; we omit the details. $\square$

Now, we return to the case of the two-dimensional random walk. We will see an estimate of $p_n((0,0),(x,y))$ when $\beta$ is large enough.

Theorem 3.4

If $\beta \ge \frac{\sqrt{n+2}}{2}$, then

\[ p_n((0,0),(x,y)) \le \frac{e^{2}}{4\pi(n+2)}\; e^{-\frac{2\beta^{2}}{n+2}}. \]

Proof. Define

\[ F(t) = \frac{1}{\sqrt{(t+1)(n-t+1)}}\; e^{-\frac{\beta^{2}}{2}\left(\frac{1}{t+1}+\frac{1}{n-t+1}\right)} \]

for $0 \le t \le n$; then

\[ F'(t) = \frac{1}{2}\,\big[(t+1)(n-t+1)\big]^{-\frac{5}{2}}\; e^{-\frac{\beta^{2}}{2}\left(\frac{1}{t+1}+\frac{1}{n-t+1}\right)}\, \big(t^{2}-nt+(\beta^{2}-1)n+2\beta^{2}-1\big)\,(n-2t). \]

Note that $\beta \ge \frac{\sqrt{n+2}}{2}$; then the discriminant of the quadratic factor satisfies

\[ \Delta = n^{2} - 4\big((\beta^{2}-1)n + 2\beta^{2} - 1\big) = (n+2)\big(n+2-4\beta^{2}\big) \le 0, \]

indicating that the quadratic factor is non-negative, so $F'(t)$ has the sign of $n-2t$ and $F(t)$ reaches its maximum at $t = \frac{n}{2}$. Therefore, we have

\[ p_n((0,0),(x,y)) \le \frac{e^{2}}{8\pi}\cdot\frac{1}{2^{n}}\cdot\frac{1}{\frac{n}{2}+1}\; e^{-\frac{\beta^{2}}{2}\cdot\frac{2}{\frac{n}{2}+1}} \sum_{i=0}^{n}\binom{n}{i} = \frac{e^{2}}{4\pi(n+2)}\; e^{-\frac{2\beta^{2}}{n+2}}. \qquad\square \]

Theorem 3.5

\[ p_n((0,0),(x,y)) \le \frac{e^{2}}{8\pi\sqrt{n+1}}\; e^{-\frac{x^{2}+y^{2}}{2(n+1)}}. \]

Proof. Define

\[ G(t) = \frac{1}{\sqrt{(t+1)(n-t+1)}}, \qquad 0 \le t \le n. \]

Then

\[ G'(t) = \frac{1}{2}\,\big[(t+1)(n-t+1)\big]^{-\frac{3}{2}}\,(2t-n), \]

and hence $G(t)$ reaches its maximum $\frac{1}{\sqrt{n+1}}$ at $t=0$ or $t=n$. Therefore, we have

\[ p_n((0,0),(x,y)) \le \frac{e^{2}}{8\pi}\cdot\frac{1}{2^{n}}\; e^{-\frac{x^{2}+y^{2}}{2(n+1)}}\,\frac{1}{\sqrt{n+1}} \sum_{i=0}^{n}\binom{n}{i} = \frac{e^{2}}{8\pi\sqrt{n+1}}\; e^{-\frac{x^{2}+y^{2}}{2(n+1)}}. \qquad\square \]
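Both two-dimensional bounds can be spot-checked numerically against the exact kernel (an illustrative sketch, re-deriving the closed form from the earlier snippet; the tested range is arbitrary):

from math import factorial, exp, sqrt, pi, e

def p2d_closed(n, x, y):
    s = abs(x) + abs(y)
    if n < s or (n - s) % 2:
        return 0.0
    k = (n - s) // 2
    return sum(factorial(n) // (factorial(abs(x) + j) * factorial(j)
                                * factorial(abs(y) + k - j) * factorial(k - j))
               for j in range(k + 1)) / 4 ** (n + 1)

for n in range(2, 60):
    for x in range(n + 1):
        for y in range(n + 1 - x):
            if (n - x - y) % 2:
                continue
            p = p2d_closed(n, x, y)
            # Theorem 3.5: unconditional upper bound.
            assert p <= e ** 2 / (8 * pi * sqrt(n + 1)) * exp(-(x * x + y * y) / (2 * (n + 1)))
            # Theorem 3.4 applies when beta = min(|x|, |y|) >= sqrt(n + 2) / 2.
            beta = min(x, y)
            if beta >= sqrt(n + 2) / 2:
                assert p <= e ** 2 / (4 * pi * (n + 2)) * exp(-2 * beta * beta / (n + 2))
print("Theorems 3.4 and 3.5 verified for n = 2..59.")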

3.2. On-Diagonal Upper Bounds for the Heat Kernel

Now we consider some more general situations. If the weight and the degree are dominated by constants, then we have an estimate for the heat kernel (see Theorem 2.8). We now ask whether similar estimates hold when $\mu_{xy}$ and $\deg(x)$ are not uniformly bounded.

We introduce some notation as follows. The support of a function $f$ on $V$ is defined to be $\operatorname{supp}(f) = \{x\in V \mid f(x)\neq 0\}$. Let $F$ be the set of all functions defined on $V$ that have finite support. For $f,g\in F$, we define the inner product $(f,g) = \sum_{x\in V} f(x)\,g(x)\,\mu(x)$.

Here are some statements from the proof of Theorem 2.8 that will be used in our proof below. We list these statements as lemmas without covering the details; see [3] [8] for their proofs.

Lemma 3.6 If $f\in F$, then $Pf\in F$ and $(Pf,1) = (f,1)$.

Lemma 3.7 Let $f\in F$ be a non-negative function, and set $Q(f,f) = (f,f) - (Pf,Pf)$. For given $s\ge 0$, we define

\[ \Omega_s = U_1\big(\operatorname{supp}(f-s)_+\big), \]

where $U_1(\cdot)$ denotes the 1-neighborhood and $(f-s)_+ = \max\{f-s,\,0\}$. Then

\[ Q(f,f) \ge \lambda_1(\Omega_s)\big((f,f) - 2s(f,1)\big). \]

Lemma 3.8 If $\{b_n\}_{n=0}^{\infty}$ is a sequence of positive real numbers satisfying

\[ b_n - b_{n+1} \ge c_n\, b_n^{1+\frac{1}{\alpha}}, \]

then we have

\[ b_{n+1}^{-\frac{1}{\alpha}} - b_n^{-\frac{1}{\alpha}} \ge \frac{c_n}{\alpha}. \]
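For the reader's convenience, here is the standard argument behind Lemma 3.8 (a sketch; see [3] [8] for details). By the mean value theorem applied to $t\mapsto t^{-\frac{1}{\alpha}}$, there exists $\xi\in(b_{n+1},b_n)$ such that

\[ b_{n+1}^{-\frac{1}{\alpha}} - b_n^{-\frac{1}{\alpha}} = \frac{1}{\alpha}\,\xi^{-\frac{1}{\alpha}-1}\big(b_n - b_{n+1}\big) \ge \frac{1}{\alpha}\, b_n^{-\frac{1}{\alpha}-1}\cdot c_n\, b_n^{1+\frac{1}{\alpha}} = \frac{c_n}{\alpha}, \]

using $\xi \le b_n$ and the fact that $t^{-\frac{1}{\alpha}-1}$ is decreasing.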

Lemma 3.9 Let $p_n(x,y)$ be the heat kernel on a locally finite weighted graph $(V,\mu)$. Then for any $x,y\in V$ and non-negative integers $n,m$, we have

1) $p_{2n}(x,x) = \sum_{z\in V} p_n^{2}(x,z)\,\mu(z)$;

2) $p_{n+m}(x,y) \le \big(p_{2n}(x,x)\,p_{2m}(y,y)\big)^{\frac{1}{2}}$.

Theorem 3.10

Now, instead of assuming uniform bounds on the weight $\mu_{xy}$ and degree $\deg(x)$ for vertices $x,y\in V$ as in condition (1), we replace it with the following:

\[ 1 \le \mu_{xy} \le M(x) \ \text{ for all } x\sim y, \qquad \deg(x) \le D(x) \ \text{ for all } x\in V, \quad (5) \]

where $M(\cdot)$ and $D(\cdot)$ are non-negative, finite functions. Additionally, assume there exist non-decreasing functions $h(\cdot)$ and $k(\cdot)$ such that for any $x,y\in V$, the following holds:

\[ |M(x) - M(y)| \le h(d(x,y)), \qquad |D(x) - D(y)| \le k(d(x,y)), \quad (6) \]

where $d(x,y)$ denotes the graph distance between vertices $x$ and $y$. All other conditions remain the same as in Theorem 2.8.

Under these assumptions, the following estimate holds:

\[ p_n(x,y) \le \left( \frac{2\alpha}{c\, \inf_{z\in V} \sum_{k=0}^{N} C_z(k)} \right)^{\alpha}, \]

where $n\ge 2$, $N = \lfloor\frac{n}{2}\rfloor - 1$, and

\[ C_z(n) = \Big[ 4\big(M(z)+h(n+1)\big)\big(D(z)+k(n+1)\big)\big(D(z)+k(n)+1\big) \Big]^{-\frac{1}{\alpha}}. \]

Proof. From Equation (5), we deduce that for any $x\in V$ we have

\[ 1 \le \mu(x) = \sum_{y\sim x} \mu_{xy} \le M(x)\,D(x). \]

Given any non-empty finite set $A\subset V$, the total weight of $A$ can be estimated, using (6) and the monotonicity of $h$ and $k$:

\[ |A| \le \mu(A) \le \sum_{x\in A} M(x)\,D(x) \le \sum_{x\in A} \big(M(z)+h(\operatorname{ecc}_A(z))\big)\big(D(z)+k(\operatorname{ecc}_A(z))\big) = \big(M(z)+h(\operatorname{ecc}_A(z))\big)\big(D(z)+k(\operatorname{ecc}_A(z))\big)\,|A|, \quad (7) \]

where $z$ is an arbitrary vertex in $V$, and $\operatorname{ecc}_A(z) = \max_{y\in A} d(z,y)$ denotes the eccentricity of $z$ in $A$.

Let us fix a vertex $z\in V$ and define $f_0 := \frac{1}{\mu(z)}\,\mathbf{1}_z$. It is clear that $f_0$ has finite support and $(f_0,1) = 1$. We then define the sequence $\{f_n\}$ inductively by setting $f_{n+1} = Pf_n$. We deduce by Lemma 3.6 that $f_n\in F$ and $(f_n,1) = 1$. Setting $s = \frac{1}{4}(f_n,f_n)$ for each $n$ in Lemma 3.7, we obtain

\[ Q(f_n,f_n) \ge \frac{1}{2}\,\lambda_1(\Omega_s)\,(f_n,f_n), \quad (8) \]

where $Q(f_n,f_n) := (f_n,f_n) - (Pf_n,Pf_n)$ and $\Omega_s := U_1\big(\operatorname{supp}(f_n-s)_+\big)$.

Also, by the Markov inequality and the fact that $(f_n,1)=1$, we have

\[ \mu\big(\operatorname{supp}(f_n-s)_+\big) = \mu\big(\{x\in V \mid f_n(x) > s\}\big) \le \frac{1}{s}\sum_{x\in V} f_n(x)\,\mu(x) = \frac{1}{s}\,(f_n,1) = \frac{1}{s}. \quad (9) \]

Now we claim that $f_n(x) = p_n(x,z)$ for $n = 0,1,\ldots$

In fact, $f_0(x) = \frac{1}{\mu(z)}\mathbf{1}_z(x) = \frac{P_0(x,z)}{\mu(z)} = p_0(x,z)$. Suppose that the statement is true for $n$. Then we have
\[ f_{n+1}(x) = \sum_{y\in V} P(x,y)\,f_n(y) = \sum_{y\in V} p_1(x,y)\,p_n(y,z)\,\mu(y) = p_{n+1}(x,z), \]
and our claim has been shown by induction.
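This claim is easy to confirm numerically (an illustrative check on the same kind of small weighted graph as in Section 2; the graph is our own choice):

import numpy as np

mu = np.array([[0, 2, 1, 0],
               [2, 0, 3, 1],
               [1, 3, 0, 2],
               [0, 1, 2, 0]], dtype=float)
w = mu.sum(axis=1)
P = mu / w[:, None]

z = 0
f = np.zeros(4)
f[z] = 1 / w[z]                               # f_0 = (1/mu(z)) 1_z
for n in range(1, 6):
    f = P @ f                                 # f_{n+1} = P f_n
    Pn = np.linalg.matrix_power(P, n)
    assert np.allclose(f, Pn[:, z] / w[z])    # p_n(x, z) = P_n(x, z) / mu(z)
print("f_n = p_n(., z) verified for n = 1..5.")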

Therefore, since $f_n = p_n(\cdot,z)$ vanishes outside the ball of radius $n$ centred at $z$, we deduce from the definition of eccentricity that

\[ \operatorname{ecc}_{\operatorname{supp}(f_n-s)_+}(z) \le n \quad \text{and} \quad \operatorname{ecc}_{\Omega_s}(z) \le n+1. \quad (10) \]

Hence, from Equations (5), (6), (7), (9) and (10), we obtain

\[ \begin{aligned} \mu(\Omega_s) &\le \big(M(z)+h(\operatorname{ecc}_{\Omega_s}(z))\big)\big(D(z)+k(\operatorname{ecc}_{\Omega_s}(z))\big)\,|\Omega_s| \\ &\le \big(M(z)+h(n+1)\big)\big(D(z)+k(n+1)\big)\,|\Omega_s| \\ &\le \big(M(z)+h(n+1)\big)\big(D(z)+k(n+1)\big) \sum_{x\in\operatorname{supp}(f_n-s)_+} |B_1(x)| \\ &\le \big(M(z)+h(n+1)\big)\big(D(z)+k(n+1)\big)\big(D(z)+k(n)+1\big)\,\big|\operatorname{supp}(f_n-s)_+\big| \\ &\le \frac{1}{s}\,\big(M(z)+h(n+1)\big)\big(D(z)+k(n+1)\big)\big(D(z)+k(n)+1\big) \\ &= \frac{4}{(f_n,f_n)}\,\big(M(z)+h(n+1)\big)\big(D(z)+k(n+1)\big)\big(D(z)+k(n)+1\big), \end{aligned} \]

where $B_1(x)$ denotes the set consisting of all the vertices $y$ such that $d(x,y)\le 1$. Using the Faber-Krahn inequality, we obtain

\[ \lambda_1(\Omega_s) \ge c\,\mu(\Omega_s)^{-\frac{1}{\alpha}} \ge c\,(f_n,f_n)^{\frac{1}{\alpha}}\,C_z(n), \quad (11) \]

where $C_z(n) = \big(4(M(z)+h(n+1))(D(z)+k(n+1))(D(z)+k(n)+1)\big)^{-\frac{1}{\alpha}}$. Thus, from Equations (8) and (11), we deduce

\[ Q(f_n,f_n) \ge \frac{c}{2}\,(f_n,f_n)^{1+\frac{1}{\alpha}}\,C_z(n). \]

That is,

\[ (f_n,f_n) - (f_{n+1},f_{n+1}) \ge \frac{c}{2}\,C_z(n)\,(f_n,f_n)^{1+\frac{1}{\alpha}}. \quad (12) \]

Then, Equation (12) and Lemma 3.8, applied with $c_n = \frac{c}{2}C_z(n)$, imply that

\[ (f_{n+1},f_{n+1})^{-\frac{1}{\alpha}} - (f_n,f_n)^{-\frac{1}{\alpha}} \ge \frac{c}{2\alpha}\,C_z(n). \]

Hence, telescoping, we have

\[ (f_n,f_n)^{-\frac{1}{\alpha}} \ge \sum_{k=0}^{n-1}\Big((f_{k+1},f_{k+1})^{-\frac{1}{\alpha}} - (f_k,f_k)^{-\frac{1}{\alpha}}\Big) \ge \frac{c}{2\alpha}\sum_{k=0}^{n-1} C_z(k), \]

and therefore

\[ (f_n,f_n) \le \left(\frac{2\alpha}{c\sum_{k=0}^{n-1} C_z(k)}\right)^{\alpha}. \quad (13) \]

We note from Lemma 3.9 and the symmetry $p_n(x,z) = p_n(z,x)$ that

\[ (f_n,f_n) = \sum_{x\in V} f_n^{2}(x)\,\mu(x) = \sum_{x\in V} p_n^{2}(x,z)\,\mu(x) = p_{2n}(z,z). \quad (14) \]

For the general heat kernel $p_n(x,y)$, we use Equations (13), (14) and Lemma 3.9 to derive the following estimate:

\[ p_{k+l}(x,y) \le \big(p_{2k}(x,x)\,p_{2l}(y,y)\big)^{\frac{1}{2}} \le \left(\frac{2\alpha}{c}\right)^{\alpha} \left( \sum_{i=0}^{k-1} C_x(i)\, \sum_{j=0}^{l-1} C_y(j) \right)^{-\frac{\alpha}{2}}, \]

for $x,y\in V$ and positive integers $k,l$. We choose $k = l = \frac{n}{2}$ if $n$ is even, or $k = \frac{n+1}{2}$ and $l = \frac{n-1}{2}$ if $n$ is odd. In either case each of the two sums contains at least $N+1$ non-negative terms and is therefore at least $\inf_{z\in V}\sum_{k=0}^{N} C_z(k)$, and both cases lead to the estimate in the theorem. $\square$

Remark. Note that if $h(n) = k(n) \equiv \mathrm{constant}$, then $C_z(n)$ does not depend on $n$, so $\sum_{k=0}^{N} C_z(k)$ grows linearly in $n$ and the estimate implies

\[ p_n(x,y) = O\big(n^{-\alpha}\big), \]

recovering the conclusion of Theorem 2.8.

In another case, let $h(n) \le c_0 \log^{\alpha}(n+1)$ and $k(n) \equiv \mathrm{constant}$. Then $C_z(n) \ge \frac{c_1}{\log n}$ for large $n$. For sufficiently large $n$, Theorem 3.10 together with the approximation

\[ \sum_{n=2}^{N} \frac{1}{\log n} \approx \int_{2}^{N} \frac{\mathrm{d}x}{\log x} = \operatorname{Li}(N) - \operatorname{Li}(2), \]

where $\operatorname{Li}(x) \sim \frac{x}{\log x}$ denotes the logarithmic integral function, implies the estimate

\[ p_n(x,y) \le c_3\left(\frac{\log n}{n}\right)^{\alpha}. \]
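To illustrate the two regimes of the Remark, the following sketch evaluates the upper bound of Theorem 3.10 at a single vertex for hypothetical data ($M(z) = D(z) = 2$, $\alpha = c = 1$, $k \equiv 0$, and the two choices of $h$ are all our own illustrative assumptions):

from math import log

def bound(n, h, k=lambda r: 0.0, M=2.0, D=2.0, alpha=1.0, c=1.0):
    """Upper bound of Theorem 3.10 evaluated for a single vertex z."""
    N = n // 2 - 1
    S = sum((4 * (M + h(i + 1)) * (D + k(i + 1)) * (D + k(i) + 1)) ** (-1 / alpha)
            for i in range(N + 1))
    return (2 * alpha / (c * S)) ** alpha

for n in (10, 100, 1000, 10000):
    print(n,
          bound(n, h=lambda r: 0.0),         # h constant: bound ~ 1/n
          bound(n, h=lambda r: log(r + 1)))  # h ~ log:    bound ~ (log n)/n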

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Barlow, M.T. and Perkins, E.A. (1989) Symmetric Markov Chains in $\mathbb{Z}^d$: How Fast Can They Move? Probability Theory and Related Fields, 82, 95-108.
[2] Barlow, M.T., Bass, R.F. and Kumagai, T. (2008) Parabolic Harnack Inequality and Heat Kernel Estimates for Random Walks with Long Range Jumps. Mathematische Zeitschrift, 261, 297-320.
[3] Grigor'yan, A. (2018) Introduction to Analysis on Graphs. University Lecture Series 71, AMS.
[4] Grigor'yan, A. and Telcs, A. (2002) Harnack Inequalities and Sub-Gaussian Estimates for Random Walks. Mathematische Annalen, 324, 521-556.
[5] Hambly, B.M. and Kumagai, T. (2004) Heat Kernel Estimates and Law of the Iterated Logarithm for Symmetric Random Walks on Fractal Graphs. In: Discrete Geometric Analysis, Contemp. Math. 347, American Mathematical Society, 153-172.
[6] Coulhon, T. and Grigor'yan, A. (1997) On-Diagonal Lower Bounds for Heat Kernels on Non-Compact Manifolds and Markov Chains. Duke Mathematical Journal, 89, 133-199.
[7] Coulhon, T. and Grigor'yan, A. (1998) Random Walks on Graphs with Regular Volume Growth. Geometric and Functional Analysis, 8, 656-701.
[8] Grigor'yan, A. and Telcs, A. (2001) Sub-Gaussian Estimates of Heat Kernels on Infinite Graphs. Duke Mathematical Journal, 109, 451-510.
