Non-Backtracking Random Walks and a Weighted Ihara’s Theorem

We study the mixing rate of non-backtracking random walks on graphs by viewing non-backtracking walks as walks on the directed edges of a graph. A result known as Ihara’s Theorem relates the adjacency matrix of a graph to a matrix describing non-backtracking walks on the directed edges. We prove a weighted version of Ihara’s Theorem which relates the transition probability matrix of a non-backtracking walk to the transition matrix for the usual random walk. This allows us to determine the spectrum of the transition probability matrix of a non-backtracking random walk in the case of regular graphs and biregular graphs. As a corollary, we obtain a result of Alon et al. [1] that in most cases, a non-backtracking random walk on a regular graph has a faster mixing rate than the usual random walk. In addition, we obtain an analogous result for biregular graphs.


Introduction
A random walk on a graph G is a random process on the vertices of G in which, at each step in the walk, we choose uniformly at random among the neighbors of the current vertex. Random walks have been studied extensively, and are used in a variety of algorithms involving graphs. For a comprehensive survey on random walks on graphs, see [13], and for applications of spectral techniques to random walk theory, see [5]. Random walks on graphs have the useful property that, given any initial distribution on the vertex set, the random walk converges to a unique stationary distribution as long as the graph is connected and not bipartite. The speed at which this convergence takes place is referred to as the mixing rate of the random walk. In a graph where a random walk has a fast mixing rate, vertices can be sampled quickly using this random process, making this a useful tool in theoretical computer science.
A non-backtracking random walk on a graph is a random walk with the added condition that, on a given step, we are not allowed to return to the vertex visited on the previous step. Viewed as a walk on vertices, a non-backtracking random walk loses the property of being a Markov chain, making its analysis somewhat more difficult. However, their study has received increased interest in recent years. Angel, Friedman, and Hoory [2] studied non-backtracking walks on the universal cover of a graph. Fitzner and van der Hofstad [7] studied the convergence of non-backtracking random walks on lattices and tori. Krzakala et al. [11] used a matrix related to non-backtracking walks to study spectral clustering algorithms. Most pertinent to the current paper, Alon, Benjamini, Lubetzky, and Sodin [1] studied the mixing rate of a non-backtracking walk on regular graphs. In particular, they proved that in most cases, a non-backtracking random walk on a regular graph has a faster mixing rate than a random walk allowing backtracking.
In this paper, we study the mixing rate for a non-backtracking random walk, with the goal of removing the condition of regularity needed in the results of Alon et al. [1]. We take a different approach than Alon et al. by looking at the non-backtracking walk as a walk along the directed edges of a graph, as is done in [2]. This allows us to turn the non-backtracking random walk into a Markov chain on a larger state space, which in turn allows us to determine the stationary distribution to which a non-backtracking walk converges for a general graph, whether or not it is regular. In the case of regular graphs, our approach allows us to compute the spectrum of the transition probability matrix for a non-backtracking random walk, expressed in terms of the eigenvalues of the adjacency matrix. This allows for easy comparison of the mixing rates of a non-backtracking random walk and an ordinary random walk. As a corollary, this gives us an alternate proof of the result in [1] for regular graphs. Our approach gives more information than the approach in [1], in that the full spectrum of the transition probability matrix is given. In addition, we are able to compute the spectrum of the non-backtracking transition probability matrix for biregular graphs. As a corollary, we generalize the result in [1] for regular graphs to an analogous result for biregular graphs.
A key component in our proof is a weighted version of a result known as Ihara's Theorem, also called the Ihara zeta identity, which relates an operator indexed by the directed edge set of a graph to an operator indexed by the vertex set of the graph. Ihara's Theorem was first considered in the study of number theoretic zeta functions on graphs, and was first proved for regular graphs by Ihara in 1966 (see [9]). Numerous other proofs have been given since, along with generalizations to irregular graphs, by Hashimoto ([8], 1989), Bass ([3], 1992), Stark and Terras ([15], 1996), Kotani and Sunada ([10], 2000), and others. We will give an elementary proof of Ihara's Theorem that, to our knowledge, is original. In addition, we follow ideas similar to those in [10] to obtain a version of Ihara's Theorem with weights that allows us to study the relevant transition probability matrices for random walks.
The remainder of this paper is organized as follows. In section 2, we give the necessary background and preliminary information on random walks, and develop the corresponding theory for non-backtracking walks, including the convergence of a non-backtracking walk to a stationary distribution for a general graph. We accomplish this via walks on the directed edges of a graph. We also investigate bounds obtained from the normalized Laplacian for a directed graph, and we give the relevant background on Ihara's Theorem, along with a new elementary proof. In section 3, we prove our weighted version of Ihara's formula. Finally, in section 4, we use this formula to obtain the spectrum of the transition probability matrix for a non-backtracking random walk for regular and biregular graphs. This gives a new proof of the result of Alon et al. concerning the mixing rate of a non-backtracking random walk on a regular graph, and generalizes this result to the class of biregular graphs.

Random walks
Throughout this paper, we will let G = (V, E) denote a graph with vertex set V and (undirected) edge set E, and we will let n = |V| and m = |E|. A random walk on a graph is a sequence (v_0, v_1, ..., v_k) of vertices v_i ∈ V where v_i is chosen uniformly at random among the neighbors of v_{i−1}. Random walks on graphs are well-studied, and considerable literature exists about them. See in particular [5] and [13] for good surveys, especially in the use of spectral techniques in studying random walks on graphs.
The adjacency matrix A of G is the n × n matrix with rows and columns indexed by V given by
A(u, v) = 1 if u ∼ v, and A(u, v) = 0 otherwise.
It is a well-known fact that the (u, v) entry of A^k is the number of walks of length k starting at vertex u and ending at vertex v. Define D to be the n × n diagonal matrix with rows and columns indexed by V with D(v, v) = d_v, where d_v denotes the degree of vertex v. A random walk on a graph G is a Markov process with transition probability matrix P = D^{−1}A, so
P(u, v) = 1/d_u if u ∼ v, and P(u, v) = 0 otherwise.
Given any starting probability distribution f_0 on the vertex set V, the resulting expected distribution f_k after applying k random walk steps is given by f_k = f_0 P^k. Here we are considering f_0 and f_k as row vectors in R^n. Note that, in general, P is not symmetric for an irregular graph, but it is similar to the symmetric matrix D^{−1/2}AD^{−1/2}. Thus, the eigenvalues of P are real, and if we order them as µ_1 ≥ µ_2 ≥ ··· ≥ µ_n, then it is easy to see that µ_1 = 1 with eigenvector 1, and µ_n ≥ −1. By Perron-Frobenius theory, if the matrix P is irreducible, then we have that µ_2 < 1, and if P is aperiodic, then µ_n > −1. The matrix P being irreducible and aperiodic corresponds to the graph G being connected and non-bipartite.
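These facts are easy to check numerically. The following sketch (using NumPy, on a small example graph of our own choosing — a 5-cycle with a chord, which is connected and non-bipartite; it is not an example from the paper) builds P = D^{−1}A and confirms that its spectrum is real with µ_1 = 1 and µ_n > −1.

```python
import numpy as np

# Example graph (ours, for illustration): a 5-cycle plus the chord {0, 2}.
# It is connected and non-bipartite, so P is irreducible and aperiodic.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
n = 5
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
d = A.sum(axis=1)                    # degree sequence d_v
P = A / d[:, None]                   # P = D^{-1} A, row-stochastic

# P is similar to the symmetric matrix D^{-1/2} A D^{-1/2},
# so its spectrum is real and can be computed with eigvalsh.
S = A / np.sqrt(np.outer(d, d))
mu = np.sort(np.linalg.eigvalsh(S))[::-1]

assert np.allclose(np.sort(np.linalg.eigvals(P).real), np.sort(mu))
assert np.isclose(mu[0], 1.0)        # mu_1 = 1
assert mu[-1] > -1.0                 # non-bipartite, so mu_n > -1
print(np.round(mu, 4))
```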
The stationary distribution for a random walk on G is given by
π(v) = d_v / vol(G), where vol(G) = Σ_{v∈V} d_v.
The stationary distribution has the important property that πP = π, so that a random walk with initial distribution π will stay at π at each step. An important fact about the stationary distribution is that if G is a connected graph that is not bipartite, then for any initial distribution f_0 on V(G), we have
lim_{k→∞} f_0 P^k(v) = π(v) for all v
(see [13]). Knowing that a random walk will converge to some stationary distribution, a fundamental question to consider is to determine how quickly the random walk approaches the stationary distribution, or in other words, to determine the mixing rate. In order to make this question precise, we need to consider how to measure the distance between two distribution vectors.
Several measures for defining the mixing rate of a random walk have been given (see [5]). Classically, the mixing rate is defined in terms of the pointwise distance (see [13]). That is, the mixing rate is
ρ = lim sup_{k→∞} max_{u,v} |P^k(u, v) − π(v)|^{1/k}.
Note that a small mixing rate corresponds to fast mixing. Alternatively, the mixing rate can be considered in terms of the standard L² (Euclidean) norm, the relative pointwise distance, the total variation distance, or the χ-squared distance. In general, these measures can yield different distances, but spectral bounds on the mixing rate are essentially the same for each. See [5] for a detailed comparison of each. For our purposes, we will primarily be concerned with the χ-squared distance, which will be defined below.
The mixing rate of a random walk is directly related to the eigenvalues of P .
Theorem 1 (Corollary 5.2 of [13]). Let G be a connected non-bipartite graph with transition probability matrix P, and let the eigenvalues of P be 1 = µ_1 ≥ µ_2 ≥ ··· ≥ µ_n > −1. Then the mixing rate is max{µ_2, |µ_n|}.
Thus, the smaller the nontrivial eigenvalues of P are in absolute value, the faster the random walk converges to its stationary distribution.

Non-backtracking random walks
A non-backtracking random walk on G is a sequence (v_0, v_1, ..., v_k) of vertices v_i ∈ V where v_{i+1} is chosen randomly among the neighbors of v_i such that v_{i+1} ≠ v_{i−1} for i = 1, ..., k − 1. In other words, a non-backtracking random walk is a random walk in which a step is not allowed to go back to the immediately previous state. A non-backtracking random walk on a graph is not a Markov chain since, in any given state, we need to remember the previous step in order to take the next step. In order for this to be well-defined, we assume throughout the remainder of the paper that the minimum degree of G is at least 2. Define P^{(k)} to be the n × n transition probability matrix for a k-step non-backtracking random walk on the vertices. That is, P^{(k)}(u, v) is the probability that a non-backtracking random walk starting at vertex u ends up at vertex v after k steps. Note that P^{(1)} = P, where P = D^{−1}A is the transition matrix for an ordinary random walk on G. However, P^{(k)} is not simply P^k since a non-backtracking random walk is not a Markov chain.
This process can be turned into a Markov chain, however, by changing the state space from the vertices of the graph to the directed edges of the graph. That is, replace each edge in E with two directed edges (one in each direction). Then the non-backtracking random walk is a sequence of directed edges (e_1, e_2, ..., e_k) where if e_i = (v_j, v_k) and e_{i+1} = (v_r, v_s), then v_k = v_r and v_s ≠ v_j. That is, the non-backtracking condition restricts the walk from moving from an edge to the edge going in the opposite direction. Denote the set of directed edges by E⃗. The transition probability matrix for this process we will call P̃. Observe that
P̃((u, v), (x, y)) = 1/(d_v − 1) if v = x and y ≠ u, and P̃((u, v), (x, y)) = 0 otherwise.
Note that P̃ is a 2m × 2m matrix. Note also that P̃^k is the transition matrix for a walk with k steps on the directed edges.
Lemma 1. Given any graph G, the matrix P̃ as defined above is doubly stochastic.
Proof. Observe first that the rows of the matrix P̃ sum to 1, as it is a transition probability matrix. In addition, the columns of P̃ sum to 1. To see this, consider the column indexed by the directed edge (u, v).
The entry of this column in the row indexed by (x, y) is 1/(d_u − 1) if y = u and x ≠ v; otherwise, the entry is 0. Thus the column sum is (d_u − 1) · 1/(d_u − 1) = 1.

Define the distribution π̃ : E⃗ → R by π̃ = (1/2m)1, where 1 is the vector of length 2m with each entry equal to 1.
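Lemma 1 can be verified numerically. The sketch below (our own construction on an arbitrary example graph with minimum degree 2, not a computation from the paper) builds the 2m × 2m matrix P̃ indexed by directed edges and checks that both its row sums and its column sums equal 1.

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]  # minimum degree 2
n = 5
A = np.zeros((n, n), dtype=int)
for u, v in edges:
    A[u, v] = A[v, u] = 1
deg = A.sum(axis=1)

darts = [(u, v) for u in range(n) for v in range(n) if A[u, v]]  # 2m directed edges
idx = {e: i for i, e in enumerate(darts)}

Pt = np.zeros((len(darts), len(darts)))
for (u, v) in darts:
    for w in range(n):
        if A[v, w] and w != u:       # allowed step (u,v) -> (v,w): no backtracking
            Pt[idx[(u, v)], idx[(v, w)]] = 1.0 / (deg[v] - 1)

assert Pt.shape == (2 * len(edges), 2 * len(edges))
assert np.allclose(Pt.sum(axis=1), 1.0)   # rows sum to 1: transition matrix
assert np.allclose(Pt.sum(axis=0), 1.0)   # columns sum to 1: Lemma 1
print("P~ is doubly stochastic")
```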

Lemma 2. Let f̃_0 : E⃗ → R be any distribution on the directed edges of G. If the matrix P̃ is irreducible and aperiodic, then lim_{k→∞} f̃_0 P̃^k = π̃.

Proof. It follows from Lemma 1 that π̃ is a stationary distribution for P̃. This follows because, since the columns of P̃ sum to 1, we have π̃P̃ = π̃.
Therefore, if the sequence f̃_0 P̃^k converges, it must converge to π̃. Now, P̃ being irreducible and aperiodic are precisely the conditions for this convergence.
Let f be a probability distribution on the vertices of G. Then f can be turned into a distribution f̃ on E⃗ by setting f̃(u, v) = f(u)/d_u. Conversely, given a distribution g̃ on E⃗, define a distribution g on the vertices by g(u) = Σ_{v∼u} g̃(u, v). Thus, given any starting distribution f_0 : V → R on the vertex set of G, we can compute the distribution after k non-backtracking random walk steps f_k : V → R as follows. First compute the distribution f̃_0 on the directed edges as above, then compute f̃_k = f̃_0 P̃^k; then f_k is given by f_k(u) = Σ_{v∼u} f̃_k(u, v). The following theorem tells us that this converges to the same stationary distribution as an ordinary random walk on a graph.

Theorem 2. Given a graph G and a starting distribution f_0 : V → R on the vertices of G, define f_k = f_0 P^{(k)} to be the distribution on the vertices after k non-backtracking random walk steps. Define the distribution π : V → R by π(v) = d_v/vol(G) (note that this is the stationary distribution for an ordinary random walk on G). If the matrix P̃ is irreducible and aperiodic, then for any starting distribution f_0 on V, we have lim_{k→∞} f_k = π.

Proof. As described above, take the distribution f_0 on vertices to the corresponding distribution f̃_0 on directed edges, and define f̃_k = f̃_0 P̃^k. By Lemma 2, f̃_k converges to π̃. Now π̃ = (1/vol(G))1, and observe that
Σ_{v∼u} π̃(u, v) = d_u/vol(G) = π(u).
So pulling the distribution π̃ on directed edges back to a distribution on the vertices yields π. Thus the result follows.
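The convergence in Theorem 2 can be observed directly. Continuing with the same small example graph (our own illustration), the sketch below lifts a point mass at vertex 0 to the directed edges via f̃(u, v) = f(u)/d_u, iterates f̃_k = f̃_0 P̃^k, pulls back to the vertices, and compares with π(v) = d_v/vol(G).

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
n = 5
A = np.zeros((n, n), dtype=int)
for u, v in edges:
    A[u, v] = A[v, u] = 1
deg = A.sum(axis=1)
darts = [(u, v) for u in range(n) for v in range(n) if A[u, v]]
idx = {e: i for i, e in enumerate(darts)}
Pt = np.zeros((len(darts), len(darts)))
for (u, v) in darts:
    for w in range(n):
        if A[v, w] and w != u:
            Pt[idx[(u, v)], idx[(v, w)]] = 1.0 / (deg[v] - 1)

f0 = np.zeros(n)
f0[0] = 1.0                                          # start at vertex 0
ft = np.array([f0[u] / deg[u] for (u, v) in darts])  # lift: f~(u,v) = f(u)/d_u
for _ in range(500):
    ft = ft @ Pt                                     # f~_k = f~_0 P~^k
fk = np.zeros(n)
for (u, v), i in idx.items():
    fk[u] += ft[i]                                   # pull back: f_k(u) = sum_{v~u} f~_k(u,v)

pi = deg / deg.sum()                                 # pi(v) = d_v / vol(G)
assert np.allclose(fk, pi, atol=1e-6)
print(np.round(fk, 6))
```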
Definition 1. The χ-squared distance for measuring convergence of a random walk is defined by
Δ(k) = max_u (Σ_v (P^k(u, v) − π(v))² / π(v))^{1/2}.

Theorem 3. Let µ_1, µ_2, ..., µ_{2m} be the eigenvalues of P̃. Then the convergence rate for the non-backtracking random walk with respect to the χ-squared distance is bounded above by max_{i≠1} |µ_i|.
Proof. Let χ_u denote an initial distribution concentrated at u. We have χ_u P̃^k − π̃ = (χ_u − π̃)P̃^k. Observe that χ_u − π̃ is orthogonal to π̃, which is the eigenvector for µ_1, so expanding χ_u − π̃ in terms of the remaining eigenvectors, we see that each application of P̃ contracts it by a factor of at most max_{i≠1} |µ_i|, and the bound follows.

Non-backtracking Walks as Walks on a Directed Graph
The transition probability matrix P̃ for the walk on directed edges can be thought of as a transition matrix for a random walk on a directed line graph of the graph G. In this way, theory for random walks on directed graphs can be applied to analyze non-backtracking random walks. Random walks on directed graphs have been studied by Chung in [4] by way of a directed version of the normalized graph Laplacian matrix. In [4], the Laplacian for a directed graph is defined as follows. Let P be the transition probability matrix for a random walk on the directed graph, and let φ be its Perron vector, that is, φP = φ. Let Φ be the diagonal matrix with the entries of φ along the diagonal. Then the Laplacian for the directed graph is defined as
L = I − (Φ^{1/2} P Φ^{−1/2} + Φ^{−1/2} P^* Φ^{1/2})/2.
This produces a symmetric matrix that thus has real eigenvalues. Those eigenvalues are then related to the convergence rate of a random walk on the directed graph. In particular, the convergence rate is bounded above by 2λ_1^{−1}(−log min_x φ(x)), where λ_1 is the second smallest eigenvalue of L (see Theorem 7 of [4]). Applying this now to non-backtracking random walks, define P̃ as before. Then, as seen above, φ is the constant vector with φ(e) = 1/vol(G) for every directed edge e. The directed Laplacian for a non-backtracking walk therefore becomes
L̃ = I_{2m} − (P̃ + P̃^*)/2.
Then Theorem 1 of [4], applied to the matrix L̃ as defined, gives the Rayleigh quotient for a function f : E⃗ → R:
R̃(f) = (Σ_{e→e′} P̃(e, e′)|f(e) − f(e′)|²) / (2 Σ_e |f(e)|²),
where the sum in the numerator ranges over pairs of directed edges e, e′ with P̃(e, e′) > 0. From this it is clear that L̃ is positive semidefinite with smallest eigenvalue λ̃_0 = 0, achieved by the constant function. If 0 = λ̃_0 ≤ λ̃_1 ≤ ··· ≤ λ̃_{2m−1} are the eigenvalues of L̃, then Theorem 7 from [4] implies that the convergence rate for the corresponding random walk is bounded above by 2λ̃_1^{−1} log vol(G). We remark that for an ordinary random walk on an undirected graph G, the convergence rate is also on the order of 1/λ_1(L), where L now denotes the normalized Laplacian of the undirected graph G. Note that
R(f) = (Σ_{u∼v} (f(u) − f(v))²) / (Σ_v f(v)² d_v),
where R denotes the Rayleigh quotient with respect to L, for comparison with R̃ given above.
The following result shows that the Laplacian bound does not give an improvement for non-backtracking random walks over ordinary random walks.
Proposition 1. Let G be any graph, and let L be the normalized graph Laplacian and L̃ the non-backtracking Laplacian defined above. Then we have λ_1(L̃) ≤ λ_1(L).
Proof. Let f : V(G) → R be the function orthogonal to D1 that achieves the minimum in the Rayleigh quotient for L, so that R(f) = λ_1(L). Define f̃ : E⃗ → R by f̃(u, v) = f(u). Then Σ_e f̃(e) = Σ_u d_u f(u) = 0, so f̃ is orthogonal to the constant vector, and a direct computation shows that R̃(f̃) = R(f). Therefore λ_1(L̃) ≤ R̃(f̃) = R(f) = λ_1(L).

Ihara's Theorem
The transition probability matrix P̃ defined above is a weighted version of an important matrix that comes up in the study of zeta functions on finite graphs. We define B to be the 2m × 2m matrix with rows and columns indexed by the set of directed edges of G as follows:
B((u, v), (x, y)) = 1 if v = x and y ≠ u, and B((u, v), (x, y)) = 0 otherwise.
The matrix B can be thought of as a non-backtracking edge adjacency matrix, and the entries of B^k describe the number of non-backtracking walks of length k from one directed edge to another, in the same way that the entries of powers of the adjacency matrix, A^k, count the number of walks of length k from one vertex to another. The expression det(I − uB) is closely related to zeta functions on finite graphs. A result known as Ihara's Theorem further relates such zeta functions to a determinant expression involving the adjacency matrix. While we will not go into zeta functions on finite graphs in this paper, the following result equivalent to Ihara's theorem will be of interest to us.
Ihara's Theorem. For a graph G on n vertices and m edges, let B be the matrix defined above, let A denote the adjacency matrix, D the diagonal degree matrix, and I the identity. Then
det(I − uB) = (1 − u²)^{m−n} det(I − uA + u²(D − I)).

We remark that the expression det(I − uB) is the characteristic polynomial of B evaluated at 1/u. In this way the complete spectrum of the matrix B is given by the reciprocals of the roots of the polynomial (1 − u²)^{m−n} det(I − uA + u²(D − I)). Numerous proofs of this result exist in the literature [9, 8, 3, 15, 10]. For completeness, we will include here an elementary proof that uses only basic linear algebra. To the knowledge of the author, this proof is original. To begin, we will need a lemma giving a well-known property of determinants.

Lemma 3. Let M be a k × l matrix, N an l × k matrix, and A an invertible k × k matrix. Then
det(A + MN) = det(A) det(I_l + NA^{−1}M).

Proof. Note that
[[A, −M], [N, I_l]] [[I_k, 0], [−N, I_l]] = [[A + MN, −M], [0, I_l]],
and by the Schur complement, det [[A, −M], [N, I_l]] = det(A) det(I_l + NA^{−1}M). Taking determinants of both sides gives the result.
Proof of Ihara's Theorem. Define S to be the 2m × n matrix with S((u, v), x) = 1 if x = v and S((u, v), x) = 0 otherwise, so S is the endpoint incidence operator. Define T to be the n × 2m matrix given by T(x, (u, v)) = 1 if x = u and T(x, (u, v)) = 0 otherwise, so T is the starting point incidence operator. We will also define τ to be the 2m × 2m matrix giving the reversal operator that switches a directed edge with its opposite. That is, τ((u, v), (x, y)) = 1 if x = v and y = u, and τ((u, v), (x, y)) = 0 otherwise. Now, a straightforward computation verifies that
B = ST − τ, (1)
TS = A, (2)
TτS = D. (3)
From (1), we have I − uB = (I + uτ) − uST, so by Lemma 3,
det(I − uB) = det(I + uτ) det(I − uT(I + uτ)^{−1}S),
where u is chosen so that the matrix I + uτ is invertible.
Observe that τ² = I, so that (I − uτ)(I + uτ) = (1 − u²)I, so (I + uτ)^{−1} = (1/(1 − u²))(I − uτ). Thus, applying (2) and (3), the above becomes
det(I − uB) = det(I + uτ) det(I − (u/(1 − u²))(A − uD)) = (1 − u²)^{m−n} det((1 − u²)I − uA + u²D) = (1 − u²)^{m−n} det(I − uA + u²(D − I)),
where the second step is obtained by observing that det(I + uτ) = (1 − u²)^m. This is the desired equality for our choice of u. Both sides are polynomials of finite degree in u, and there are infinitely many u that make I + uτ invertible, so the equality holds for all u.
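Since both sides of Ihara's Theorem are polynomials in u, the identity is easy to test numerically at sample points. The sketch below (our own check, reusing the small example graph from the earlier snippets) builds the non-backtracking edge adjacency matrix B and compares the two determinants.

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
n, m = 5, 6
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
D = np.diag(A.sum(axis=1))
darts = [(u, v) for u in range(n) for v in range(n) if A[u, v]]
idx = {e: i for i, e in enumerate(darts)}
B = np.zeros((2 * m, 2 * m))
for (u, v) in darts:
    for w in range(n):
        if A[v, w] and w != u:        # B((u,v),(v,w)) = 1 for non-backtracking steps
            B[idx[(u, v)], idx[(v, w)]] = 1.0

checks = []
for u_ in (0.13, -0.4, 0.71):         # arbitrary sample values of u
    lhs = np.linalg.det(np.eye(2 * m) - u_ * B)
    rhs = (1 - u_**2) ** (m - n) * np.linalg.det(
        np.eye(n) - u_ * A + u_**2 * (D - np.eye(n)))
    checks.append(np.isclose(lhs, rhs))
assert all(checks)
print("Ihara identity holds at the sample points")
```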

A weighted Ihara's theorem
In this section, we will give a weighted version of Ihara's Theorem. The proof presented in the previous section does not lend itself well to generalization to the weighted setting, so we will not follow that strategy. Rather, we will follow the main ideas of the proof of Ihara's theorem found in [10] to obtain our weighted version of this result.
To each vertex x ∈ V(G) we assign a weight w(x) ≠ 0, and let W be the n × n diagonal matrix given by W(x, x) = w(x). Define S and T to be the matrices from the proof of Ihara's Theorem in the previous section, and define S̃ = SW and T̃ = WT. So S̃ is the weighted version of the endpoint vertex-edge incidence operator, and T̃ is the weighted version of the starting point vertex-edge incidence operator. Recall τ from the proof of Ihara's Theorem, and define τ̃ to be the weighted version of τ, that is,
τ̃((u, v), (v, u)) = w(v)², with all other entries 0.
Finally, define the 2m × 2m matrix P̃ by
P̃((u, v), (x, y)) = w(v)² if v = x and y ≠ u, and P̃((u, v), (x, y)) = 0 otherwise. (4)
Then P̃ is the weighted version of the non-backtracking edge adjacency matrix B seen above in Ihara's theorem, with w(b)² the weight on each transition out of a directed edge (a, b). We remark that if we take w(x) = 1/√(d_x − 1) for each x ∈ V(G), then P̃ is exactly the transition probability matrix for a non-backtracking random walk on the directed edges of G defined in Section 2.2. This case is our primary focus, but we note that our computations apply for arbitrary positive weights assigned to the vertices. Now, a straightforward computation verifies that
P̃ = S̃T̃ − τ̃ (5)
and
T̃S̃ = WAW. (6)
We will define Ã = WAW. Note that Ã(u, v) = w(u)w(v) if u ∼ v and Ã(u, v) = 0 otherwise, so this is the adjacency matrix for the weighted graph with edge weights w(u)w(v). The matrix Ã is similar to W²A, so when w(x) = 1/√(d_x − 1), it is similar to the matrix whose entries are the transition probabilities for a single step of a non-backtracking random walk on G.
From (5) and (6) we obtain the following equation:
(I − uP̃)(I − uτ̃) + u²τ̃² = I − uS̃T̃(I − uτ̃). (7)
We define D̃ to be the diagonal n × n matrix with D̃(x, x) = Σ_{v∼x} w(x)²w(v)², and observe that
T̃τ̃S̃ = D̃. (8)
It then follows from (6) and (8) that
T̃(I − uτ̃)S̃ = Ã − uD̃. (9)
We remark that in the proof in [10], the unweighted versions of each of these matrices are used, so τ rather than τ̃, which yields τ² = I. Hence S and T factor through τ², so that the u²τ² term stays on the right hand side of the above equations. Here, instead, τ̃² is a 2m × 2m diagonal matrix with τ̃²((u, v), (u, v)) = w(u)²w(v)². Depending on the w(u)'s, this matrix might not behave nicely with respect to the action of S̃ and T̃, hence the extra terms that need to stay on the left-hand side above. This difference from [10] is one of the primary difficulties in generalizing this result.
We will now perform a change of basis to see how the operator (I − uP̃)(I − uτ̃) + u²τ̃² behaves with respect to the decomposition of the space of functions f : E⃗ → C as the direct sum of Image S̃ and Ker S̃ᵀ.
To this end, fix any basis of the subspace Ker S̃ᵀ, and let R be the 2m × (2m − n) matrix whose columns are the vectors of that basis (note that S̃ has rank n). Define M = [S̃ R]. This will be our change of basis matrix. To obtain the inverse of M, form the matrix N whose first n rows are (S̃ᵀS̃)^{−1}S̃ᵀ and whose remaining 2m − n rows are (RᵀR)^{−1}Rᵀ, and observe that NM = I, since S̃ᵀR = 0 and RᵀS̃ = 0. Therefore we have N = M^{−1}. Applying this change of basis, direct computation, applying (7) and (9), yields
M^{−1}((I − uP̃)(I − uτ̃) + u²τ̃²)M = [[I − uÃ + u²D̃, −uT̃(I − uτ̃)R], [0, I_{2m−n}]]. (11)
Therefore, the matrix (I − uP̃)(I − uτ̃) + u²τ̃² is similar to the block upper triangular matrix on the right-hand side of (11), so they have the same determinant. Thus, we have proven a weighted version of Ihara's Theorem, which we state as the following.
Theorem 4. Let G be a graph on n vertices and m edges, with an arbitrary positive weight w(x) > 0 assigned to each vertex x. Let P̃ be the 2m × 2m weighted non-backtracking edge adjacency matrix with weight w(v)² on the transitions out of each directed edge (u, v), as defined in (4). Let Ã be the weighted n × n adjacency matrix with edge weight w(u)w(v) assigned to each edge. Let τ̃ be the weighted reversal operator defined above, and D̃ the n × n diagonal matrix with D̃(x, x) = Σ_{v∼x} w(x)²w(v)² as defined above. Then we have
det((I − uP̃)(I − uτ̃) + u²τ̃²) = det(I − uÃ + u²D̃).
As a corollary to the decomposition in equation (11), if we take w(x) = 1 for all x, then τ̃² = I, and the usual unweighted Ihara's Theorem falls out immediately.
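Theorem 4 can likewise be tested numerically for arbitrary positive weights. The sketch below (our own check; the example graph and the random weights are illustrative assumptions) builds S̃, T̃, τ̃, P̃, Ã, and D̃ exactly as defined above and compares the two determinants at sample points.

```python
import numpy as np

rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
n, m = 5, 6
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
darts = [(u, v) for u in range(n) for v in range(n) if A[u, v]]
idx = {e: i for i, e in enumerate(darts)}

w = rng.uniform(0.5, 2.0, size=n)     # arbitrary positive vertex weights
W = np.diag(w)
S = np.zeros((2 * m, n))              # endpoint incidence operator
T = np.zeros((n, 2 * m))              # starting point incidence operator
tau = np.zeros((2 * m, 2 * m))        # weighted reversal operator
for (u, v), i in idx.items():
    S[i, v] = 1.0
    T[u, i] = 1.0
    tau[i, idx[(v, u)]] = w[v] ** 2
St, Tt = S @ W, W @ T
Pt = St @ Tt - tau                    # weighted non-backtracking matrix, (4)/(5)
At = W @ A @ W                        # weighted adjacency matrix
Dt = np.diag([w[x] ** 2 * sum(w[v] ** 2 for v in range(n) if A[x, v])
              for x in range(n)])     # D~(x,x) = sum_{v~x} w(x)^2 w(v)^2

I2m, In = np.eye(2 * m), np.eye(n)
checks = []
for u_ in (0.2, -0.35, 0.8):
    lhs = np.linalg.det((I2m - u_ * Pt) @ (I2m - u_ * tau) + u_**2 * (tau @ tau))
    rhs = np.linalg.det(In - u_ * At + u_**2 * Dt)
    checks.append(np.isclose(lhs, rhs))
assert all(checks)
print("weighted Ihara identity holds at the sample points")
```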
If we take w(x) = 1/√(d_x − 1), then Ã(u, v) = 1/√((d_u − 1)(d_v − 1)) for u ∼ v. This is clearly similar to the matrix (D − I)^{−1}A. So in this case Ã is similar to the matrix whose entries are the transition probabilities for a single step in a non-backtracking random walk. (Note, however, that (D − I)^{−1}A is not the transition probability matrix for a non-backtracking random walk.)

The mixing rate of non-backtracking random walks

An alternate proof for regular graphs
Applying the results of the previous section to regular graphs yields a different proof of the results from [1] on the mixing rate of non-backtracking random walks on regular graphs.
Let G be a regular graph where each vertex has degree d. Then choosing w(x) = 1/√(d − 1) for all x gives us that P̃ is the transition probability matrix for the non-backtracking random walk on G. We remark that, from the previous section, we have τ̃ = (1/(d − 1))τ and τ̃² = (1/(d − 1)²)I. Therefore, the decomposition in (11) becomes
det(I − uP̃) det(I − uτ̃) = (1 − u²/(d − 1)²)^{2m−n} det(I − (u/(d − 1))A + (u²/(d − 1))I).
Noting that τ̃ can be thought of as block diagonal with m blocks of the form [[0, 1/(d − 1)], [1/(d − 1), 0]], so that det(I − uτ̃) = (1 − u²/(d − 1)²)^m, then taking determinants, we find that
det(I − uP̃) = (1 − u²/(d − 1)²)^{m−n} ∏_{i=1}^{n} (1 − (λ_i/(d − 1))u + (1/(d − 1))u²),
where the product ranges over all the eigenvalues λ_i of the adjacency matrix A for i = 1, ..., n. As remarked previously, the left hand side det(I − uP̃) is the characteristic polynomial of P̃ evaluated at 1/u, so from this we obtain the spectrum of P̃.
Theorem 5. Let G be a d-regular graph with m edges and n vertices, and let P̃ be the 2m × 2m transition probability matrix for a non-backtracking random walk as defined above. Then the eigenvalues of P̃ are
(λ_i ± √(λ_i² − 4(d − 1))) / (2(d − 1)),
where λ_i ranges over the eigenvalues of the adjacency matrix A, together with ±1/(d − 1), each with multiplicity m − n.
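As a check on Theorem 5, the sketch below (our own example, not from the paper) computes the spectrum of P̃ for the complete graph K_4, a 3-regular graph, and compares it with the predicted multiset {(λ_i ± √(λ_i² − 4(d − 1)))/(2(d − 1))} together with ±1/(d − 1), each with multiplicity m − n.

```python
import numpy as np

n, d = 4, 3                           # K_4 is 3-regular
A = np.ones((n, n)) - np.eye(n)
m = n * d // 2                        # m = 6 edges
darts = [(u, v) for u in range(n) for v in range(n) if u != v]
idx = {e: i for i, e in enumerate(darts)}
Pt = np.zeros((2 * m, 2 * m))
for (u, v) in darts:
    for w in range(n):
        if w != v and w != u:         # neighbors of v other than u
            Pt[idx[(u, v)], idx[(v, w)]] = 1.0 / (d - 1)

pred = []
for lam in np.linalg.eigvalsh(A):     # eigenvalues of A: -1, -1, -1, 3
    r = np.sqrt(complex(lam * lam - 4 * (d - 1)))
    pred += [(lam + r) / (2 * (d - 1)), (lam - r) / (2 * (d - 1))]
pred += [1.0 / (d - 1)] * (m - n) + [-1.0 / (d - 1)] * (m - n)

key = lambda z: (round(z.real, 6), round(z.imag, 6))
got_s = np.array(sorted(np.linalg.eigvals(Pt), key=key))
pred_s = np.array(sorted(np.array(pred, dtype=complex), key=key))
assert np.allclose(got_s, pred_s, atol=1e-6)
print(np.round(got_s, 4))
```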
From this we obtain the result from [1].
Corollary 1. Let G be a non-bipartite, connected d-regular graph on n vertices for d ≥ 3, and let ρ and ρ̃ denote the mixing rates of the simple and non-backtracking random walk on G, respectively. Let λ be the second largest eigenvalue of the adjacency matrix of G in absolute value. Then
ρ̃ = (λ + √(λ² − 4(d − 1))) / (2(d − 1)) if λ ≥ 2√(d − 1), and ρ̃ = 1/√(d − 1) if λ < 2√(d − 1),
and in both cases ρ̃ ≤ ρ.
Proof. We remark that the expression
(λ + √(λ² − 4(d − 1))) / (2(d − 1))
is precisely the expression derived by Alon et al. in [1] for the mixing rate of a non-backtracking random walk on a regular graph, and we may proceed with the analysis of the convergence rate in the same way they do. The convergence rate is given by the second largest eigenvalue of P̃ in absolute value, which is obtained by setting λ to be the second largest eigenvalue of A in absolute value. Let µ be this eigenvalue.
For λ ≥ 2√(d − 1), µ is real, and a direct computation shows that µ ≤ λ/d. Since λ/d is the second largest eigenvalue of the transition probability matrix P for the usual walk, the first case follows.
For λ < 2√(d − 1), µ is complex, and we obtain |µ| = 1/√(d − 1). We remark that in this case that λ < 2√(d − 1), a classic result of Nilli ([14]) related to the Alon-Boppana Theorem implies that we are never too far below this bound. Indeed, the result states that if G is d-regular with diameter at least 2(k + 1), then λ ≥ 2√(d − 1) − (2√(d − 1) − 1)/(k + 1). With the restriction that d = n^{o(1)}, the diameter grows with n, so λ cannot fall far below 2√(d − 1), and the second case follows.
Biregular graphs

A graph G is called (c, d)-biregular if it is bipartite and each vertex in one part of the bipartition has degree c, and each vertex of the other part has degree d. Taking w(x) = 1/√(d_x − 1) in the weighted Ihara's Theorem, we have τ̃²((u, v), (u, v)) = 1/((d_u − 1)(d_v − 1)), so when G is (c, d)-biregular, τ̃² = (1/((c − 1)(d − 1)))I. Since τ̃² is a multiple of the identity, as with regular graphs, the u²τ̃² term in the decomposition (11) can be taken to the other side of the equation, yielding
det(I − uP̃) det(I − uτ̃) = (1 − u²/((c − 1)(d − 1)))^{2m−n} det(I − uÃ + u²(D̃ − τ̃²)).
Note that D̃ is diagonal with D̃(u, u) = Σ_{v∼u} 1/((d_u − 1)(d_v − 1)), which equals c/((c − 1)(d − 1)) if u has degree c and d/((c − 1)(d − 1)) if u has degree d. Hence D̃ − τ̃² is diagonal with entry 1/(d − 1) at vertices of degree c and entry 1/(c − 1) at vertices of degree d. Note also that τ̃ is similar to a block diagonal matrix with blocks of the form [[0, 1/(c − 1)], [1/(d − 1), 0]], so taking the determinant we obtain det(I − uτ̃) = (1 − u²/((c − 1)(d − 1)))^m. We will look at the matrix I − uÃ + u²(D̃ − τ̃²). Suppose the part of the bipartition with degree c has size r, and the part with degree d has size s, where without loss of generality r ≥ s. Writing M for the r × s biadjacency matrix, so that A = [[0, M], [Mᵀ, 0]] is the adjacency matrix of G and Ã = A/√((c − 1)(d − 1)), we have
I − uÃ + u²(D̃ − τ̃²) = [[(1 + u²/(d − 1))I_r, −uM/√((c − 1)(d − 1))], [−uMᵀ/√((c − 1)(d − 1)), (1 + u²/(c − 1))I_s]].
By row reduction, this has the same determinant as a block triangular matrix whose lower right block involves the matrix MᵀM, and the determinant is given by the product of the eigenvalues of that matrix. Observe that if λ is an eigenvalue of the adjacency matrix A, then λ² is an eigenvalue of MᵀM. Therefore, in all we have
det(I − uÃ + u²(D̃ − τ̃²)) = (1 + u²/(d − 1))^{r−s} ∏_{i=1}^{s} [(1 + u²/(d − 1))(1 + u²/(c − 1)) − u²λ_i²/((c − 1)(d − 1))],
where the product ranges over the s largest eigenvalues λ_i of A (in other words, λ_i² ranges over the s eigenvalues of MᵀM). Therefore the characteristic polynomial is given by
det(I − uP̃) = (1 − u²/((c − 1)(d − 1)))^{m−n} (1 + u²/(d − 1))^{r−s} ∏_{i=1}^{s} [(1 + u²/(d − 1))(1 + u²/(c − 1)) − u²λ_i²/((c − 1)(d − 1))].
Thus we can explicitly obtain the eigenvalues of P̃.
Theorem 6. Let G be a (c, d)-biregular graph, let the part with degree c have size r, and the part with degree d have size s, and assume without loss of generality that r ≥ s. Suppose G has n vertices and m edges. Then the eigenvalues of the non-backtracking transition probability matrix P̃ defined above are
±1/√((c − 1)(d − 1)), with multiplicity m − n each,
±(1/√(d − 1))i, with multiplicity r − s each,
as well as the 4 roots of the polynomial
(c − 1)(d − 1)µ⁴ + (c + d − 2 − λ_i²)µ² + 1
for each value of λ_i ranging over the s largest eigenvalues of the adjacency matrix A. These roots are
µ = ±√((λ_i² − c − d + 2 ± √((λ_i² − c − d + 2)² − 4(c − 1)(d − 1))) / (2(c − 1)(d − 1))).

We can now give a version of Corollary 1 for (c, d)-biregular graphs. Let µ equal the expression
µ = √((λ² − c − d + 2 + √((λ² − c − d + 2)² − 4(c − 1)(d − 1))) / (2(c − 1)(d − 1))), (12)
where λ is the second largest eigenvalue of A in absolute value, and consider the following cases.
If √(c − 1) + √(d − 1) ≤ λ ≤ √(cd), then µ is real. Direct computation verifies that evaluating the expression (12) at λ = √(cd) yields µ = 1 = λ/√(cd), and that µ < λ/√(cd) for λ in this range. Therefore, in this case the eigenvalue of P̃ always has smaller absolute value than the corresponding eigenvalue of P, implying ρ̃ ≤ ρ. The lower bound follows from (12) by ignoring the square root inside. Thus the first case follows.
If λ < √(c − 1) + √(d − 1), then µ is complex, and direct computation shows |µ| = 1/((c − 1)(d − 1))^{1/4}. A version of the Alon-Boppana Theorem exists for (c, d)-biregular graphs as well, proven by Feng and Li in [6] (see also [12]). Observe that the diameter is certainly at least log_{cd} n, so the condition on the degrees together with the Feng-Li bound implies that λ is never too far below √(c − 1) + √(d − 1); this gives the result for the second case.
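Theorem 6 can be checked in the same way as Theorem 5. The sketch below (our own example) uses the complete bipartite graph K_{2,3}, which is (2, 3)-biregular with r = 3, s = 2, n = 5, and m = 6, and compares the spectrum of P̃ with the multiset predicted by Theorem 6.

```python
import numpy as np

# K_{2,3}: vertices 0, 1 have degree 3 (the part of size s = 2);
# vertices 2, 3, 4 have degree 2 (the part of size r = 3); so c = 2, d = 3.
edges = [(0, 2), (0, 3), (0, 4), (1, 2), (1, 3), (1, 4)]
n, m, c, d, r, s = 5, 6, 2, 3, 3, 2
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
deg = A.sum(axis=1).astype(int)
darts = [(u, v) for u in range(n) for v in range(n) if A[u, v]]
idx = {e: i for i, e in enumerate(darts)}
Pt = np.zeros((2 * m, 2 * m))
for (u, v) in darts:
    for w in range(n):
        if A[v, w] and w != u:
            Pt[idx[(u, v)], idx[(v, w)]] = 1.0 / (deg[v] - 1)

pred = [1 / np.sqrt((c - 1) * (d - 1)), -1 / np.sqrt((c - 1) * (d - 1))] * (m - n)
pred += [1j / np.sqrt(d - 1), -1j / np.sqrt(d - 1)] * (r - s)
lam2 = np.sort(np.linalg.eigvalsh(A))[::-1][:s] ** 2   # squares of the s largest eigenvalues
for l2 in lam2:
    # quartic from Theorem 6: (c-1)(d-1) mu^4 + (c+d-2-lambda^2) mu^2 + 1 = 0
    for mu2 in np.roots([(c - 1) * (d - 1), c + d - 2 - l2, 1]):
        root = np.sqrt(complex(mu2))
        pred += [root, -root]

key = lambda z: (round(z.real, 6), round(z.imag, 6))
got_s = np.array(sorted(np.linalg.eigvals(Pt), key=key))
pred_s = np.array(sorted(np.array(pred, dtype=complex), key=key))
assert np.allclose(got_s, pred_s, atol=1e-6)
print(np.round(got_s, 4))
```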

Conclusion
We have looked at non-backtracking random walks from the point of view of walking along directed edges.
For the special cases of regular and biregular graphs, our weighted version of Ihara's Theorem (Theorem 4) has given us the complete spectrum of the transition probability matrix for the non-backtracking walk, allowing for easy comparison between the non-backtracking mixing rate and the mixing rate of the usual random walk. Clearly, it would be desirable to extend these results to more general classes of graphs. The difficulty in applying Theorem 4 directly is with the term involving τ̃². As seen in section 3, τ̃² is a 2m × 2m diagonal matrix with τ̃²((u, v), (u, v)) = 1/((d_u − 1)(d_v − 1)).
In the case of regular and biregular graphs, this expression is constant (we get 1/(d − 1)² and 1/((c − 1)(d − 1)) for the d-regular and (c, d)-biregular cases, respectively), making τ̃² simply a multiple of the identity. This allows the difficulty to be handled relatively easily. Regular and biregular graphs are in fact the only graphs for which τ̃² is a multiple of the identity, suggesting that these exact techniques will not work as nicely on more general classes of graphs. If a cleaner version of Theorem 4 could be proven, then, aside from being interesting in its own right, it could potentially be used to extend our results on non-backtracking random walks.
