
Inspired by previous work of Diaz, Petit, Serna, and Trevisan (Approximating layout problems on random graphs,
*Discrete Mathematics*, 235, 2001, 245-253), we show that several well-known graph layout problems are approximable to within a factor arbitrarily close to 1 of the optimal with high probability for random graphs drawn from an Erdös-Renyi distribution with appropriate sparsity conditions, using only elementary probabilistic analysis. Moreover, we show that the same results hold for the analogous problems on directed acyclic graphs.

Many well-known optimization problems on graphs fall into the category of graph layout problems. A layout of a graph on n vertices consists of a bijection between the vertices of the graph and the set { 1,2, ⋯ , n } of natural numbers, which can be interpreted as arranging the vertices of the graph in some order on a line. A graph layout problem then consists of optimizing some objective function over the set of possible layouts of a graph. There is an analogous notion of layout and layout problems for directed acyclic graphs, i.e. directed graphs with no directed cycles { ( v 1 , v 2 ) , ( v 2 , v 3 ) , ⋯ , ( v n , v 1 ) } where ( v i , v i + 1 ) is a directed edge from the vertex v i to v i + 1 with the indices taken modulo n. A layout of a directed acyclic graph is simply a topological sort of it, so that the layout respects edge directions. The particular layout problems we consider in this paper are Minimum Cut Linear Arrangement (also known as Cutwidth), Vertex Separation, Edge Bisection, and Vertex Bisection, along with the analogous problems on directed acyclic graphs. These problems are well known to find applications in VLSI design, job scheduling, parallel computing, graph drawing, etc. We direct the interested reader to a survey [

Graph layout problems are often computationally difficult to solve exactly. The decision versions of both the undirected and directed vertex separation problems are known to be NP-complete [

In this paper, in addition to showing that the constant of approximation can be improved to any C > 1 with slightly weaker sparsity and convergence results, we show that the same result holds for the directed versions of the problems which were not considered in [

We first recall some terminology in [

L ( i , ϕ , G ) = { u ∈ V | ϕ ( u ) ≤ i }

R ( i , ϕ , G ) = { u ∈ V | ϕ ( u ) > i }

and the two objective functions

θ ( i , ϕ , G ) = | { u v ∈ E | u ∈ L ( i , ϕ , G ) ∧ v ∈ R ( i , ϕ , G ) } | ,

δ ( i , ϕ , G ) = | { u ∈ L ( i , ϕ , G ) | ∃ v ∈ R ( i , ϕ , G ) ∧ u v ∈ E } | .

are defined. We may interpret θ ( i , ϕ , G ) as the number of edges lying over the i-th “gap” in the arrangement, i.e. edges whose left vertex is in at most the i-th position in the arrangement and whose right vertex is in at least the ( i + 1 ) -th position. Additionally, we may interpret δ ( i , ϕ , G ) as the number of vertices to the left of the i-th “gap” which are connected by an edge to some vertex to the right of the gap. (In this paper, we sometimes casually refer to L ( i , ϕ , G ) as the set on the left and R ( i , ϕ , G ) as the set on the right.)
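
The two objective functions can be computed directly from a layout. The following Python sketch (the helper names theta and delta are ours, and vertices are 0-indexed while positions run from 1 to n) illustrates the definitions on a path graph laid out in order, where every gap is crossed by exactly one edge.

```python
# Hypothetical helpers illustrating the definitions of theta and delta.
# phi maps each vertex to its position 1..n; edges is a list of pairs.
def theta(i, phi, edges):
    # number of edges crossing the i-th gap: one endpoint at position <= i,
    # the other at position > i
    return sum(1 for u, v in edges
               if min(phi[u], phi[v]) <= i < max(phi[u], phi[v]))

def delta(i, phi, edges):
    # number of vertices left of the i-th gap with a neighbour to the right
    left_endpoints = {u if phi[u] <= i else v
                      for u, v in edges
                      if min(phi[u], phi[v]) <= i < max(phi[u], phi[v])}
    return len(left_endpoints)

# Path graph 0-1-2-3 laid out in order.
phi = {0: 1, 1: 2, 2: 3, 3: 4}
edges = [(0, 1), (1, 2), (2, 3)]
print([theta(i, phi, edges) for i in range(1, 4)])  # [1, 1, 1]
print([delta(i, phi, edges) for i in range(1, 4)])  # [1, 1, 1]
```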

Let Λ G denote the set of linear arrangements of G. The problems we consider for undirected graphs are given in

We also consider analogous problems on directed graphs. Given a directed acyclic graph G = ( V , E ) with | V | = n , a linear arrangement (or layout) of G is a bijective function ϕ : V → { 1, ⋯ , n } such that if ( u , v ) ∈ E is a directed edge from u ∈ V to v ∈ V , then ϕ ( u ) < ϕ ( v ) . Note that this is simply a topological sort of G, which exists as G is directed acyclic. Again, we let Λ G denote the set of linear arrangements of G. The definitions of L ( i , ϕ , G ) , R ( i , ϕ , G ) , θ ( i , ϕ , G ) , and δ ( i , ϕ , G ) remain unchanged for directed acyclic graphs. The following problems are directed versions of the problems listed in

• Directed cutwidth (DCUTWIDTH): Given a directed acyclic graph G = ( V , E ) , compute

MINDCW ( G ) = min ϕ ∈ Λ G DCW ( G , ϕ ) ,

where DCW ( G , ϕ ) = max i ∈ { 1 , ⋯ , n } θ ( i , ϕ , G ) .

Name | Label | Problem | NP-hardness |
---|---|---|---|
Cutwidth | CUTWIDTH | Compute MINCW ( G ) = min ϕ ∈ Λ G CW ( G , ϕ ) , where CW ( G , ϕ ) = max i ∈ { 1 , ⋯ , n } θ ( i , ϕ , G ) . | Proved in [ |
Vertex separation | VERTSEP | Compute MINVS ( G ) = min ϕ ∈ Λ G VS ( G , ϕ ) , where VS ( G , ϕ ) = max i ∈ { 1 , ⋯ , n } δ ( i , ϕ , G ) . | Proved in [ |
Edge bisection | EDGEBIS | Compute MINEB ( G ) = min ϕ ∈ Λ G EB ( G , ϕ ) , where EB ( G , ϕ ) = θ ( ⌊ n / 2 ⌋ , ϕ , G ) . | Proved in [ |
Vertex bisection | VERTBIS | Compute MINVB ( G ) = min ϕ ∈ Λ G VB ( G , ϕ ) , where VB ( G , ϕ ) = δ ( ⌊ n / 2 ⌋ , ϕ , G ) . | Proved in [ |

• Directed vertex separation (DVERTSEP): Given a directed acyclic graph G = ( V , E ) , compute

MINDVS ( G ) = min ϕ ∈ Λ G DVS ( G , ϕ ) ,

where DVS ( G , ϕ ) = max i ∈ { 1 , ⋯ , n } δ ( i , ϕ , G ) .


• Directed edge bisection (DEDGEBIS): Given a directed acyclic graph G = ( V , E ) , compute

MINDEB ( G ) = min ϕ ∈ Λ G DEB ( G , ϕ ) ,

where DEB ( G , ϕ ) = θ ( ⌊ n / 2 ⌋ , ϕ , G ) .

• Directed vertex bisection (DVERTBIS): Given a directed acyclic graph G = ( V , E ) , compute

MINDVB ( G ) = min ϕ ∈ Λ G DVB ( G , ϕ ) ,

where DVB ( G , ϕ ) = δ ( ⌊ n / 2 ⌋ , ϕ , G ) .

For each arrangement problem considered above, we also define the maximum-cost solution of the problem on a graph. For example, for CUTWIDTH, in contrast to MINCW ( G ) , we define MAXCW ( G ) = max ϕ CW ( G , ϕ ) , and similarly for every other problem considered above. Moreover, we define the gap of a problem on a given graph G to be the ratio of the maximum-cost solution to the minimum-cost solution. For example, for CUTWIDTH, the gap is

GAPCW ( G ) = MAXCW ( G ) / MINCW ( G ) ,

and this quantity is defined in the same way for every other arrangement problem considered above.
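
For a tiny graph, the gap of a problem can be computed by brute force over all layouts. The sketch below (the helper names cutwidth and gap_cw are ours) recovers GAPCW = 2 for the 4-cycle, whose best layouts cut 2 edges at every gap while the worst layouts stack all 4 edges over the middle gap.

```python
from itertools import permutations

def cutwidth(order, edges):
    # order is a tuple of vertices; the position of a vertex is its index + 1
    pos = {v: i + 1 for i, v in enumerate(order)}
    n = len(order)
    return max(sum(1 for u, v in edges
                   if min(pos[u], pos[v]) <= i < max(pos[u], pos[v]))
               for i in range(1, n))

def gap_cw(vertices, edges):
    # ratio of the worst layout to the best layout (feasible only for tiny n)
    values = [cutwidth(p, edges) for p in permutations(vertices)]
    return max(values) / min(values)

print(gap_cw(range(4), [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 2.0
```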

Any discussion on random graphs requires a probability distribution on graphs. In this paper, we adopt a variant of the Erdös-Renyi probability distribution [

For a positive integer n and probability 0 ≤ p ≤ 1 , the Erdös-Renyi distribution G ( n , p ) on the set of n-vertex graphs assigns an n-vertex graph G = ( V , E ) with | E | = m the probability p^m ( 1 − p )^( ( n choose 2 ) − m ) . That is, we sample n-vertex graphs by including each possible edge independently with probability p.

For a probability distribution on directed acyclic graphs, we use a variant of the Erdös-Renyi probability distribution [

For a positive integer n and probability 0 ≤ p ≤ 1 , the distribution D ( n , p ) on the set of n-vertex directed acyclic graphs first samples a random graph from G ( n , p ) on the vertex set { 1, ⋯ , n } and orients each edge { i , j } from i to j if i < j .

As the edges in the sampled directed graph always point from a lower numbered vertex to a higher numbered vertex, it is clear that the sampled graph is acyclic.
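
Both sampling procedures can be written down directly. The following sketch (function names are ours) samples G ( n , p ) and obtains D ( n , p ) by the orientation rule above; note that under this rule the identity layout is already a topological sort of the sampled DAG.

```python
import random

def sample_gnp(n, p, rng):
    # include each of the C(n, 2) possible edges independently with probability p
    return [(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)
            if rng.random() < p]

def sample_dnp(n, p, rng):
    # sample from G(n, p); since every pair is generated with i < j, the pairs
    # themselves are already the edges oriented from lower to higher label
    return sample_gnp(n, p, rng)

rng = random.Random(0)
dag = sample_dnp(6, 0.5, rng)
print(all(u < v for u, v in dag))  # True: the identity layout is a topological sort
```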

We first list some technical lemmas necessary for carrying out the probabilistic analysis in our main theorems.

Lemma 1 (Hoeffding’s inequality). Suppose that X 1 , ⋯ , X n are independent identically distributed Bernoulli random variables with mean p, and let H ( n ) = X̄ 1 + X̄ 2 + ⋯ + X̄ n , where X̄ i is a sample of X i . Then for γ > 0 ,

P [ H ( n ) ≤ ( p − γ ) n ] ≤ exp ( − 2 γ^2 n )

P [ H ( n ) ≥ ( p + γ ) n ] ≤ exp ( − 2 γ^2 n ) .

Proof. This is a special case of Theorem 1 in Hoeffding’s original paper [
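
Since Hoeffding’s inequality bounds an exact binomial tail, it is easy to check numerically. The sketch below (with one arbitrary choice of parameters) compares the exact upper tail of a binomial distribution with the bound exp ( − 2 γ^2 n ) .

```python
import math

# Exact binomial upper tail P[H(n) >= k], for comparison with Hoeffding's bound.
def binom_upper_tail(n, p, k):
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

n, p, gamma = 200, 0.3, 0.1
tail = binom_upper_tail(n, p, math.ceil((p + gamma) * n))
bound = math.exp(-2 * gamma**2 * n)
print(tail <= bound)  # True: the exact tail is below the Hoeffding bound
```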

Lemma 2. If k = o ( n ) , then

log ( n choose k ) = ( 1 + o ( 1 ) ) k log ( n / k ) .

Proof. Recall the well-known inequalities

( n / k )^k ≤ ( n choose k ) ≤ ( n e / k )^k ,

which can be obtained via Stirling’s approximation. Taking logarithms, it follows that

k log ( n / k ) ≤ log ( n choose k ) ≤ k ( log ( n / k ) + 1 ) .

If k = o ( n ) , then log ( n / k ) → ∞ . By the above chain of inequalities, we have that

log ( n choose k ) = ( 1 + o ( 1 ) ) k log ( n / k ) ,

as desired.
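
The asymptotics in Lemma 2 can be observed numerically. In the sketch below (with the arbitrary choice k = ⌊ √n ⌋ , so that k = o ( n ) ), the ratio log ( n choose k ) / ( k log ( n / k ) ) drifts toward 1 as n grows.

```python
import math

def ratio(n, k):
    # compare log C(n, k) with the leading term k * log(n / k)
    return math.log(math.comb(n, k)) / (k * math.log(n / k))

for n in (10**3, 10**5):
    print(round(ratio(n, int(math.sqrt(n))), 3))  # decreases toward 1
```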

Lemma 3. Suppose that f ( n ) = Ω ( n^−c ) and g ( n ) = Ω ( n^d ) where 0 < c < d . If f ( n ) = o ( 1 ) , then

lim n → ∞ ( 1 − f ( n ) )^g ( n ) = 0 .

Proof. Taking logarithms, we find that

log ( lim n → ∞ ( 1 − f ( n ) )^g ( n ) ) = lim n → ∞ g ( n ) log ( 1 − f ( n ) ) ≤ lim n → ∞ k 1 n^d log ( 1 − k 2 n^−c )

for appropriate constants k 1 , k 2 > 0 , since g ( n ) ≥ k 1 n^d and f ( n ) ≥ k 2 n^−c for large n, while log ( 1 − f ( n ) ) < 0 . But then by L’Hopital’s rule,

lim n → ∞ k 1 n^d log ( 1 − k 2 n^−c ) = lim n → ∞ log ( 1 − k 2 n^−c ) / ( k 1^−1 n^−d ) = − lim n → ∞ ( c k 2 n^( − c − 1 ) / ( 1 − k 2 n^−c ) ) / ( d k 1^−1 n^( − d − 1 ) ) .

Since 0 < c < d , we have that

( c k 2 n^( − c − 1 ) / ( 1 − k 2 n^−c ) ) / ( d k 1^−1 n^( − d − 1 ) ) = ( c k 1 k 2 / ( d ( 1 − k 2 n^−c ) ) ) n^( d − c ) → ∞

as n → ∞ . It follows that

log ( lim n → ∞ ( 1 − f ( n ) )^g ( n ) ) = − ∞ .

Hence,

lim n → ∞ ( 1 − f ( n ) )^g ( n ) = 0

as desired.
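
Lemma 3 can likewise be illustrated numerically. The sketch below takes the arbitrary choices f ( n ) = n^( − 0.4 ) and g ( n ) = n^( 0.6 ) (so c = 0.4 < d = 0.6 ), works in logarithms to avoid underflow, and shows ( 1 − f ( n ) )^g ( n ) shrinking toward 0.

```python
import math

def h(n, c=0.4, d=0.6):
    # evaluate (1 - n^(-c))^(n^d) via its logarithm to avoid underflow
    return math.exp(n**d * math.log(1 - n**(-c)))

print([h(10**k) for k in (2, 4, 6)])  # strictly decreasing toward 0
```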

For the theorems that follow, let { G n } n = 1 ∞ be a sequence of random graphs such that for each n = 1 , 2 , ⋯ , G n is sampled from G ( n , p n ) with edge probability p n . The following theorems show that each of the quantities GAPCW, GAPEB, GAPVS, and GAPVB (the ratio of the maximum value to the minimum value of the corresponding objective function over all arrangements of G n ) is asymptotically close to 1 with high probability, subject to appropriate sparsity conditions.

Theorem 1. Let p n satisfy p n = Ω ( n^−c ) , c < 1 / 2 . Then for all ϵ > 0 , δ > 0 there exists an N such that for all n ≥ N ,

P [ GAPCW ( G n ) < 1 + δ ] > 1 − ϵ .

Theorem 2. Let p n satisfy p n = Ω ( n^−c ) , c < 1 / 2 . Then for all ϵ > 0 , δ > 0 there exists an N such that for all n ≥ N ,

P [ GAPEB ( G n ) < 1 + δ ] > 1 − ϵ .

Theorem 3. Let p n satisfy p n = Ω ( n^−c ) , c < 1 . Then for all ϵ > 0 , δ > 0 there exists an N such that for all n ≥ N ,

P [ GAPVS ( G n ) < 1 + δ ] > 1 − ϵ .

Theorem 4. Let p n satisfy p n = Ω ( n^−c ) , c < 1 . Then for all ϵ > 0 , δ > 0 there exists an N such that for all n ≥ N ,

P [ GAPVB ( G n ) < 1 + δ ] > 1 − ϵ .

To prove Theorem 1, we first establish the following lemmas:

Lemma 4. Let p n satisfy p n = Ω ( n^−c ) , c < 1 / 2 . Then for all ϵ > 0 , δ > 0 there exists an N such that for all n ≥ N ,

P [ MINCW ( G n ) ≥ n^2 p n ( 1 − δ ) / 4 ] > 1 − ϵ .

Lemma 5. Suppose that p n = Ω ( n^−c ) , c < 1 / 2 . Then for all ϵ > 0 , δ > 0 there exists an N such that for all n ≥ N ,

P [ MAXCW ( G n ) ≤ n^2 p n ( 1 + δ ) / 4 ] > 1 − ϵ .

We will make use of the following definition in our proofs for the above lemmas: For a graph G = ( V , E ) and A ⊆ V , c ( G , A ) is defined to be the number of edges joining a vertex in A and a vertex in V − A . That is,

c ( G , A ) = | { u v ∈ E | u ∈ A ∧ v ∈ V − A } | .

Proof of Lemma 4. For a linear arrangement ϕ ∈ Λ G , define

L ϕ = L ( ⌊ n / 2 ⌋ , ϕ , G ) = { u ∈ V | ϕ ( u ) ≤ ⌊ n / 2 ⌋ } .

Clearly, CW ( G , ϕ ) ≥ c ( G , L ϕ ) . It follows that MINCW ( G ) ≥ min ϕ c ( G , L ϕ ) . Suppose that | V | = n . Observe that min ϕ c ( G , L ϕ ) = min S ∈ B c ( G , S ) , where B = { S ⊆ V | | S | = ⌊ n / 2 ⌋ } . Hence, for all α ≥ 0 ,

P [ MINCW ( G n ) ≥ α ] ≥ P [ min S ∈ B c ( G n , S ) ≥ α ] .

To prove the lemma, it suffices to show that

P [ min S ∈ B c ( G n , S ) ≥ n^2 p n ( 1 − δ ) / 4 ] > 1 − ϵ

for n sufficiently large.

Let S be an arbitrary subset of V with | S | = ⌊ n / 2 ⌋ . As G n is an Erdös-Renyi random graph with vertex set V and edge probability p n , c ( G n , S ) is a binomial random variable with mean μ = ⌊ n / 2 ⌋ ⌈ n / 2 ⌉ p n ≥ ( n^2 − 1 ) p n / 4 .

Applying the first inequality in Lemma 1 with γ = γ n where γ n > 0 , we obtain that

P [ c ( G n , S ) ≤ μ ( 1 − γ n / p n ) ] ≤ exp ( − ( n^2 − 1 ) γ n^2 / 2 ) .

As p n = Ω ( n^−c ) with c < 1 / 2 , we can choose γ n to get the desired convergence. Indeed, setting γ n = n^−l where l satisfies c < l < 1 / 2 , we have γ n = o ( p n ) and γ n^2 = Ω ( n^( − 1 + s ) ) for some s > 0 . Hence,

P [ c ( G n , S ) ≤ μ ( 1 − γ n / p n ) ] ≤ exp ( − ( n^2 − 1 ) γ n^2 / 2 ) = exp ( − ( n^2 − 1 ) Ω ( n^( − 1 + s ) ) ) = exp ( − Ω ( n^( 1 + s ) ) ) .

Note that ( n choose ⌊ n / 2 ⌋ ) ≤ 2^n . Hence, by the union bound,

P [ ⋁ S ∈ B ( c ( G n , S ) ≤ μ ( 1 − γ n / p n ) ) ] ≤ ∑ S ∈ B P [ c ( G n , S ) ≤ μ ( 1 − γ n / p n ) ] ≤ ∑ S ∈ B exp ( − Ω ( n^( 1 + s ) ) ) = ( n choose ⌊ n / 2 ⌋ ) exp ( − Ω ( n^( 1 + s ) ) ) ≤ 2^n exp ( − Ω ( n^( 1 + s ) ) ) < ϵ

for n sufficiently large. Thus,

P [ min S ∈ B c ( G n , S ) ≥ μ ( 1 − γ n / p n ) ] ≥ 1 − P [ ⋁ S ∈ B ( c ( G n , S ) ≤ μ ( 1 − γ n / p n ) ) ] > 1 − ϵ

for n sufficiently large.

Since γ n / p n = O ( n^( c − l ) ) = o ( 1 ) , for sufficiently large n,

μ ( 1 − γ n / p n ) ≥ ( ( n^2 − 1 ) p n / 4 ) ( 1 − γ n / p n ) ≥ n^2 p n ( 1 − δ ) / 4 .

Hence, for n sufficiently large,

P [ min S ∈ B c ( G n , S ) ≥ n^2 p n ( 1 − δ ) / 4 ] > 1 − ϵ

as desired.

Proof of Lemma 5. Let ϵ , δ > 0 . Observe that for all α ≥ 0 ,

P [ MAXCW ( G n ) ≤ α ] = P [ max S ⊆ V , | S | ≤ ⌊ n / 2 ⌋ c ( G n , S ) ≤ α ] ,

and that

P [ c ( G n , S ) ≤ α ] ≥ P [ c ( G n , T ) ≤ α ]

for all S , T ⊆ V such that | S | ≤ ⌊ n / 2 ⌋ and | T | = ⌊ n / 2 ⌋ . To see the second inequality, note that c ( G n , S ) is the sum of | S | ( n − | S | ) i.i.d. Bernoulli random variables with probability of success p n and | S | ( n − | S | ) is maximized at | S | = ⌊ n / 2 ⌋ . Hence, to prove the lemma, it suffices to show that

P [ max T ⊆ V , | T | = ⌊ n / 2 ⌋ c ( G n , T ) ≤ n^2 p n ( 1 + δ ) / 4 ] > 1 − ϵ

for n sufficiently large.

Let B denote the set { S ⊆ V | | S | = ⌊ n / 2 ⌋ } . Let T ∈ B . Then, by the second inequality in Lemma 1 with γ = γ n where γ n > 0 , we obtain that

P [ c ( G n , T ) ≥ μ ( 1 + γ n / p n ) ] ≤ exp ( − ( n^2 − 1 ) γ n^2 / 2 )

where μ = ⌊ n / 2 ⌋ ⌈ n / 2 ⌉ p n as in the proof of Lemma 4.

As p n = Ω ( n^−c ) with c < 1 / 2 , we again set γ n = n^−l where l satisfies c < l < 1 / 2 , so that γ n = o ( p n ) and γ n^2 = Ω ( n^( − 1 + s ) ) for some s > 0 . Then,

P [ c ( G n , T ) ≥ μ ( 1 + γ n / p n ) ] ≤ exp ( − Ω ( n^( 1 + s ) ) ) .

Hence,

∑ T ∈ B P [ c ( G n , T ) ≥ μ ( 1 + γ n / p n ) ] ≤ 2^n exp ( − Ω ( n^( 1 + s ) ) ) < ϵ

for a sufficiently large n.

As γ n / p n = O ( n^( c − l ) ) = o ( 1 ) , we have that

n^2 p n ( 1 + γ n / p n ) / 4 ≤ n^2 p n ( 1 + δ ) / 4

for sufficiently large n. Thus, for n sufficiently large,

P [ max T ∈ B c ( G n , T ) ≤ n^2 p n ( 1 + δ ) / 4 ] ≥ 1 − ∑ T ∈ B P [ c ( G n , T ) ≥ n^2 p n ( 1 + γ n / p n ) / 4 ] > 1 − ϵ

as desired.
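
The concentration underlying Lemmas 4 and 5 is visible even at small scale. The following sketch (with the arbitrary parameters n = 16 and p = 1 / 2 ) enumerates all balanced cuts of one sample from G ( n , p ) and compares their range with n^2 p / 4 .

```python
import random
from itertools import combinations

def balanced_cut_range(n, p, rng):
    # sample G(n, p), then enumerate c(G, S) over all S of size n // 2
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if rng.random() < p]
    cuts = []
    for S in combinations(range(n), n // 2):
        S = set(S)
        cuts.append(sum(1 for u, v in edges if (u in S) != (v in S)))
    return min(cuts), max(cuts)

rng = random.Random(1)
lo, hi = balanced_cut_range(16, 0.5, rng)
print(lo, hi, 16 * 16 * 0.5 / 4)  # compare the extremes with n^2 p / 4 = 32
```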

With these two lemmas, the main theorem for the cutwidth gap can be readily established.

Proof of Theorem 1. As in the statement of the theorem, let p n satisfy p n = Ω ( n^−c ) for some c < 1 / 2 , and let δ , ϵ > 0 be given. Since

lim x → 0 ( 1 + x ) / ( 1 − x ) = 1 ,

there exists δ ′ > 0 satisfying ( 1 + δ ′ ) / ( 1 − δ ′ ) < 1 + δ . By Lemma 4, there exists N 1 such that for n ≥ N 1 ,

P [ MINCW ( G n ) ≥ n^2 p n ( 1 − δ ′ ) / 4 ] > 1 − ϵ / 2 .

By Lemma 5, there exists N 2 such that for n ≥ N 2 ,

P [ MAXCW ( G n ) ≤ n^2 p n ( 1 + δ ′ ) / 4 ] > 1 − ϵ / 2 .

Hence, if N = max { N 1 , N 2 } , then for n ≥ N we have that

1 − ϵ = 1 − 2 ( ϵ / 2 ) ≤ P [ ( MINCW ( G n ) ≥ n^2 p n ( 1 − δ ′ ) / 4 ) ∧ ( MAXCW ( G n ) ≤ n^2 p n ( 1 + δ ′ ) / 4 ) ] .

As ( 0 < a ≤ u ) ∧ ( b ≥ l > 0 ) ⇒ a / b ≤ u / l , it follows that

P [ MAXCW ( G n ) / MINCW ( G n ) ≤ ( 1 + δ ′ ) / ( 1 − δ ′ ) ] > 1 − ϵ .

Thus, P [ GAPCW ( G n ) < 1 + δ ] > 1 − ϵ as desired.

Since edge bisection is essentially a restricted version of the cutwidth problem, it is straightforward to carry over the proofs above to prove Theorem 2.

Proof of Theorem 2. Note that for a graph G, MINEB ( G ) is simply the minimization of c ( G , S ) over subsets S ⊂ V with | S | = ⌊ n / 2 ⌋ , and MAXEB ( G ) the corresponding maximization. Hence, the proof of Lemma 4 carries through to give that

P [ MINEB ( G n ) ≥ n^2 p n ( 1 − δ ) / 4 ] > 1 − ϵ

for any given ϵ , δ > 0 when n is sufficiently large.

Similarly, the proof of Lemma 5 carries through to yield that

P [ MAXEB ( G n ) ≤ n^2 p n ( 1 + δ ) / 4 ] > 1 − ϵ

for any given ϵ , δ > 0 when n is sufficiently large. Combining these two results in the same manner as in the proof of Theorem 1 gives the desired result.

The following lemma essentially proves Theorem 3.

Lemma 6. Let p n satisfy p n = Ω ( n^−c ) , c < 1 . Then for all ϵ > 0 , δ > 0 there exists an N such that for all n ≥ N ,

P [ MINVS ( G n ) ≥ ( 1 − δ ) n − 1 ] > 1 − ϵ .

We will make use of the following definition in our proof for the above lemma. For a graph G = ( V , E ) and A ⊆ V , v ( G , A ) is defined to be the number of vertices in A which are connected to a vertex in V − A . That is,

v ( G , A ) = | { u ∈ A | ∃ v ∈ V − A ∧ u v ∈ E } | .

Proof of Lemma 6. So as to obtain the desired convergence, for each positive integer n, set

δ n = n^−l , ϵ n = n^−s ,

where 0 < s < 1 − c and 0 < l < s / 2 . Consider a linear arrangement ϕ ∈ Λ G n . Note that for any k, VS ( G n , ϕ ) ≥ v ( G n , L ( k , ϕ , G n ) ) . In particular, if we define

S ϕ , ϵ n = L ( ⌊ ( 1 − ϵ n ) n ⌋ , ϕ , G n ) ,

then VS ( G n , ϕ ) ≥ v ( G n , S ϕ , ϵ n ) . It follows that MINVS ( G n ) ≥ min ϕ v ( G n , S ϕ , ϵ n ) . Observe that

min ϕ v ( G n , S ϕ , ϵ n ) = min S ∈ B v ( G n , S )

where B = { S ⊆ V | | S | = ⌊ ( 1 − ϵ n ) n ⌋ } . Hence, for all α ≥ 0 ,

P [ MINVS ( G n ) ≥ α ] ≥ P [ min S ∈ B v ( G n , S ) ≥ α ] .

To prove the lemma, it suffices to show that for any δ , ϵ > 0 ,

P [ min S ∈ B v ( G n , S ) ≥ ( 1 − δ ) n − 1 ] > 1 − ϵ

for n sufficiently large.

Let S be an arbitrary element of B . As there are ⌈ ϵ n n ⌉ vertices in V − S , the probability of a given vertex v ∈ S not being connected to any vertex in V − S is ( 1 − p n )^⌈ ϵ n n ⌉ . Hence, v ( G n , S ) is a binomial random variable on m = ⌊ ( 1 − ϵ n ) n ⌋ trials with event probability q = 1 − ( 1 − p n )^⌈ ϵ n n ⌉ . By Lemma 1, we have that

P [ v ( G n , S ) ≤ ( q − δ n ) m ] ≤ exp ( − 2 δ n^2 m ) .

As | B | = ( n choose ⌈ ϵ n n ⌉ ) ,

P [ ⋁ S ∈ B ( v ( G n , S ) ≤ ( q − δ n ) m ) ] ≤ ∑ S ∈ B P [ v ( G n , S ) ≤ ( q − δ n ) m ] = ( n choose ⌈ ϵ n n ⌉ ) P [ v ( G n , S ) ≤ ( q − δ n ) m ] ≤ ( n choose ⌈ ϵ n n ⌉ ) exp ( − 2 δ n^2 m ) .

The first and third lines of the above computation follow respectively from the union bound and the preceding inequality. The second line follows from the fact that if S , S ′ are any two elements of B , then the distributions of the random variables v ( G n , S ) , v ( G n , S ′ ) with G n sampled from G ( n , p n ) are isomorphic via any permutation of the vertex set { 1,2, ⋯ , n } which carries S to S ′ . In particular

P [ v ( G n , S ) ≤ ( q − δ n ) m ] = P [ v ( G n , S ′ ) ≤ ( q − δ n ) m ] ,

so that the second line follows from the fact that | B | = ( n choose ⌈ ϵ n n ⌉ ) .

By Lemma 2, we find that

log ( n choose ⌈ ϵ n n ⌉ ) = ( 1 + o ( 1 ) ) ⌈ ϵ n n ⌉ log ( 1 / ϵ n )

since ⌈ ϵ n n ⌉ = ⌈ n^( 1 − s ) ⌉ = o ( n ) , so that

log P [ ⋁ S ∈ B ( v ( G n , S ) ≤ ( q − δ n ) m ) ] ≤ log ( ( n choose ⌈ ϵ n n ⌉ ) exp ( − 2 δ n^2 m ) ) = log ( n choose ⌈ ϵ n n ⌉ ) − 2 δ n^2 m = ( 1 + o ( 1 ) ) ⌈ ϵ n n ⌉ log ( 1 / ϵ n ) − 2 δ n^2 m .

Substituting in our definitions δ n = n^−l and ϵ n = n^−s , and using ⌊ ( 1 − n^−s ) n ⌋ = n ( 1 − o ( 1 ) ) , we find that

log P [ ⋁ S ∈ B ( v ( G n , S ) ≤ ( q − δ n ) m ) ] ≤ ( 1 + o ( 1 ) ) ⌈ n^( 1 − s ) ⌉ log ( n^s ) − 2 n^( − 2 l ) ⌊ ( 1 − n^−s ) n ⌋ = ( s + o ( 1 ) ) ⌈ n^( 1 − s ) ⌉ log ( n ) − 2 n^( 1 − 2 l ) ( 1 − n^−s ) ( 1 − o ( 1 ) ) .

This expression tends to negative infinity as n → ∞ since 1 − 2 l > 1 − s by the condition that l < s / 2 , implying that

P [ ⋁ S ∈ B ( v ( G n , S ) ≤ ( q − δ n ) m ) ]

tends to 0 from above. Thus, for n sufficiently large, we have that

P [ ⋁ S ∈ B ( v ( G n , S ) ≤ ( q − δ n ) m ) ] < ϵ ,

and hence

P [ min S ∈ B v ( G n , S ) ≥ ( q − δ n ) m ] > 1 − ϵ .

Moreover, if we substitute in our definitions q = 1 − ( 1 − p n )^⌈ ϵ n n ⌉ , δ n = n^−l , m = ⌊ ( 1 − ϵ n ) n ⌋ , we find that

( q − δ n ) m = ( 1 − ( 1 − p n )^⌈ n^( 1 − s ) ⌉ − n^−l ) ⌊ ( 1 − n^−s ) n ⌋ .

Since by definition s < 1 − c , where p n = Ω ( n^−c ) , we have that 1 − s > c , and hence by Lemma 3, we have that ( 1 − p n )^⌈ n^( 1 − s ) ⌉ → 0 as n → ∞ . Thus,

( q − δ n ) m = ( 1 − ( 1 − p n )^⌈ n^( 1 − s ) ⌉ − n^−l ) ⌊ ( 1 − n^−s ) n ⌋ = ( 1 − o ( 1 ) ) ⌊ ( 1 − n^−s ) n ⌋ = ( 1 − o ( 1 ) ) n − 1 ,

so that for any δ > 0 ,

( q − δ n ) m ≥ ( 1 − δ ) n − 1

for n sufficiently large. Hence, for any δ , ϵ > 0 , we find that for n sufficiently large,

P [ MINVS ( G n ) ≥ ( 1 − δ ) n − 1 ] ≥ P [ MINVS ( G n ) ≥ ( q − δ n ) m ] ≥ P [ min S ∈ B v ( G n , S ) ≥ ( q − δ n ) m ] > 1 − ϵ ,

as desired.

Proof of Theorem 3. Observe that MAXVS ( G n ) ≤ n − 1 . Let δ , ϵ > 0 be given. Since ( n − 1 ) / ( ( 1 − x ) n − 1 ) → 1 / ( 1 − x ) as n → ∞ , and 1 / ( 1 − x ) → 1 from above as x → 0 , we may choose δ ′ > 0 so that for large n, ( n − 1 ) / ( ( 1 − δ ′ ) n − 1 ) < 1 + δ . By Lemma 6, there exists a natural number N such that for n ≥ N ,

P [ MINVS ( G n ) ≥ ( 1 − δ ′ ) n − 1 ] > 1 − ϵ .

Thus, with probability greater than 1 − ϵ ,

MAXVS ( G n ) / MINVS ( G n ) ≤ ( n − 1 ) / ( ( 1 − δ ′ ) n − 1 ) < 1 + δ ,

so that

P [ GAPVS ( G n ) < 1 + δ ] > 1 − ϵ

as desired.
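
Theorem 3 can be illustrated by brute force on one small random sample (the parameters n = 7 and p = 0.8 below are arbitrary choices, and the helper name vertex_separation is ours). Since MAXVS ≤ n − 1 always, the computation only needs to exhibit MINVS close to n − 1.

```python
import random
from itertools import permutations

def vertex_separation(order, edges):
    pos = {v: i + 1 for i, v in enumerate(order)}
    n = len(order)
    def delta(i):
        # vertices left of gap i with a neighbour strictly to the right
        return len({u if pos[u] <= i else v for u, v in edges
                    if min(pos[u], pos[v]) <= i < max(pos[u], pos[v])})
    return max(delta(i) for i in range(1, n))

rng = random.Random(2)
n, p = 7, 0.8
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if rng.random() < p]
minvs = min(vertex_separation(perm, edges) for perm in permutations(range(n)))
print(minvs, n - 1)  # MINVS against the trivial upper bound n - 1
```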

Even though our proof of Theorem 4 does not follow quite as readily from the proof of Theorem 3 as in the edge bisection/cutwidth case above, it does not require any new techniques.

Proof of Theorem 4. Let δ , ϵ > 0 be given. Since 1 / ( 1 − x ) → 1 from above as x → 0 , choose δ ′ > 0 so that 1 / ( 1 − δ ′ ) < 1 + δ . As MAXVB ( G n ) ≤ ⌊ n / 2 ⌋ trivially, we focus on the lower bound for MINVB ( G n ) . Set δ n = n^−s such that 1 − s > c , where p n = Ω ( n^−c ) . Since δ n → 0 , we have that

P [ MINVB ( G n ) ≤ ⌊ n / 2 ⌋ ( 1 − δ ′ ) ] ≤ P [ MINVB ( G n ) ≤ ⌊ n / 2 ⌋ ( 1 − δ n ) ]

for all n sufficiently large. We will show that

P [ MINVB ( G n ) ≤ ⌊ n / 2 ⌋ ( 1 − δ n ) ] → 0 ,

which will prove the theorem.

For all v ∈ V , we denote the set of neighbors of v by N ( v ) ; that is, N ( v ) = { u ∈ V | u v ∈ E } . If ϕ ∈ Λ G n is a linear arrangement such that VB ( G n , ϕ ) ≤ ⌊ n / 2 ⌋ ( 1 − δ n ) , then there must exist a subset S ⊂ V with | S | ≥ δ n ⌊ n / 2 ⌋ such that

| ∩ v ∈ S N ( v )^c | ≥ ⌊ n / 2 ⌋ ,

where N ( v )^c is the complement of N ( v ) in V. Indeed, if VB ( G n , ϕ ) ≤ ⌊ n / 2 ⌋ ( 1 − δ n ) , then by definition at least δ n ⌊ n / 2 ⌋ vertices in the left half of the arrangement ϕ are not connected to any of the vertices in the right half, and the right half contains at least ⌊ n / 2 ⌋ vertices.

We estimate the probability that such a set S can exist. Note that it suffices to bound the probability that such an S with | S | = ⌈ δ n ⌊ n / 2 ⌋ ⌉ can exist, since any such S with | S | ≥ δ n ⌊ n / 2 ⌋ contains a subset S ′ ⊂ S with | S ′ | = ⌈ δ n ⌊ n / 2 ⌋ ⌉ satisfying | ∩ v ∈ S ′ N ( v )^c | ≥ ⌊ n / 2 ⌋ . Let S ⊂ V of size ⌈ δ n ⌊ n / 2 ⌋ ⌉ be fixed. Each vertex u ∈ S^c lies in N ( v )^c with probability 1 − p n independently for each v ∈ S , so that for each u ∈ S^c ,

P [ u ∈ ∩ v ∈ S N ( v )^c ] = ( 1 − p n )^| S | = ( 1 − p n )^⌈ δ n ⌊ n / 2 ⌋ ⌉ = ( 1 − p n )^( n δ n ( 1 + o ( 1 ) ) / 2 ) .

Let X n be the random variable over the set of random graphs on n vertices with edge probability p n whose value is the cardinality of ∩ v ∈ S N ( v )^c . (Recall that S has been fixed.) Applying the above reasoning to each vertex u ∈ S^c , we see that X n is a binomial random variable on N = | S^c | = n ( 1 − δ n / 2 ) ( 1 + o ( 1 ) ) trials with event probability q = ( 1 − p n )^( n δ n ( 1 + o ( 1 ) ) / 2 ) . Since 1 − s > c by construction, we have that

q = ( 1 − p n )^( n δ n ( 1 + o ( 1 ) ) / 2 ) = ( 1 − p n )^( n^( 1 − s ) ( 1 + o ( 1 ) ) / 2 ) → 0

as n → ∞ by Lemma 3. Letting N , q be as before, set

γ = n / ( 2 N ) − q = ( 1 / ( 2 ( 1 − δ n / 2 ) ) − ( 1 − p n )^( n δ n / 2 ) ) ( 1 + o ( 1 ) ) ,

so that ( q + γ ) N = n / 2 and γ = ( 1 / 2 ) ( 1 + o ( 1 ) ) . Using Hoeffding’s inequality in Lemma 1 with q , N , γ , we obtain that

P [ X n ≥ n / 2 ] = P [ X n ≥ ( q + γ ) N ] ≤ exp ( − 2 γ^2 N ) = exp ( − 2 ( 1 / 2 )^2 N ( 1 + o ( 1 ) ) ) = exp ( − ( n / 2 ) ( 1 − δ n / 2 ) ( 1 + o ( 1 ) ) ) = exp ( − ( n / 2 ) ( 1 + o ( 1 ) ) ) .

By the union bound, the probability that there exists any such S among the subsets of V of size k = ⌈ δ n ⌊ n / 2 ⌋ ⌉ is at most

( n choose k ) exp ( − ( n / 2 ) ( 1 + o ( 1 ) ) ) .

Since k = n^( 1 − s ) ( 1 + o ( 1 ) ) / 2 , by Lemma 2, we have the estimate

log ( n choose k ) = ( 1 + o ( 1 ) ) k log ( n / k ) = ( 1 + o ( 1 ) ) ( n^( 1 − s ) / 2 ) log ( 2 n^s ) = Θ ( n^( 1 − s ) log n ) ,

so that ( n choose k ) = exp ( Θ ( n^( 1 − s ) log n ) ) . Thus, the probability that there exists any such S is at most

( n choose k ) exp ( − ( n / 2 ) ( 1 + o ( 1 ) ) ) = exp ( Θ ( n^( 1 − s ) log n ) − ( n / 2 ) ( 1 + o ( 1 ) ) ) → 0

as n → ∞ , since the − n / 2 term dominates the Θ ( n^( 1 − s ) log n ) term.

We conclude that

P [ MINVB ( G n ) ≤ ⌊ n / 2 ⌋ ( 1 − δ ′ ) ] ≤ P [ MINVB ( G n ) ≤ ⌊ n / 2 ⌋ ( 1 − δ n ) ] < ϵ

for any ϵ > 0 , provided n is sufficiently large. As P [ MAXVB ( G n ) ≤ ⌊ n / 2 ⌋ ] = 1 and we chose δ ′ to satisfy 1 / ( 1 − δ ′ ) < 1 + δ , we conclude that

P [ GAPVB ( G n ) < 1 + δ ] ≥ P [ ( MAXVB ( G n ) ≤ ⌊ n / 2 ⌋ ) ∧ ( MINVB ( G n ) ≥ ⌊ n / 2 ⌋ ( 1 − δ ′ ) ) ] = P [ MINVB ( G n ) ≥ ⌊ n / 2 ⌋ ( 1 − δ ′ ) ] ≥ 1 − ϵ

for large enough n, as desired.

We now show that analogues of the above results for undirected graphs hold for directed acyclic graphs. Fortunately, no new techniques are required and the desired results for directed graphs follow immediately from our work on undirected graphs.

For the theorems that follow, let { G n } n = 1 ∞ be a sequence of random directed acyclic graphs such that for each n = 1 , 2 , ⋯ , G n is independently sampled from the distribution D ( n , p n ) with edge probability p n . Similar to before, the following theorems show that each of the quantities GAPDCW, GAPDEB, GAPDVS, and GAPDVB (the ratio of the maximum value to the minimum value of the corresponding objective function over all arrangements of G n ) is asymptotically close to 1 with high probability, subject to appropriate sparsity conditions.

Theorem 5. Let p n satisfy p n = Ω ( n^−c ) , c < 1 / 2 . Then for all ϵ > 0 , δ > 0 there exists an N such that for all n ≥ N ,

P [ GAPDCW ( G n ) < 1 + δ ] > 1 − ϵ .

Theorem 6. Let p n satisfy p n = Ω ( n^−c ) , c < 1 / 2 . Then for all ϵ > 0 , δ > 0 there exists an N such that for all n ≥ N ,

P [ GAPDEB ( G n ) < 1 + δ ] > 1 − ϵ .

Theorem 7. Let p n satisfy p n = Ω ( n^−c ) , c < 1 . Then for all ϵ > 0 , δ > 0 there exists an N such that for all n ≥ N ,

P [ GAPDVS ( G n ) < 1 + δ ] > 1 − ϵ .

Theorem 8. Let p n satisfy p n = Ω ( n^−c ) , c < 1 . Then for all ϵ > 0 , δ > 0 there exists an N such that for all n ≥ N ,

P [ GAPDVB ( G n ) < 1 + δ ] > 1 − ϵ .

We illustrate the proof for Theorem 5; the proofs of Theorems 6, 7, 8 will follow verbatim.

Proof of Theorem 5. For G n a directed acyclic graph sampled from D ( n , p n ) , let

MAXCW ( G n ) , MINCW ( G n ) , GAPCW ( G n )

denote the quantities MAXCW, MINCW, GAPCW for the undirected graph corresponding to G n ; that is, G n with every directed edge replaced by an undirected edge. Recall that MAXCW ( G n ) is the maximum of CW ( G n , ϕ ) over all permutations ϕ of the vertex set V of G n , whereas MAXDCW ( G n ) is the maximum of CW ( G n , ϕ ) over all topological sorts ϕ of G n . Since the set of topological sorts is a subset of the set of permutations, we conclude that

MAXDCW ( G n ) ≤ MAXCW ( G n ) .

Similarly, we deduce that

MINDCW ( G n ) ≥ MINCW ( G n ) ,

so that

GAPDCW ( G n ) = MAXDCW ( G n ) / MINDCW ( G n ) ≤ MAXCW ( G n ) / MINCW ( G n ) = GAPCW ( G n ) .

Note that this holds for all directed acyclic graphs G n drawn from D ( n , p n ) .

In order to derive a gap convergence statement for GAPDCW from the convergence result for GAPCW, we need to relate the distributions D ( n , p n ) and G ( n , p n ) . Let Ψ be the map from the underlying set of G ( n , p n ) to the underlying set of D ( n , p n ) which takes an undirected graph G n on the vertex set { 1,2, ⋯ , n } to the corresponding directed graph with the oriented edge ( i , j ) whenever i < j and { i , j } is an edge of G n . By the definition of D ( n , p n ) , Ψ is a bijection between the underlying sets of G ( n , p n ) and D ( n , p n ) , with inverse given by replacing each directed edge with the corresponding undirected edge. Moreover, by the definition of D ( n , p n ) , Ψ preserves probabilities between G ( n , p n ) and D ( n , p n ) , so that Ψ is an isomorphism between the probability distributions G ( n , p n ) and D ( n , p n ) .

Thus, let p n , ϵ , δ be as in the statement of Theorem 5. By Theorem 1, there exists an N such that for all n ≥ N ,

P [ GAPCW ( G n ) < 1 + δ ] > 1 − ϵ ,

where G n is sampled from G ( n , p n ) . By the isomorphism between D ( n , p n ) and G ( n , p n ) , we have that the same statement holds if G n is instead sampled from D ( n , p n ) and GAPCW ( G n ) is defined as previously by taking the underlying undirected graph. Since

GAPDCW ( G n ) ≤ GAPCW ( G n )

for all G n sampled from D ( n , p n ) , we conclude that

P [ GAPDCW ( G n ) < 1 + δ ] ≥ P [ GAPCW ( G n ) < 1 + δ ] > 1 − ϵ

for all n ≥ N , where G n is sampled from D ( n , p n ) . This proves Theorem 5.
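
The inclusion at the heart of this proof is easy to verify by brute force on a small example. The sketch below (with a hypothetical "diamond" DAG and the helper name cw) checks that MINDCW ≥ MINCW and MAXDCW ≤ MAXCW on the underlying undirected graph.

```python
from itertools import permutations

def cw(order, edges):
    # cutwidth objective of a layout, ignoring edge directions
    pos = {v: i + 1 for i, v in enumerate(order)}
    n = len(order)
    return max(sum(1 for u, v in edges
                   if min(pos[u], pos[v]) <= i < max(pos[u], pos[v]))
               for i in range(1, n))

dag_edges = [(0, 1), (0, 2), (1, 3), (2, 3)]  # a small "diamond" DAG
perms = list(permutations(range(4)))
# topological sorts are exactly the permutations respecting every edge
topo = [p for p in perms if all(p.index(u) < p.index(v) for u, v in dag_edges)]
all_cw = [cw(p, dag_edges) for p in perms]
topo_cw = [cw(p, dag_edges) for p in topo]
print(min(topo_cw) >= min(all_cw), max(topo_cw) <= max(all_cw))  # True True
```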

The same observation that the set of topological sorts of a directed graph is a subset of the set of permutations of the vertices implies that the quantities GAPDEB, GAPDVS, GAPDVB are bounded above by the corresponding undirected quantities as in the above proof. Thus, Theorems 6, 7, 8 follow precisely from the corresponding undirected versions, Theorems 2, 3, 4, as desired.

In this paper, we have shown that many graph layout problems of interest can be approximated to within a factor arbitrarily close to 1 of the optimal with high probability for large random graphs under appropriate sparsity conditions. We note that there is still room for improvement in our results. The previous factor of 2 approximations in [

Question: Can the factor of 1 approximation results for MINCW and MINEB be proven for random graphs with p n = Ω ( n^−1 ) , or more generally p n = Ω ( f ( n ) ) for f ( n ) = o ( n^( − 1 / 2 ) ) ? If not, can one determine the barrier between p n = Ω ( n^( − 1 / 2 ) ) and p n = Ω ( n^−1 ) where the factor of 1 approximation no longer holds and must be replaced by a factor of 2?

Both outcomes to the above question would be interesting, but we would find it more surprising if there were such a “sparsity barrier” between factor of 1 and factor of 2 approximations. Moreover, the results in [

Question: Can the factor of 1 approximation results for MINCW, MINVS, MINEB, MINVB be proven with δ = O ( n^−1 ) , where δ is as above, or at least for a δ which does not depend on the asymptotics of p n ?

Our asymptotics for ϵ as above do technically depend on p n , but regardless of p n our asymptotics for ϵ are still competitive with the ϵ = 2^( − Ω ( n ) ) bound in [

The authors would like to thank Emmanuel Ruiz and Ashkan Moatamed for conversations on research involving graph layout problems. Additionally, the research in this paper was made possible by the support of the Fields Institute through its 2017 Fields Undergraduate Summer Research Program.

The authors declare no conflicts of interest regarding the publication of this paper.

Cheung, K.K.H. and Girardet, P. (2020) Improved Approximation of Layout Problems on Random Graphs. Open Journal of Discrete Mathematics, 10, 13-30. https://doi.org/10.4236/ojdm.2020.101003