On the stability of stochastic jump kinetics

Motivated by the lack of a suitable constructive framework for analyzing popular stochastic models of Systems Biology, we devise conditions for existence and uniqueness of solutions to certain jump stochastic differential equations (SDEs). Working from simple examples we find reasonable and explicit assumptions on the driving coefficients for the SDE representation to make sense. By `reasonable' we mean that stronger assumptions generally do not hold for systems of practical interest. In particular, we argue against the traditional use of global Lipschitz conditions and certain common growth restrictions. By `explicit', finally, we wish to highlight the fact that the various constants occurring among our assumptions all can be determined once the model is fixed. We show how basic long time estimates and some limit results for perturbations can be derived in this setting, so that these can be contrasted with the corresponding estimates from deterministic dynamics. The main complication is that the natural path-wise representation is generated by a counting measure with an intensity that depends nonlinearly on the state.


1. Introduction
The observation that detailed modeling of biochemical processes inside living cells is a close to hopeless task is a strong argument in favor of stochastic models. Such models are often thought to be more accurate than conventional rate-diffusion laws, yet remain more manageable than, say, descriptions formed at the level of individual molecules. Indeed, several studies [19,27,28] have shown that noisy models have the ability to capture relevant phenomena and to explain actual, observed dynamics.
In this work we shall consider some 'flow' properties of a stochastic dynamical system in the form of a quite general continuous-time Markov chain. Since the pioneering work of Gillespie [12,13], this type of model has traditionally been described in the Systems Biology context in terms of a (chemical) master equation (CME). This is the forward Kolmogorov equation of a certain jump stochastic differential equation (jump SDE for brevity), driven by independent point processes with state-dependent intensities. Despite the popularity of the master equation approach, little analysis of actual models on a per-trajectory basis has been attempted.
In the general literature, when discussing existence/uniqueness and various types of perturbation results, different choices of assumptions with different trade-offs have been made. One finds that the treatment often falls into one of two categories, taking either a "mathematical" or a "physical" viewpoint. Either the conditions are highly general, but with correspondingly less transparent proofs and more abstract resulting bounds; or the conditions are formed out of convenience, say, involving global Lipschitz constants, so that classical arguments carry through with only minor modifications.
Protter [26, Chap. V] offers a nice discussion from the mathematical point of view and in ascending order of generality, including the arguably highly unrestrictive assumption of locally Lipschitz continuous coefficients. Other authors [2, Chap. 6], [29] also treat the evolution of general jump-diffusion SDEs in continuous state spaces.
A study of the flow properties of jump SDEs is found in [23], where the setting is scalar and the state continuous. In [14] jump stochastic partial differential equations are treated, and existence/uniqueness results as well as ergodic results for the case of a multiplicative noise, are found in [22,21]. Numerical aspects in a similar setting are discussed in [11].
In a more applied context, stability is often thought of as implied from physical premises and the solution is tacitly assumed to be confined inside some bounded region [17, Chap. V]. The fundamental issue here is that for open systems in a stochastic setting, there is a non-zero probability of reaching any finite state, and global assumptions must therefore be formed with great care. The analysis of open networks under an a priori assumption of boundedness is therefore quite difficult to interpret other than in a qualitative sense. Notable examples in this setting include time discretization strategies [15,20], time-parallel simulation techniques [7], and parameter perturbations [1, manuscript].
Evidently, essentially no systems of interest satisfy global Lipschitz assumptions since the fundamental interaction almost always takes the form of a quadratic term. Interestingly, for ordinary differential equations it has been shown [18] that Lipschitz continuous coefficients make computing the solution polynomial-space complete, thus providing a kind of explanation for the convenience of this weak feedback assumption. It is also known [16] that with SDEs, superlinearly growing coefficients may in fact cause the forward Euler method to diverge.
1.1. Agenda. The purpose of this paper is to devise simple conditions that imply stability for finite and, in certain cases, infinite times, and that, when applied to systems of practical interest, yield explicit expressions for the associated stability estimates. As a result the framework developed herein applies in a constructive way to any chemical network, of arbitrary size and topology, formed by any combination of the elementary reactions (2.4)-(2.7) to be presented in Section 2. Additionally, it will be clear how to encompass also other types of nonlinear reactions that typically result from adiabatic simplifications.
As an argument in favor of this bottom-up approach one can note that, for evolutionary reasons, biochemical systems tend to operate close to critical points in phase-space where the efficiency is the highest. As we shall demonstrate in an explicit example to be worked out in Section 4.5, for such dynamical systems, an analysis by analogy might be highly misleading.
We would also like to argue that our results are of interest from the modeling point of view. Due to the type of phenomenological arguments often involved, judging the relative effect of the (non-probabilistic) epistemic uncertainty is a fundamental issue which has so far not received a consistent analysis.

1.2. Outline. Section 2 is devoted to formulating the type of processes we are interested in. We state the master equation as well as the corresponding jump SDE and we also look at some simple, yet informative actual examples. Since it is expected that the flow properties of the stochastic dynamics are somehow similar to those of the deterministic version, we search for a set of minimal assumptions in the latter setting in Section 3. Techniques for finding explicit values of the constants occurring among our assumptions are also devised. The main results of the paper are found in Section 4 where we put our theory together and prove existence and uniqueness results, as well as some basic perturbation estimates. The findings are finally applied to a qualitative analysis of a nonlinear model which displays an unusual sensitivity to parameter perturbations. A concluding discussion is found in Section 5.

2. Stochastic jump kinetics
In this section we start with the physicist's traditional viewpoint of pure jump processes and write down the governing master equations. These are evolution equations for the probability densities of continuous-time Markov chains over a discrete state space. Although the application considered here is mesoscopic chemical kinetics, identical or very similar stochastic models are also used in Epidemiology [5], Genetics [10] and Sociodynamics [8], to name just a few fields.
We then proceed with discussing a path-wise representation in terms of a stochastic jump differential equation. The sample path representation is interesting because it makes it possible to reason about flow properties and thus to compare functionals of single trajectories. This is generally not possible with the master equation approach.
For later use we conclude the section by looking at some prototypical models. A simple analysis shows, somewhat surprisingly, that an innocent-looking example produces second moments that grow indefinitely.
2.1. Reaction networks and the master equation. We consider a chemical network consisting of D different chemical species interacting according to R prescribed reaction pathways. At any given time t, the state of the system is an integer vector X(t) ∈ Z D + = {0, 1, 2, . . .} D counting the number of individual molecules of each species. A reaction law is a prescribed change of state with an intensity defined by a reaction propensity, w r : Z D + → R + . This is the transition probability per unit of time for moving from the state x to x − N r . We specify such a reaction by the convenient notation (2.1)-(2.2), where N r ∈ Z D is the transition step and is the rth column in the stoichiometric matrix N ∈ Z D×R . Informally, for states x(t) ∈ R D + , we can picture (2.1)-(2.2) as a stochastic version of the time-homogeneous ordinary differential equation x ′ (t) = −Nw(x(t)), (2.3) where w(x) ≡ [w 1 (x), . . . , w R (x)] T is the column vector of reaction propensities.
The physical premises leading to a description in the form of discrete transition laws (2.2) often imply the existence of a system size V (e.g. physical volume or total number of individuals). For instance, in a given volume V the elementary chemical reactions (2.4)-(2.7) can be written using the state vector with the names of the species in capitals. The propensities are generally scaled such that w r (x) = V u r (x/V ) for some dimensionless function u r . Intensities of this form are called density dependent and arise naturally in a number of situations [9, Chap. 11]. For the rest of this paper we conveniently take V = 1 and defer system-size analysis to another occasion.
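As an illustration of the density-dependent scaling w r (x) = V u r (x/V ), the following sketch checks it for a hypothetical bimolecular propensity; the function names and numerical constants are ours, chosen purely for illustration:

```python
# Density-dependent scaling: w_r(x) = V * u_r(x / V).  The bimolecular rate
# below is a hypothetical illustration (names and constants are ours).
def w_bimolecular(x, k, V):
    """Mesoscopic propensity of a bimolecular channel in volume V."""
    a, b = x
    return k * a * b / V            # transition probability per unit time

def u_bimolecular(c, k):
    """Dimensionless concentration-level rate, evaluated at c = x / V."""
    ca, cb = c
    return k * ca * cb

V, k, x = 50.0, 0.3, (20, 35)
lhs = w_bimolecular(x, k, V)
rhs = V * u_bimolecular((x[0] / V, x[1] / V), k)
assert abs(lhs - rhs) < 1e-12       # w_r(x) == V * u_r(x / V)
print(lhs)                          # 4.2
```

Setting V = 1, as done in the remainder of the paper, simply identifies the mesoscopic propensity with its dimensionless counterpart.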
The models we consider here all have states in the positive integer lattice and the assumption that no transition can yield a state outside Z D + is therefore natural. We make this formal as follows [4, Chap. 8.2.2, Definition 2.4]:
Assumption 2.1 (Conservation and stability). For all propensities, w r (x) = 0 for any x ∈ Z D + such that x − N r ∉ Z D + , and we also restrict initial data to Z D + . Furthermore, for all finite arguments x, we assume that w r (x) is finite.
To state the chemical master equation (CME), let for brevity p(x, t) = P(X(t) = x| X(0) = x 0 ) be the probability that a certain number x of molecules is present at time t conditioned upon an initial state X(0) = x 0 . The CME is then given by [17,Chap The convention of the transpose of the operator to the right of (2.8) is the standard mathematical formulation of Kolmogorov's forward differential system [4,Chap. 8.3] in terms of which M is the infinitesimal generator of the associated Markov process. This is also the adjoint of the master operator M T in the sense that (M T p, q) = (p, Mq) in the Euclidean inner product over the state space. An explicit representation is Under assumptions to be prescribed later on it follows that the dynamics of the expected value of some time-independent unknown function f , conditioned upon the initial state x 0 , can be written We now consider a path-wise representation for the stochastic process X t .
2.2. The sample path representation. In the context of analyzing models in stochastic chemical kinetics, the path-wise jump SDE representation seems to have been first put to use in [25, unpublished manuscript], and it was later further detailed in [20]. We thus assume the existence of a probability space (Ω, F, P) with the filtration F t≥0 containing R-dimensional Poisson processes. The state of the system X(t) ∈ Z D + will be constructed from a stochastic integral with respect to suitably chosen Poisson random measures.
The transition probability (2.2) defines a counting process π r (t) counting at time t the number of reactions of type r that have occurred since t = 0. It follows that these processes fully determine the state X(t), X(t) = X(0) − Nπ(t), (2.11) where π(t) ≡ [π 1 (t), . . . , π R (t)] T . The counting processes are obtained from the transition intensities (cf. (2.1)), P(π r (t + dt) − π r (t) = 1 | F t ) = w r (X(t−)) dt + o(dt), (2.12) where by X(t−) we mean the value of the process prior to any transitions occurring at time t, and where the little-o notation is understood uniformly with respect to the state variable. Alternatively, using a random time change representation [9, Chap. 6.2], we can produce the counting process from a standard unit-rate Poisson process Π r , π r (t) = Π r ( ∫ 0 t w r (X(s−)) ds). (2.13) The marked counting measure [3, Chap. VIII] µ r (dt × dz; ω) with ω ∈ Ω defines an increasing sequence of arrival times τ i ∈ R + with corresponding "marks" z i ∈ I := [0, 1] distributed according to some probability distribution, which we will take to be uniform. The intensity m r (dt × dz) of µ r (dt × dz) is the Lebesgue measure scaled by the corresponding propensity, m r (dt × dz) = w r (X t− ) dt × dz. Using this formalism, (2.11) and (2.13) can be written in the jump SDE form (2.14), where µ = [µ 1 , . . . , µ R ] T . Here, the time τ − t to the arrival of the next reaction of type r is exponentially distributed with intensity w r (X t− ). Note that, by virtue of the nature of the propensities, the intensities of the counting processes therefore depend nonlinearly on the state [3, Chap. II.3].
Using that the point processes are independent and therefore have no common jump times [4, Chap. 8.1.3], we can obtain a more transparent notation in terms of a scalar counting measure. Define for this purpose and for any state x the cumulative intensities W r (x) ≡ Σ s≤r w s (x), (2.15) such that the total intensity is given by W (x) ≡ W R (x). Let the marks z i be uniformly distributed on I. Then the frequency of each reaction can be controlled through a set of indicator functions ŵ r : Z D + × I → {0, 1} defined according to ŵ r (x; z) = 1 if W r−1 (x) ≤ zW (x) < W r (x), and 0 otherwise, (2.16) with W 0 ≡ 0. Put ŵ(x; z) ≡ [ŵ 1 (x; z), . . . , ŵ R (x; z)] T and define also for later use the indicator form F̂ (x; z) = −N ŵ(x; z), (2.17) such that F (x) = W (x) ∫ I F̂ (x; z) dz. (2.18) The jump SDE (2.14) can now be written in terms of a scalar counting random measure µ through a state-dependent thinning procedure [6, Chap. 7.5], X t = X 0 + ∫ 0 t ∫ I F̂ (X s− ; z) µ(ds × dz). (2.19) Eq. (2.19) expresses exponentially distributed reaction times that arrive according to a point process of intensity m(dt × dz) = W (X t− ) dt × dz, carrying a mark which is uniformly distributed in I. This mark determines the ignition of one of the reaction channels according to the acceptance-rejection rule (2.16).
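The thinning construction is precisely what Gillespie's direct method implements in simulation: draw an exponential waiting time of total intensity W(x), then let a uniform mark select the channel through the cumulative intensities. A minimal sketch for the bi-molecular birth-death system (2.26); the stoichiometric matrix below is our reconstruction of the elided display (births of A and B, decay A + B → ∅):

```python
import numpy as np

rng = np.random.default_rng(1)

# Bi-molecular birth-death network (2.26).  The stoichiometric matrix is our
# reconstruction: columns are the transition steps N_r, and a firing of
# channel r moves the state x -> x - N_r.
N = np.array([[-1,  0, 1],
              [ 0, -1, 1]])        # birth of A, birth of B, decay A + B -> 0
k1, k2 = 1.0, 0.05

def w(x):
    """Reaction propensities w(x) = [k1, k1, k2*a*b]."""
    a, b = x
    return np.array([k1, k1, k2 * a * b])

def ssa(x0, T):
    """Direct method: exponential waiting time of total intensity W(x); a
    uniform mark z then selects channel r via the cumulative intensities,
    cf. the acceptance-rejection rule of the text."""
    x, t = np.array(x0), 0.0
    counts = np.zeros(3, dtype=int)          # pi_r(t): firings per channel
    while True:
        wx = w(x)
        W = wx.sum()
        t += rng.exponential(1.0 / W)
        if t > T:
            return x, counts
        z = rng.uniform()
        r = min(np.searchsorted(np.cumsum(wx), z * W), 2)  # guard roundoff
        x = x - N[:, r]
        counts[r] += 1

x0 = np.array([10, 10])
xT, counts = ssa(x0, T=5.0)
# Sample-path representation: X(t) = X(0) - N pi(t)
assert np.all(xT == x0 - N @ counts) and xT.min() >= 0
print(xT, counts)
```

The final assertion checks the counting-process bookkeeping X(t) = X(0) − Nπ(t) path-wise; nonnegativity of the state is automatic since the quadratic propensity vanishes on the boundary (Assumption 2.1).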
We will frequently decompose (2.19) into its "drift" and "jump" parts, X t = X 0 + ∫ 0 t F (X s− ) ds + ∫ 0 t ∫ I F̂ (X s− ; z) (µ − m)(ds × dz). (2.20) The second term in (2.20) is driven by the compensated measure (µ − m) and is a local martingale provided, in essence, that the path is absolutely integrable (see [3, Chap. VIII.1, Corollary C4] for details).
2.2.1. Localization; Itô's and Dynkin's formulas. In analytic work it is often necessary to 'tame' the process by deriving results under a stopping time τ P := inf t≥0 { X t > P } in some norm. Results for the stopped process X t∧τ P can then be transferred to the original process by letting P → ∞ under suitable conditions.
Although there are many general versions of Itô's change of variables formula available in the setting of semi-martingales (see for example [29, Chap. 2.7] and [26, Chap. II.7]), we shall make do with the following simple version [2, Chap. 4.4.2]. By the properties of the semi-martingale pure jump process we have f (X t ) = f (X 0 ) + Σ s [f (X s ) − f (X s− )], where the sum is over jump times s ∈ (0, t]. Using that X s = X s− − N ŵ(X s− ; z) we can write this in differential form as df (X t ) = ∫ I [f (X t− − N ŵ(X t− ; z)) − f (X t− )] µ(dt × dz). (2.21) Alternatively, decomposing (2.21) into drift and jump parts and taking expectation values we get, since the compensated measure is a local martingale, E f (X t∧τ P ) = f (X 0 ) + E ∫ 0 t∧τ P ∫ I [f (X s− − N ŵ(X s− ; z)) − f (X s− )] m(ds × dz). (2.23) This is Dynkin's formula [4, Chap. 9.2.2] for the stopped process and we note that (2.10) is just a differential version.

2.2.2. The validity of the master equation.
With this much formalism developed, we may conveniently quote the following result: Since the governing equation (2.10) for the expected value of f (X t ) is a direct consequence of (2.8), we can similarly conclude the following: Under the assumptions of Theorem 2.1, and if, moreover, in an arbitrary norm · , In stating these results we have suppressed the conditional dependency on the initial state which we for simplicity consider to be some non-random state x 0 .

2.3. Concrete examples.
Consider the bi-molecular birth-death system, that is, the system is in contact with a large reservoir such that A- and B-molecules are emitted at a constant rate k 1 , while a decay reaction happens with probability k 2 per unit of time whenever two molecules meet. For this example we have the stoichiometric matrix which, upon a moment's consideration, is just the same thing as the model for the difference of the two species counts, that is, a constant-intensity discrete random walk process. An explicit solution is the difference between two independent Poisson distributions, where N is a normally distributed random variable of the indicated mean and variance. Hence U t fluctuates between arbitrarily large and small values as t → ∞.
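Since births of A and B are independent Poisson processes of rate k 1 while the decay removes one molecule of each species, the difference U t changes only at birth events; under this reading of the elided display, a quick Monte Carlo check confirms that the variance of U t grows like 2 k 1 t:

```python
import numpy as np

rng = np.random.default_rng(0)

# U_t = A_t - B_t: births of A and B are independent Poisson processes of rate
# k1, and the decay A + B -> 0 removes one molecule of each, leaving U_t
# unchanged.  Hence U_t - U_0 = Pois(k1*t) - Pois(k1*t): mean 0, variance 2*k1*t.
k1, t, n = 2.0, 10.0, 200_000
U = rng.poisson(k1 * t, n) - rng.poisson(k1 * t, n)
print(U.mean(), U.var())        # close to 0 and 2*k1*t = 40
```

The unbounded variance is exactly the "innocent-looking example with indefinitely growing second moments" referred to in the introduction of this section.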

2.3.1. Reversible versions.
From time to time below we shall be concerned with the following closed version of (2.26), consisting of a single reversible reaction. This is clearly a finite system since the number a + b + 2c is always preserved. An open version of the same system is (2.30b), which will prove to be a useful example in the stochastic setting since, formally, all states in Z 3 + are reachable. For (2.30a) we have These examples, while very simple to deal with, will provide good counterexamples in both Sections 3 and 4.

3. Deterministic stability
In this section we shall be concerned with the deterministic drift part of the dynamics (2.20). We are interested in techniques for judging the stability of the time-homogeneous ODE (2.3), the so-called reaction rate equations implied by the rates (2.2). Stability and continuity with respect to initial data are considered in Sections 3.1 and 3.2, and techniques for explicitly obtaining all our postulated constants are discussed in Section 3.3.
Initially we will consider states x ∈ R D , but we will soon find it productive to restrict the treatment to x ∈ R D + . Note, however, that the counterexamples we construct remain relevant also when restricted to Z D + , and hence remain valid in the discrete stochastic setting.
3.1. Stability. Many stability proofs can be thought of as comparisons with relevant linear cases. This is the motivation for the well-known Grönwall's inequality which we state in the following two versions.
Lemma 3.1 (Grönwall's inequality). Let u be differentiable and satisfy u ′ (t) ≤ αu(t) + A with α ≠ 0. Then u(t) ≤ u(0)e αt + A/α (e αt − 1). (3.1) The same conclusion holds irrespective of the differentiability of u, but with α ≥ 0, under the weaker integral condition u(t) ≤ u(0) + ∫ 0 t αu(s) + A ds. (3.2) Proof. The constant A can be disregarded by using the substitution v(t) = u(t) + A/α. For the differential version one integrates the given inequality using exp(−αt) as an integrating factor. For the integral version, define the running integral; under the assumed integral inequality we readily get the same bound. The most immediate way of comparing the growth of solutions to the ODE (2.3) to those of a linear ODE is to require that the norm of the driving function is bounded in terms of its argument, ‖F (x)‖ ≤ A + α‖x‖, (3.3) since then by the triangle inequality Grönwall's inequality applies. Unfortunately, (3.3) is too strict a requirement for our applications.
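A numerical sanity check of the differential form: for the extremal case u′ = αu + A the Grönwall bound is attained, and a forward Euler integration (which under-estimates the convex exact solution) stays just below it. The constants below are arbitrary test values:

```python
import math

# Differential Gronwall: u'(t) <= alpha*u(t) + A  implies (for alpha != 0)
#   u(t) <= u(0)*exp(alpha*t) + (A/alpha)*(exp(alpha*t) - 1).
# Check on the extremal case u' = alpha*u + A via forward Euler, which
# under-estimates the convex exact solution and must stay below the bound.
alpha, A, u0, T, n = 0.7, 1.5, 2.0, 3.0, 100_000
dt, u = T / n, u0
for _ in range(n):
    u += dt * (alpha * u + A)
bound = u0 * math.exp(alpha * T) + (A / alpha) * (math.exp(alpha * T) - 1)
print(u, bound)
assert u <= bound and bound - u < 1e-3 * bound
```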
The problem with the simple condition (3.3) is that it does not take the direction of growth into account; the offending quadratic propensity in (2.26) actually decreases the number of molecules. To deal with this, let x ∈ R D be an arbitrary vector defining an "outward" direction. The length of the component of the driving function along this direction is (x, F (x))/‖x‖ and, in order not to have x driven too strongly out along this ray, we may, in view of Grönwall's inequality, naturally require that (x/‖x‖, F (x)) ≤ constant × ‖x‖ for ‖x‖ sufficiently large. Equivalently, for any x, (x, F (x)) ≤ A‖x‖ + α‖x‖ 2 , (3.5) where Grönwall's inequality applies anew. The assumption (3.5) is weaker than (3.3) since the latter implies the former by the Cauchy-Schwarz inequality. Indeed, as in the proof of Proposition 3.2 it is readily checked that for the bi-molecular birth-death system (2.26) we get (x, F (x)) = k 1 (a + b) − k 2 (a + b)ab, which this time readily can be bounded linearly in terms of ‖x‖ 2 . Unfortunately, in the case of an infinite state space and strong dependencies between the species the assumption (3.5) is also often unrealistic. Proof. As in the proof of Proposition 3.2 we look at a ray. The same argument applies also to (2.30b).
This negative result can perhaps best be appreciated as a kind of loss of information about the dependencies between the species in the functional form of the condition (3.5). The number of A- and B-molecules is strongly correlated with the number of C-molecules such that, in fact, in (2.30a) a + b + 2c is a preserved quantity. By contrast, in (3.5) the growth of ‖x‖ 2 is estimated from the sum of the growth of the individual elements of x as if they were independent.
A way around this limitation can be found provided that we leave the general case x ∈ R D . We therefore specify the discussion to the positive quadrant x ∈ R D + and assume from now on that it can be shown a priori that the initial data x 0 belongs to this set and that the subsequent trajectory x(t) never leaves it (compare Assumption 2.1).
It then follows that ‖x‖ 1 = (1, x), where 1 is the vector of length D containing all ones. This vector also defines a suitable "outward" vector for states x ∈ R D + since solutions to the ODE (2.3) cannot grow without simultaneously growing also in the direction of 1.
Again, in view of Grönwall's inequality (Lemma 3.1), we tentatively require that (1, F (x)) ≤ constant × ‖x‖ 1 for ‖x‖ 1 sufficiently large. Equivalently, for any x, (1, F (x)) ≤ A + α‖x‖ 1 , (3.7) implying bounded dynamics. We remark in passing that the criterion (3.7) is sharp in the sense that if the reversed inequality can be shown to be true, then the growth of solutions can be estimated from below. Example 3.1. As a point in favor of this approach we compute for the bi-molecular birth-death system (2.26), (1, F (x)) = (1, −Nw(x)) = 2k 1 − 2k 2 ab, which evidently falls under the assumption (3.7) with (A, α) = (2k 1 , 0). For the reversible case (2.30a) we similarly get (1, F (x)) = −k 1 ab + k 2 c such that (3.7) applies with (A, α) = (0, k 2 ). Finally, and in the same fashion, the open case (2.30b) is seen to be covered by letting (A, α) = (k 4 , (k 2 − k 3 ) ∨ 0).
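The two identities computed in Example 3.1 for (2.26) are easy to verify numerically. The stoichiometric matrix below is our reconstruction of the elided display (births of A and B, decay A + B → ∅), and the rate constants are arbitrary test values:

```python
import numpy as np

# F(x) = -N w(x) for the bi-molecular birth-death system (2.26).  The
# stoichiometry is our reconstruction: births of A and B, decay A + B -> 0.
N = np.array([[-1, 0, 1],
              [ 0, -1, 1]])
k1, k2 = 1.3, 0.4

def F(x):
    a, b = x
    return -N @ np.array([k1, k1, k2 * a * b])

rng = np.random.default_rng(3)
ones = np.ones(2)
for _ in range(100):
    a, b = rng.integers(0, 50, size=2).astype(float)
    x = np.array([a, b])
    # (x, F(x)) = k1*(a+b) - k2*(a+b)*a*b    (used with condition (3.5))
    assert np.isclose(x @ F(x), k1 * (a + b) - k2 * (a + b) * a * b)
    # (1, F(x)) = 2*k1 - 2*k2*a*b            (Example 3.1, condition (3.7))
    assert np.isclose(ones @ F(x), 2 * k1 - 2 * k2 * a * b)
print("identities verified")
```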
The chosen "outward" vector 1 is by no means special. Clearly, any strictly positive vector l may be used in its place since x 1 and x (l) := l T x are equivalent norms over R D + . This is a general and useful observation as it may be used to discard parts of a system that are closed without any restrictions on the associated propensities.
Example 3.2. For the reversible system (2.30a), we have already noted that a + b + 2c is a conserved quantity such that the choice l = [1, 1, 2] T yields d/dt ‖x‖ (l) = 0. The open case (2.30b) also benefits from this weighted norm. Example 3.3. A slightly more involved model reads as follows: This example has been constructed such that the quadratic reaction increases ‖x‖ 1 and hence (3.7) does not apply. However, taking l = [3, 3, 2] T we get a bounded weighted dynamics. This example hints at a general technique for obtaining suitable candidates for the weight vector l. Simply form the matrix N 2 consisting of the columns of N that are affected by superlinear propensities. If a vector l > 0 annihilating these columns exists, it can be found in the null-space of −N T 2 , readily available by linear algebra techniques. We omit the details.
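The null-space recipe can be carried out with standard linear algebra. A sketch for the reversible system (2.30a), A + B ⇌ C, whose single quadratic channel has, in our reconstruction of the elided stoichiometry, the transition step N 1 = (1, 1, −1) T (i.e. the state moves x → x − N 1 , so a and b decrease while c increases):

```python
import numpy as np

# Weight-vector technique: collect the columns of N carrying superlinear
# propensities into N2 and seek l > 0 with N2^T l = 0.  For the reversible
# system (2.30a) only the forward (quadratic) channel qualifies; its
# transition step is our reconstruction N_1 = (1, 1, -1)^T.
N2 = np.array([[1.0], [1.0], [-1.0]])

# The paper's choice l = (1, 1, 2) indeed annihilates the quadratic channel,
# so the weighted norm ||x||_(l) = l^T x is unaffected by it:
l = np.array([1.0, 1.0, 2.0])
assert np.allclose(N2.T @ l, 0.0)

# A basis for ker(N2^T) can be read off an SVD: the rows of Vt beyond the
# nonzero singular values span the null space.
_, s, Vt = np.linalg.svd(N2.T, full_matrices=True)
kernel = Vt[len(s):]
assert np.allclose(kernel @ N2, 0.0)
print(kernel.shape)        # (2, 3): two-dimensional null space
```

Any strictly positive combination of the kernel basis vectors is then a candidate weight vector.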

3.2. Continuity. For well-posedness of the ODE (2.3) we also need continuity with respect to the initial data. We cannot ask for uniform Lipschitz continuity since ‖F (x) − F (y)‖ ≤ L‖x − y‖ clearly implies (3.3), which we have already refuted. For the same reason, a uniform one-sided Lipschitz condition (x − y, F (x) − F (y)) ≤ λ‖x − y‖ 2 cannot be assumed to hold since it implies (3.5). The problem here is the global nature of the estimate and it therefore seems reasonable to localize this assumption. For instance, one might ask for (x − y, F (x) − F (y)) ≤ λ R ‖x − y‖ 2 whenever ‖x‖, ‖y‖ ≤ R, (3.9) presumably with some growth restrictions on λ R . Although theoretically precise, such an analysis is likely to be somewhat inconvenient when it comes to estimating actual constants in perturbation results. We shall therefore consider the following version, where the form of λ R has been restricted to better suit the present purposes. Trivially, the norms ‖·‖ 1 and ‖·‖ are equivalent and hence the specific choice made in (3.10) is just a matter of convenience. Since the idea here is to use a priori bounds on x and y when deriving perturbation bounds, using ‖·‖ 1 (or ‖·‖ (l) ) is natural.
Theorem 3.4. Suppose that the ODE (2.3) satisfies (3.7) and (3.10) and that initial data x 0 ∈ R D + implies a trajectory x(t) ∈ R D + . Then for any t ∈ [0, T ] there is a unique solution x(t). Moreover, define C(t; x 0 , y 0 ) ≡ M + µ ‖x t + y t ‖ 1 , where x t and y t are two trajectories associated with initial data x 0 , y 0 ∈ R D + , respectively. Then ‖x t − y t ‖ ≤ exp ( ∫ 0 t C(s; x 0 , y 0 ) ds) ‖x 0 − y 0 ‖. Proof. Combining (3.7) with Grönwall's inequality we get an a priori estimate for ‖x t ‖ 1 . Hence the (bounded) solution to the resulting linear differential inequality is readily found through its integrating factor. The order estimate is a consequence of the fact that the trajectory is continuous. Proof. The first assertion is immediate since the smallest such constant M by definition is the logarithmic norm (see e.g. [30]). To compute µ 2 [−N r q T r ] when q has the form indicated, we determine the extremal eigenvalue of −(N r q T r + q r N T r )/2. By the (signed) scaling invariance of the logarithmic norm we may without loss of generality take k ≡ 1. The spectral relation for an eigenpair (λ, z) can be written as −(1/2) N r,j z n = λz j , j = 1, 2, . . . , D, j ≠ n. For non-zero λ the first relation can be solved for z j . When inserted into the second relation, using that z n ≠ 0 (or otherwise z = 0), we get a quadratic equation for λ with a single extremal root.
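The characterization underlying these proofs is µ 2 [A] = λ max ((A + A T )/2), together with the elementary bound |µ 2 [A]| ≤ ‖A‖ 2 used below; both properties, as well as the one-sided estimate (x, Ax) ≤ µ 2 [A]‖x‖ 2 that drives bounds of type (3.10), are cheap to check numerically:

```python
import numpy as np

# Euclidean logarithmic norm: mu_2[A] = lambda_max((A + A^T)/2).  We check
# |mu_2[A]| <= ||A||_2 and the one-sided bound (x, A x) <= mu_2[A] * ||x||^2
# on random test matrices.
def mu2(A):
    return np.linalg.eigvalsh(0.5 * (A + A.T)).max()

rng = np.random.default_rng(7)
for _ in range(200):
    A = rng.standard_normal((4, 4))
    assert abs(mu2(A)) <= np.linalg.norm(A, 2) + 1e-12   # |mu_2| <= ||A||_2
    x = rng.standard_normal(4)
    assert x @ A @ x <= mu2(A) * (x @ x) + 1e-9
print("logarithmic norm bounds verified")
```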

[Table: elementary reactions and the corresponding logarithmic-norm bounds.]
Proof. Since S is symmetric we have x T Sx − y T Sy = (x + y) T S(x − y). Hence an explicit expression for µ is obtained as follows: The indicated upper bound is derived from the fact that |µ 2 [A]| ≤ ‖A‖ [30]. For the useful special case, define first the vector q = q 1 + q 2 in terms of q 1,j = kδ jm (x n + y n )/2 and q 2,j = kδ jn (x m + y m )/2. Using the fact that the logarithmic norm is sub-additive we can reuse the calculation in the proof of Proposition 3.5.

[Table: reactions and associated contractivity bounds.]
Example 3.7. As a highly prototypical example we consider the following natural extension of the bi-molecular birth-death model (2.26), where in this example it is informative to consider the dependence on the system size V. It is straightforward to show the bounds (A, α) ≤ (2k 1 V, −k 3 ) in (3.7) and hence that the system is effectively bounded despite being of open character. This is seen from the fact that, for states ‖x‖ 1 ≥ 2k 1 /k 3 · V, the dynamics is dissipative in the ‖·‖ 1 -norm. Furthermore, from Propositions 3.5 and 3.6 we get the sharp bounds (3.10). It follows that for states {x, y} such that ‖x + y‖ 1 ≤ 9.7k 1 /k 2 · V 2 , the dynamics is contractive in the Euclidean norm. For density dependent propensities we expect that ‖x‖ ∼ V in any norm as V grows, and hence the region of contractivity grows in a relative sense. Intuitively one expects that these results offer an insight into the evolution of the process that is relevant also in the stochastic setting.

4. Stochastic stability
We now consider the properties of the stochastic jump SDE (2.19). For convenience we start by collecting all assumptions in Section 4.1. In the stochastic setting the requirements for existence and uniqueness are slightly stronger than in the deterministic case such that the one-sided bound (3.10) needs to be augmented with an unsigned version, implying the assumption of at most quadratically growing propensities. We demonstrate that this assumption is reasonable by constructing a model involving cubic propensities and with unbounded second moments. On the positive side we show in Section 4.2 that the assumptions are strong enough to guarantee finite moments of any order during finite time intervals.
We prove existence and uniqueness of solutions to the jump SDE (2.19) in Section 4.3. A sufficient condition for the existence of asymptotic bounds of the pth order moment is given in Section 4.4 where we also derive some basic perturbation estimates. With these in mind and as an illustration, we treat in Section 4.5 a model which displays a particularly dramatic response when subject to parameter perturbations.
4.1. Working assumptions. We state formally the set of assumptions on the jump SDE (2.19) as follows: We assume the parameters {A, L, λ, Γ, γ} to be positive but allow also negative values of {α, M, µ}.
The only unique assumptions here are (i) and (ii) since the latter clearly implies (iii)-(iv). However, as we saw in Section 3.3, in (iv) it is often possible to find sharper constants M and µ by considering this bound in isolation. We note also that although (ii) is stronger than (iv), it is in particular valid for quadratic propensities as can be seen from the representation used in the proof of Proposition 3.6.
Proof. Assume that both the second and the third moment are bounded for t ∈ [0, T ) with T > 0. From (2.10) we get the governing equation such that the growth of the second moment remains bounded only provided that the third moment remains finite. It is convenient to look at the cumulative third order moment. From (2.10), By the arithmetic-geometric mean inequality, We put u 3 = E C 3 (X t ) and get a differential inequality which can be integrated and rearranged to produce the bound.
Hence the third, and consequently also the second moment explode for some finite t whenever X 0 ≥ 3.
Interestingly, we note that if X 0 = 3, then the probability that the cubic decay transition occurs first is 1/3, and if this happens the state of the system will be stuck with a single molecule indefinitely.

4.2. Moment bounds.
In this section we consider general moment bounds derived from (2.23) using localization. To get some guidance, let us first assume that the differential form of Dynkin's formula (2.10) is valid. Since any trajectory (X t ) t≥0 by the basic Assumption 2.1 will belong to Z D + , we may use that ‖X t ‖ 1 = (1, X t ). Hence from (2.10) with f (x) = (1, x) we get that d/dt E‖X t ‖ 1 = E(1, F (X t )) ≤ A + αE‖X t ‖ 1 by Assumption 4.1 (i). Clearly, the differential form of Grönwall's inequality in Lemma 3.1 applies here. A correct version of this argument unfortunately loses the sign of α.

Proposition 4.2. If Assumption 4.1 (i) is true, then E‖X t ‖ 1 ≤ (‖X 0 ‖ 1 + At) e α + t , where α + = α ∨ 0.
Here and below we shall make use of the stopping time τ P = inf t≥0 {‖X t ‖ 1 > P } and define t̄ = t ∧ τ P .
Proof. From (2.23) with f (x) = (1, x) we get that By the integral form of Grönwall's inequality in Lemma 3.1 we deduce, in terms of Y t := X t∧τ P , that E‖Y t ‖ 1 ≤ (‖X 0 ‖ 1 + At) e α + t , such that the same bound holds for X t by letting P → ∞.
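For the bi-molecular birth-death system (2.26), Example 3.1 gives (A, α) = (2k 1 , 0), so the first-moment bound reduces to E‖X t ‖ 1 ≤ ‖X 0 ‖ 1 + 2k 1 t. A Monte Carlo check, using our reconstructed stoichiometry for (2.26):

```python
import numpy as np

rng = np.random.default_rng(11)

# First-moment bound for the bi-molecular birth-death system (2.26): from
# Example 3.1, (A, alpha) = (2*k1, 0), so Gronwall's inequality yields
# E||X_t||_1 <= ||x_0||_1 + 2*k1*t.  Stoichiometry is our reconstruction.
N = np.array([[-1, 0, 1],
              [ 0, -1, 1]])
k1, k2 = 1.0, 0.1

def sample(x0, T):
    """One SSA path (direct method); returns the state at time T."""
    x, t = np.array(x0), 0.0
    while True:
        wx = np.array([k1, k1, k2 * x[0] * x[1]])
        W = wx.sum()
        t += rng.exponential(1.0 / W)
        if t > T:
            return x
        r = min(np.searchsorted(np.cumsum(wx), rng.uniform() * W), 2)
        x = x - N[:, r]

x0, T, n = (5, 5), 4.0, 2000
mean_l1 = np.mean([sample(x0, T).sum() for _ in range(n)])
bound = sum(x0) + 2 * k1 * T      # = 18
print(mean_l1, bound)
assert 0 < mean_l1 <= bound
```

The sample mean sits well below the bound, as expected, since the decay channel only removes probability mass from the drift term.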
We attempt a similar treatment for obtaining bounds in mean square. Assuming tacitly that (2.10) is valid, writing ‖x‖ 2 2 = x T x we get after some work that We expect from Grönwall's inequality that E‖X t ‖ 2 grows at most exponentially with αt whenever However, this tentative condition is often violated in practice since the second term −x T Nw(x) = (x, F (x)), and since we already know from Proposition 3.3 that this quantity does not admit bounds in terms of ‖x‖ 2 even for very simple problems.
Surprisingly, more realistic conditions arise when looking for bounds with respect to ‖X t ‖ 1 . The proof of Proposition 4.3 follows the same pattern as for Proposition 4.2, but using this time f (x) = ‖x‖ 2 1 = (1, x) 2 in (2.23). The condition (4.8) is typically more realistic than (4.7) since we recognize the term −1 T Nw(x)‖x‖ 1 = (1, F (x))‖x‖ 1 , which under the evidently reasonable Assumption 4.1 (i) is ≤ (A + α‖x‖ 1 )‖x‖ 1 . It follows that if (1 T N) 2 w(x) grows at most quadratically with ‖x‖ 1 , then this assumption is sufficient to yield bounds in mean square. Stated formally:
Proof. This is straightforward: we get by the assumptions and Hölder's inequality, where an application of Young's inequality yields the indicated bounds.
As a strong point in favor of our running assumptions we now demonstrate that the above reasoning can be generalized (Theorem 4.5): the condition sufficient for 1st order stability, paired with quadratic propensities, implies finite-time stability of moments of any order. Here $\beta_+ = \beta \vee 0$ and $\beta$ obeys the bound $\beta \le (p - 1 + p\alpha)$. Proof. Expanding and using Young's inequality with exponent $p/(p-1)$ and conjugate exponent $p$, respectively, we arrive at an estimate where Grönwall's integral inequality applies. Letting $P \to \infty$ we obtain the result.
As demonstrated in Examples 3.2 and 3.3, the vector $\mathbf{1}$ is not always the best choice for a system's "outward" direction. In such cases we note that the entire statement of Theorem 4.5, including the proof, holds verbatim if a suitably normalized vector $l > 0$ is used in place of $\mathbf{1}$ and the norm $\|\cdot\|_1$ is replaced with the equivalent norm $\|\cdot\|_{(l)}$.

4.3. Existence and uniqueness.
We shall now prove (Theorem 4.6 below) that the jump SDE (2.19) under Assumption 4.1 has a uniquely defined and locally bounded solution. To this end, and following [29, Sect. 3.1.2], we introduce a space of path-wise locally bounded processes, $S^{2,\mathrm{loc}}_{\mathcal{F}}(\mathbb{Z}^D_+)$. Proof. As before we use the stopping time $\tau_P = \inf\{t \ge 0;\ \|X_t\|_1 > P\}$ and put $\bar t = t \wedge \tau_P$. We get a decomposition from Itô's formula. Since the propensities are bounded for bounded arguments (Assumption 2.1), using the stopping time we find that the jump part is absolutely integrable and hence a local martingale $M_{\bar t}$. We estimate its quadratic variation and, taking suprema and expectation values, we get from Burkholder's inequality [26, Chap. IV.4], with some constants $C_3, \dots$, an estimate to which Grönwall's integral inequality applies. We have thus shown that $E\sup_{s \le t \wedge \tau_P}\|X_s\|_2^2$ can be bounded in terms of the initial data and the time $t$. The result now follows by letting $P \to \infty$ and using Fatou's lemma.
We note in passing that Theorem 4.6 can readily be generalized to solutions in the space $S^{p,\mathrm{loc}}_{\mathcal{F}}(\mathbb{Z}^D_+)$ for any $p \ge 1$. We shall be using the observation that, for $x \in \mathbb{Z}^D_+$, we have $\|x\|_1 \le \|x\|_2^2$ (referred to below as the "integer inequality").
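For completeness, the integer inequality is immediate coordinate-wise, since each coordinate of $x \in \mathbb{Z}^D_+$ is a non-negative integer and hence satisfies $x_i \le x_i^2$:

```latex
\|x\|_1 \;=\; \sum_{i=1}^{D} x_i \;\le\; \sum_{i=1}^{D} x_i^2 \;=\; \|x\|_2^2 ,
\qquad x \in \mathbb{Z}^D_+ .
```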
Proof. Under the same stopping time as before, Itô's formula yields, after discarding the martingale part, an integral inequality for the difference of two solutions. From the integer inequality we find that there is a constant $C \ge 0$ depending on $P$ such that a Lipschitz-type bound holds. Using that $X_0 = Y_0$ and Grönwall's integral inequality we conclude that the difference vanishes identically. Letting $P \to \infty$ and using Fatou's lemma the statement is therefore proved.

4.4. Basic perturbation estimates.
We next aim at deriving some stability estimates with respect to perturbations in initial data and of the coefficients. To this end we shall need a slightly more general Grönwall-type estimate than before. The same conclusion holds with $\beta \ge 0$ under a weaker integral condition. Proof. Performing the differentiation and dividing through by $u$ we get the implied inequality $u' \le \alpha(t)/2 + (\beta/2)u$. This inequality has the integrating factor $\exp(-\beta t/2)$ and is therefore readily solved. To prove the second part we define an auxiliary function $w$ and get under the assumptions that $w'(t) \le (\beta/2)\,w(t)$, such that $w(t) \le 0$.
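Written out, the integrating-factor step yields the explicit bound below; this reconstruction is our reading of the statement:

```latex
u'(t) \;\le\; \tfrac{1}{2}\alpha(t) + \tfrac{\beta}{2}\,u(t)
\quad\Longrightarrow\quad
\Bigl(e^{-\beta t/2}\,u(t)\Bigr)' \;\le\; \tfrac{1}{2}\,\alpha(t)\,e^{-\beta t/2} ,
% integrating from 0 to t:
u(t) \;\le\; e^{\beta t/2}\,u(0)
  + \tfrac{1}{2}\int_0^t e^{\beta(t-s)/2}\,\alpha(s)\,ds .
```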
Note the special case of the lemma with $\alpha(t) \equiv 1$, $\beta = 0$, and $u(0) = 0$, implying the (sharp) bound $u(t) \le t/2$. This observation shows that the integer inequality as used in the proof of Theorem 4.7 is crucial for path-wise uniqueness; without it the integral inequality (4.12) admits growing solutions.
Although Theorem 4.5 shows that all moments are bounded in finite time, a relevant question from the modeling point of view is whether the first few moments remain bounded indefinitely. We give a result to this effect which relies on the existence of solutions in $S^{2,\mathrm{loc}}_{\mathcal{F}}(\mathbb{Z}^D_+)$ and the assumption of at most quadratic growth. Under these conditions the differential form (2.10) may be used (cf. Corollary 2.2) such that the differential Grönwall inequality applies.
Then $E\|X_t\|_1^p$ is asymptotically bounded. Proof. We omit the case $p = 1$ since it follows from (4.3) under the present assumptions. The idea of the proof is to asymptotically bound $E(C + \|X_t\|_1)^p$ for a certain positive constant $C = C(p)$ to be decided upon below. By (2.10) we get, with $Z_t = C + \|X_t\|_1$ and using Taylor's formula with constants $\theta_r \in [0, 1]$, the bound (4.16); note that this estimate is one-sided in $\|X_t\|_1^p - \|X_0\|_1^p$ as it relies on the signed Assumption 4.1 (i). To get a two-sided version we proceed by integrating (2.10) and using the mean-value theorem, Assumption 4.1 (iii), and various simple estimates which we omit for brevity. Using the equivalence of norms, a very similar proof is readily obtained for $M^p_2$.
Theorem 4.11 (Initial data perturbation). Let Assumption 4.1 (i)-(iv) hold true and let two solutions $X_t$ and $Y_t$ of (2.19) be given, both generated by the same counting measure but with different initial data. Put $\Sigma_0 := \|X_0 + Y_0\|_1$. Then a mean-square estimate in terms of $\Sigma_0$ holds. Proof. Our starting point is the differential version (4.22) of (4.12). Using the properties of the 1-norm over $\mathbb{Z}^D_+$, one of the terms is readily bounded by Lemma 4.10. The term $R_1$ needs slightly more work. Subtracting and using the previous bound we proceed via the Cauchy–Schwarz inequality. For brevity, put $\bar X_t := X_t - X_0$ and $\bar Y_t := Y_t - Y_0$. By the properties of the inner product and the integer inequality we obtain a bound which, inserted into (4.23) and simplified a bit, again invokes Lemma 4.10. Starting from (4.22), using the Cauchy–Schwarz inequality several times and combining the bounds for $R_1$ and $R_2$, we infer the desired differential inequality. For the lower order terms we have used the observation that, by path-wise uniqueness, we may introduce an indicator function; without this variant of the integer inequality there would be a growing deviation between the two trajectories even with identical initial data. The theorem now follows as an application of Lemma 4.8 with $u(t)^2$ the mean-square deviation between the trajectories.

We next consider perturbations in the reaction coefficients. Given the linear dependence on the coefficients $k_r$, $r = 1, \dots, 4$, in the elementary reactions (2.4)-(2.7), a suitable model seems to be that a perturbation $k_r \to k_r + \delta_r$ in a propensity $w_r(x)$ spreads linearly in a relative sense, $|w_r(x, k_r) - w_r(x, k_r + \delta_r)| \le \mathrm{constant} \times \delta_r\,w_r(x, k_r)$ (4.24). We make this formal by simply requiring (4.25), where $\delta$ is a suitable measure of the total perturbation vector and where the perturbed propensity vector function is denoted $w^\delta$. We assume that the entire statement of Assumption 4.1 carries over to the perturbed system and, for convenience, that all constants are the same.
By the triangle inequality and Assumption 4.1 (ii) and (iii) we have the bound $\|w(x) - w^\delta(y)\|_1 \le \|w(x) - w(y)\|_1 + \|w(y) - w^\delta(y)\|_1 \le (L + \lambda\|x + y\|_1)\|x - y\|_1 + \delta(\Gamma + \gamma\|y\|_1^2)$ (4.27). Also, induced from (4.25), it will be convenient to define $\delta_F$ to be the smallest constant such that the corresponding bound holds for $F$. We shall need a slight extension of Lemma 4.8 as follows. Consider the differential inequality $\frac{d}{dt}u(t)^2 \le \alpha(t)u + \beta u^2 + \gamma(t)$ with $\alpha(t)$ and $\gamma(t)$ non-negative functions, but again $\beta$ a signed constant. Using an integrating factor we obtain the bound (4.30). We also introduce a convenient notation to keep track of estimates involving exponentials and define $\exp_+(\alpha t) = \exp(0 \vee \alpha t)$ as a majorant of $\exp(\alpha t)$. For $t \ge 0$ and $f$ positive and non-decreasing we get the convenient rule $\int_0^t f(s)\exp_+(\alpha s)\,ds \le f(t)\,t\exp_+(\alpha t)$ (4.31). This notation behaves intuitively in most cases, a notable exception being when mixing it with ordinary exponentials, e.g., $\exp(\alpha t)\exp_+(-\alpha t) = \exp_+(\alpha t)$. Another example is that (4.30) takes the compact form $u(t) \le \exp_+(\beta t/2)\bigl(\gamma(t)^{1/2}t^{1/2} + \tfrac{1}{2}\alpha(t)\,t\bigr)$ (4.32), provided that $\gamma$ and $\alpha$ are non-decreasing and that $u(0) = 0$. This form is used in the proof below.
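The rule (4.31) is seen as follows: $s \mapsto \exp_+(\alpha s)$ is non-decreasing on $[0, t]$ for either sign of $\alpha$ (for $\alpha \ge 0$ it equals $e^{\alpha s}$, while for $\alpha < 0$ it is constantly $1$), so

```latex
\int_0^t f(s)\,\exp_+(\alpha s)\,ds
\;\le\; \exp_+(\alpha t)\int_0^t f(s)\,ds
\;\le\; f(t)\,t\,\exp_+(\alpha t) ,
```

using in the last step that $f$ is positive and non-decreasing.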
Theorem 4.12 (Coefficient perturbation). Let two solutions $X_t$ and $Y_t$ of (2.19) be given, both generated by the same counting measure and with the same initial data, but with the propensities for $Y_t$ perturbed by $\delta$ as indicated in (4.25)-(4.26). Put $W_0 := \Gamma + \gamma\|X_0\|_1^2$. Then, firstly, there is the "small perturbation", or "short time", estimate (4.33). When $t$ is large, a generally sharper estimate is (4.34), where $L$ and $\lambda$ are defined as in Theorem 4.11 and, similarly, $\bar\delta = \|\mathbf{1}^TN\|_\infty^2\,\delta$.
The first estimate (4.33) is theoretically the most important one since it shows continuity as δ → 0.
Proof. We start from a differential version of (4.11), but taking the perturbation of the coefficients into account, after using the Cauchy–Schwarz inequality and the bounds (4.27) and (4.28), respectively. In the proof of Theorem 4.11 we have already constructed bounds for two of the terms in the resulting expression. For the other two terms we use firstly, from Propositions 4.3 and 4.4, a moment bound via Lemma 4.10, and then the Cauchy–Schwarz inequality. The asymptotically sharper estimate (4.34) now follows as an application of the extended Grönwall inequality as expressed in (4.32). The short time estimate (4.33) results from using the integer inequality rather than the Cauchy–Schwarz inequality, and then applying (4.32). This has the effect of transferring all terms which do not vanish with the perturbation $\delta$ into the exponent and is therefore sharper when $\delta$ and/or $t$ are small.
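As a numerical illustration of the continuity in $\delta$ expressed by (4.33), the sketch below (again our own toy construction, not the paper's) couples a nominal and a perturbed birth-death process by driving both simulations with identically seeded random streams, a common-random-numbers stand-in for "generated by the same counting measure", and observes that the RMS deviation at a fixed time shrinks with the relative perturbation $\delta$ of the birth rate:

```python
import numpy as np

def ssa(a, b, x0, t_end, rng):
    """Gillespie direct method for  0 -> X (rate a),  X -> 0 (rate b*x)."""
    t, x = 0.0, x0
    while True:
        w_tot = a + b * x
        if w_tot == 0.0:
            return x
        tau = rng.exponential(1.0 / w_tot)
        if t + tau > t_end:
            return x
        t += tau
        x += 1 if rng.random() * w_tot < a else -1

def rms_dev(delta, n_pairs=400, seed0=42):
    """RMS of X_t - X_t^delta at t = 1 under common random numbers."""
    sq = []
    for i in range(n_pairs):
        r1 = np.random.default_rng(seed0 + i)   # identical seeds: both copies
        r2 = np.random.default_rng(seed0 + i)   # consume the same randomness
        x  = ssa(5.0, 1.0, 0, 1.0, r1)
        xd = ssa(5.0 * (1.0 + delta), 1.0, 0, 1.0, r2)  # perturbed birth rate
        sq.append((x - xd) ** 2)
    return float(np.sqrt(np.mean(sq)))

print(rms_dev(0.0), rms_dev(0.02), rms_dev(1.0))  # 0.0, then increasing in delta
```

At $\delta = 0$ the two trajectories coincide path-wise, and the deviation grows with $\delta$, mirroring the qualitative content of Theorem 4.12.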
where we may think of $E$ as an enzyme and $C$ as an intermediate complex.
We combine (4.36) with (4.37) and enforce $C := E[C_\infty] = 10 = E[E_\infty] =: E$ at steady state (note that $E = \alpha_E/\beta_E$). We choose units of time such that $\beta_C \equiv 1$ and use the same value for $\beta_E$. We also conveniently determine $\alpha_C$ from the approximation $C \approx \alpha_C/(\beta_C + kE)$. To close the model we make the choice $k = 10^2$, since a relatively large value will increase the sensitivity to parameter perturbations (by Table 3.2 we have $\mu \le k/4$ in Assumption 4.1 (iv), implying a large exponent in both (4.33) and (4.34)).
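The parameter choices above amount to the following back-of-envelope arithmetic (the variable names are ours; the relations are the ones quoted in the text):

```python
# Units of time fixed by beta_C = 1; beta_E takes the same value.
beta_C = 1.0
beta_E = beta_C
E = 10.0                        # target steady-state enzyme level
C = 10.0                        # target steady-state complex level
k = 1.0e2                       # deliberately large bimolecular rate constant
alpha_E = E * beta_E            # from E = alpha_E / beta_E
alpha_C = C * (beta_C + k * E)  # from the approximation C ~ alpha_C / (beta_C + k*E)
mu_bound = k / 4.0              # Table 3.2: mu <= k/4 in Assumption 4.1 (iv)
print(alpha_E, alpha_C, mu_bound)  # 10.0 10010.0 25.0
```

Note that the value $mu\_bound = 25$ is consistent with the bound $(M, \mu) \le (1, 25)$ quoted for the original system below.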
The chemical network thus constructed in a 'bottom-up' fashion is very nearly the same as the one considered in [24]. The main difference is the interpretation of (4.36), which in [24] is understood to be inhibitive; hence the final product is obtained from the intermediate complex by maturation.
Consider now the effect on $C$ of a perturbation in the amount of enzyme $E$. Specifically, we perturb the system according to $\alpha_E \to \alpha_E(1 - \delta) := \alpha_E/2$, such that by linearity of (4.37), $E$ is reduced by a factor of $1/2$. In an attempt to estimate the effect of this perturbation, one might consider effectively linearizing the model by replacing (4.36) with (4.38), thereby also making the dynamics one-dimensional. In Figure 4.1 we investigate the difference between the nonlinear and linear models by solving numerically the deterministic rate equation (2.3) (which is exact in the linear case). Although the latter model reacts more quickly, the final response is in both cases to increase the amount of complex $C$ by a factor of 2, which is also what is intuitively expected. However, one can show using the techniques in Propositions 3.5 and 3.6 that the original system (4.36)-(4.37) satisfies the bounds $(M, \mu) \le (1, 25)$, while the linear version (4.37)-(4.38) admits $(M, \mu) \le (-1001, 0)$ in Assumption 4.1 (iv). Although in the former case the actual figures might be overly pessimistic, the contractive character of the latter clearly hints that the linearized model cannot be used to predict the sensitivity of the original model. This is confirmed in Figure 4.2, where the result of stochastic simulations of (4.36)-(4.37) is displayed. Not only is the initial response faster than observed for the nonlinear rate equations in Figure 4.1, the final response is also much larger (by a factor of two). The results are even more dramatic when, as in Theorem 4.12, one looks at the RMS difference between $C(t)$ and the perturbed trajectory $C^\delta(t)$. Although for this example the estimates in Theorem 4.12 yield qualitative insight, they are also very crude compared to the observed results. Since we have already remarked that the corresponding estimates in the deterministic case are quite similar under the present assumptions, this criticism applies in the plain ODE setting as well.
To improve on this 'worst case' character of the estimates it seems difficult to do more than careful studies on a case-by-case basis.

Conclusions
We have proposed a theoretical framework consisting of a priori assumptions and estimates for problems in stochastic chemical kinetics. The assumptions are strong enough to guarantee well-posedness for a large and physically relevant class of problems. Explicit upper bounds for the effect of perturbations in initial data and in rate constants have been obtained. The assumptions are constructive in the sense that explicit techniques for obtaining all postulated constants have either been worked out in detail or at least indicated.
In the course of motivating our setup we have seen that most problems do not admit global Lipschitz constants and that one-sided versions do not provide a better alternative. Another conclusion worth highlighting is that it pays off to consider jump SDEs in a fully discrete setting, since there are potential complications in proving path-wise uniqueness when the state space is continuous. A practical implication is that some care should be exercised when forming continuous approximations to these types of jump SDEs.
It should be clear that we have not considered all possible perturbation results. For instance, bounds of the form $E\sup_{s \le t}(\cdot)$ were restricted to mere existence in the 2nd order moment. Also, ergodicity has not been discussed at all, although we were able to show under certain conditions that the systems admit asymptotically bounded moments.
For future work we intend to revisit certain classical results from the perspective of the framework developed herein; for example, thermodynamic limit results, time discretization strategies, and quasi-steady-state approximations, all of which have practical impact in a range of applications.