Domain Decomposition of an Optimal Control Problem for Semi-Linear Elliptic Equations on Metric Graphs with Application to Gas Networks

We consider optimal control problems for the flow of gas in a pipe network. The equations of motions are taken to be represented by a semi-linear model derived from the fully nonlinear isothermal Euler gas equations. We formulate an optimal control problem on a given network and introduce a time discretization thereof. We then study the well-posedness of the corresponding time-discrete optimal control problem. In order to further reduce the complexity, we consider an instantaneous control strategy. The main part of the paper is concerned with a non-overlapping domain decomposition of the semi-linear elliptic optimal control problem on the graph into local problems on a small part of the network, ultimately on a single edge.

Share and Cite:

Leugering, G. (2017) Domain Decomposition of an Optimal Control Problem for Semi-Linear Elliptic Equations on Metric Graphs with Application to Gas Networks. Applied Mathematics, 8, 1074-1099. doi: 10.4236/am.2017.88082.

1. Introduction

1.1. Modeling of Gas Flow in a Single Pipe

The Euler equations are a system of nonlinear hyperbolic partial differential equations (PDEs) that describe the motion of a compressible inviscid fluid or a gas. They consist of the continuity equation, the balance of momentum and the energy equation. The full set of equations is given below. Let $\rho$ denote the density, $v$ the velocity of the gas and $p$ the pressure. We further denote by $\lambda$ the friction coefficient of the pipe, by $D$ its diameter and by $a$ its cross-sectional area. The state variables of the system are the density $\rho$ and the flux

$q=a\rho v$ . We also denote by $c$ the speed of sound, i.e. ${c}^{2}=\frac{\partial p}{\partial \rho }$ (for constant

entropy). For natural gas we have $c\approx 340$ m/s. In particular, in the subsonic case ( $|v|<c$ ), the one which we consider in the sequel, two boundary conditions have to be imposed, one on the left and one on the right end of the pipe. We consider here the isothermal case only. Thus, for horizontal pipes

$\begin{array}{l}\frac{\partial \rho }{\partial t}+\frac{\partial }{\partial x}\left(\rho v\right)=0\\ \frac{\partial }{\partial t}\left(\rho v\right)+\frac{\partial }{\partial x}\left(p+\rho {v}^{2}\right)=-\frac{\lambda }{2D}\rho v|v|.\end{array}$ (1)

In the particular case where the speed of sound $c=\sqrt{\frac{p}{\rho }}$ is constant and the velocities are small, $|v|\ll c$ , we arrive at the semi-linear model

$\begin{array}{l}\frac{\partial \rho }{\partial t}+\frac{\partial }{\partial x}\left(\rho v\right)=0\\ \frac{\partial }{\partial t}\left(\rho v\right)+\frac{\partial p}{\partial x}=-\frac{\lambda }{2D}\rho v|v|.\end{array}$ (2)

1.2. Network Modeling

Let $G=\left(V,E\right)$ denote the graph of the gas network with vertices (nodes) $V=\left\{{n}_{1},{n}_{2},\cdots ,{n}_{|V|}\right\}$ and edges $E=\left\{{e}_{1},{e}_{2},\cdots ,{e}_{|E|}\right\}$ . Node indices are denoted $j\in \mathcal{J},|\mathcal{J}|=|V|$ , while edges are labelled $i\in \mathcal{I},|\mathcal{I}|=|E|$ . For the sake of uniqueness, we associate to each edge a direction. Accordingly, we introduce the edge-node incidence matrix

${d}_{ij}=\left\{\begin{array}{rl}-1,&\text{if node }{n}_{j}\text{ is the left node of edge }{e}_{i},\\ +1,&\text{if node }{n}_{j}\text{ is the right node of edge }{e}_{i},\\ 0,&\text{else}.\end{array}\right.$
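For concreteness, the incidence matrix and the node degrees introduced below can be assembled as follows; the four-node star and its edge orientations are hypothetical example data, not taken from the paper.

```python
import numpy as np

# Hypothetical 4-node star: edges given as (left node, right node) pairs,
# with 0-based node indices; e1 = (n1 -> n4), e2 = (n2 -> n4), e3 = (n4 -> n3).
edges = [(0, 3), (1, 3), (3, 2)]
num_nodes = 4

# d[i, j] = -1 if node j is the left node of edge i, +1 if it is the right node.
d = np.zeros((len(edges), num_nodes), dtype=int)
for i, (left, right) in enumerate(edges):
    d[i, left] = -1
    d[i, right] = +1

# The degree d_j = |I_j| counts the edges incident at node j;
# nodes with degree > 1 are the multiple nodes.
degree = np.count_nonzero(d, axis=0)
```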

In contrast to the classical notion of discrete graphs, the graphs considered here are known as metric graphs, in the sense that the edges are continuous curves; in fact, we consider straight edges, along which differential equations hold. The pressure variables ${p}_{i}\left({n}_{j}\right)$ coincide for all edges incident at node ${n}_{j}$ , i.e. for $i\in {\mathcal{I}}_{j}:=\left\{i\in \left\{1,\cdots ,|E|\right\}|{d}_{ij}\ne 0\right\}$ . We express the transmission conditions at the nodes in the following way. We introduce the edge degree ${d}_{j}:=|{\mathcal{I}}_{j}|$ of node ${n}_{j}$ . We distinguish between multiple nodes ${n}_{j}$ , where ${d}_{j}>1$ , whose index set we denote by ${\mathcal{J}}^{M}$ , and simple nodes ${n}_{j}$ , for which ${d}_{j}=1$ , where we write ${\mathcal{J}}^{S}$ . The set of simple nodes decomposes into the simple nodes where Dirichlet conditions hold, ${\mathcal{J}}_{D}^{S}$ , and the Neumann nodes ${\mathcal{J}}_{N}^{S}$ . The continuity conditions then read as follows

${p}_{i}\left({n}_{j},t\right)={p}_{k}\left({n}_{j},t\right),\text{\hspace{0.17em}}\forall i,k\in {\mathcal{I}}_{j},\text{\hspace{0.17em}}j\in {\mathcal{J}}^{M},\text{\hspace{0.17em}}t\in \left(0,T\right).$ (3)

The nodal balance equation for the fluxes can be written as an instance of the classical Kirchhoff-type condition

$\underset{i\in {\mathcal{I}}_{j}}{\sum }{d}_{ij}{q}_{i}\left({n}_{j},t\right)=0,\text{\hspace{0.17em}}j\in {\mathcal{J}}^{M},\text{\hspace{0.17em}}t\in \left(0,T\right).$ (4)

From the considerations above we conclude the following system of semi-linear hyperbolic equations on the metric graph $G$ :

$\begin{array}{l}{\partial }_{t}{p}_{i}\left(x,t\right)+\frac{{c}_{i}^{2}}{{a}_{i}}{\partial }_{x}{q}_{i}\left(x,t\right)=0,\quad i\in \mathcal{I},\ \left(x,t\right)\in \left(0,{\ell }_{i}\right)\times \left(0,T\right)\\ {\partial }_{t}{q}_{i}\left(x,t\right)+{\partial }_{x}{p}_{i}\left(x,t\right)=-\frac{\lambda {c}_{i}^{2}}{2{D}_{i}{a}_{i}^{2}}\frac{{q}_{i}\left(x,t\right)|{q}_{i}\left(x,t\right)|}{{p}_{i}\left(x,t\right)},\quad i\in \mathcal{I},\ \left(x,t\right)\in \left(0,{\ell }_{i}\right)\times \left(0,T\right)\\ {h}_{j}\left({p}_{i}\left({n}_{j},t\right),{q}_{i}\left({n}_{j},t\right)\right)={u}_{j}\left(t\right),\quad i\in {\mathcal{I}}_{j},\ j\in {\mathcal{J}}^{S},\ t\in \left(0,T\right)\\ {p}_{i}\left({n}_{j},t\right)={p}_{k}\left({n}_{j},t\right),\quad \forall i,k\in {\mathcal{I}}_{j},\ j\in {\mathcal{J}}^{M},\ t\in \left(0,T\right)\\ \underset{i\in {\mathcal{I}}_{j}}{\sum }{d}_{ij}{q}_{i}\left({n}_{j},t\right)=0,\quad j\in {\mathcal{J}}^{M},\ t\in \left(0,T\right)\\ {p}_{i}\left(x,0\right)={p}_{i,0}\left(x\right),\quad {q}_{i}\left(x,0\right)={q}_{i,0}\left(x\right),\quad x\in \left(0,{\ell }_{i}\right),\ i\in \mathcal{I}.\end{array}$ (5)

To the best of the author's knowledge, no published results are available for problem (5).

2. Optimal Control Problems and Outline

We are now in the position to formulate optimal control problems on the level of (5). There are currently two different approaches towards optimizing and/or controlling the flow of gas through pipe networks. The first one aims at optimizing discrete decision variables such as on-off states for valves and compressors or zero-or-full supply and demand variables for input and exit nodes, respectively. Valves and compressors can be modelled as transmission conditions at a serial node. We refer to the literature and refrain in the sequel from discussing issues of valves and compressors. The combined discrete and continuous optimization will be the subject of a forthcoming publication. We now describe the general format of an optimal control problem associated with the semi-linear model equations.

$\begin{array}{l}\underset{\left(p,q,u\right)\in \Xi }{min}I\left(p,q,u\right):=\underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{T}{\int }}\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}{I}_{i}\left({p}_{i},{q}_{i}\right)\text{d}x\text{d}t+\frac{\nu }{2}\underset{j\in {\mathcal{J}}^{S}}{\sum }\underset{0}{\overset{T}{\int }}{|{u}_{j}\left(t\right)|}^{2}\text{d}t\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}s.t.\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left(p,q,u\right)\text{\hspace{0.17em}}\text{satisfies}\text{\hspace{0.17em}}\left(\text{5}\right),\end{array}$ (6)

$\Xi :=\left\{\left(p,q,u\right):{\underset{_}{p}}_{i}\le {p}_{i}\le {\stackrel{¯}{p}}_{i},\text{\hspace{0.17em}}{\underset{_}{q}}_{i}\le {q}_{i}\le {\stackrel{¯}{q}}_{i},\text{\hspace{0.17em}}{\underset{_}{u}}_{i}\le {u}_{i}\le {\stackrel{¯}{u}}_{i},\text{\hspace{0.17em}}i\in \mathcal{I}\right\}.$ (7)

In (6), $\nu >0$ is a penalty parameter and ${I}_{i}\left(\cdot ,\cdot \right)$ is a continuous function of the pair $\left(p,q\right)$ . In (7), the quantities ${\underset{_}{p}}_{i},{\underset{_}{q}}_{i},{\stackrel{¯}{p}}_{i},{\stackrel{¯}{q}}_{i}$ are given constants that determine the feasible pressures and flows in pipe $i$ , while ${\underset{_}{u}}_{i},{\stackrel{¯}{u}}_{i}$ describe control constraints. In the continuous-time case the inequalities are understood to hold for all times and everywhere along the pipes. In the sequel, we will consider neither control constraints nor state constraints and, moreover, reduce the problem to a time semi-discretization.

Time Discretization

We now consider a time discretization of (5): the interval $\left[0,T\right]$ is decomposed by break points ${t}_{0}=0<{t}_{1}<\cdots <{t}_{N}=T$ with widths $\Delta {t}_{n}:={t}_{n+1}-{t}_{n},n=0,\cdots ,N-1$ . Accordingly, we denote ${p}_{i}\left(x,{t}_{n}\right)=:{p}_{i,n}\left(x\right),{q}_{i}\left(x,{t}_{n}\right)=:{q}_{i,n}\left(x\right),n=0,\cdots ,N$ . We consider a mixed implicit-explicit Euler scheme which treats ${p}_{i}$ in the friction term explicitly.

$\begin{array}{l}\frac{1}{\Delta t}{p}_{i,n+1}\left(x\right)+\frac{{c}_{i}^{2}}{{a}_{i}}{\partial }_{x}{q}_{i,n+1}\left(x\right)=\frac{1}{\Delta t}{p}_{i,n}\left(x\right),\quad x\in \left(0,{\ell }_{i}\right),\ i\in \mathcal{I}\\ \frac{1}{\Delta t}{q}_{i,n+1}\left(x\right)+{\partial }_{x}{p}_{i,n+1}\left(x\right)=-\frac{\lambda {c}_{i}^{2}}{2{D}_{i}{a}_{i}^{2}}\frac{{q}_{i,n+1}\left(x\right)|{q}_{i,n+1}\left(x\right)|}{{p}_{i,n}\left(x\right)}+\frac{1}{\Delta t}{q}_{i,n}\left(x\right),\quad x\in \left(0,{\ell }_{i}\right),\ i\in \mathcal{I}\\ {g}_{j}\left({p}_{i,n+1}\left({n}_{j}\right),{q}_{i,n+1}\left({n}_{j}\right)\right)={u}_{j,n+1},\quad i\in {\mathcal{I}}_{j},\ j\in {\mathcal{J}}^{S}\\ {p}_{i,n+1}\left({n}_{j}\right)={p}_{k,n+1}\left({n}_{j}\right),\quad \forall i,k\in {\mathcal{I}}_{j},\ j\in {\mathcal{J}}^{M}\\ \underset{i\in {\mathcal{I}}_{j}}{\sum }{d}_{ij}{q}_{i,n+1}\left({n}_{j}\right)=0,\quad j\in {\mathcal{J}}^{M}\\ {p}_{i,0}\left(x\right)={p}_{i}^{0}\left(x\right),\quad {q}_{i,0}\left(x\right)={q}_{i}^{0}\left(x\right),\quad x\in \left(0,{\ell }_{i}\right),\ i\in \mathcal{I},\end{array}$ (8)

where ${p}_{i}^{0},{q}_{i}^{0}$ denote the given initial data.
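To make the structure of one time step concrete, the following sketch assembles and solves the linear part of scheme (8) on a single pipe with first-order finite differences; the friction term is dropped, and all numerical values (grid, time step, boundary data) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def euler_step(p_old, q_old, h, dt, c, a, p_left, q_right):
    """One implicit Euler step for the frictionless part of (8) on one pipe:
        (1/dt) p + (c^2/a) dq/dx = (1/dt) p_old,
        (1/dt) q + dp/dx         = (1/dt) q_old,
    with p prescribed at x = 0 and q prescribed at x = l (forward differences)."""
    M = len(p_old) - 1                  # number of grid cells
    alpha = 1.0 / dt
    n = 2 * (M + 1)                     # unknowns: p_0..p_M, q_0..q_M
    A = np.zeros((n, n))
    b = np.zeros(n)
    for j in range(M):                  # continuity equation on cell j
        A[j, j] = alpha
        A[j, M + 1 + j] = -c**2 / (a * h)
        A[j, M + 2 + j] = c**2 / (a * h)
        b[j] = alpha * p_old[j]
    for j in range(M):                  # momentum equation on cell j
        A[M + j, M + 1 + j] = alpha
        A[M + j, j] = -1.0 / h
        A[M + j, j + 1] = 1.0 / h
        b[M + j] = alpha * q_old[j]
    A[2 * M, 0] = 1.0                   # pressure datum at x = 0
    b[2 * M] = p_left
    A[2 * M + 1, n - 1] = 1.0           # flux datum at x = l
    b[2 * M + 1] = q_right
    z = np.linalg.solve(A, b)
    return z[:M + 1], z[M + 1:]
```

With constant initial data and matching boundary data, the step reproduces the constant state, which is a convenient consistency check of the assembly.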

We then obtain the optimal control problem on the time-discrete level:

$\begin{array}{l}\underset{\left(p,q,u\right)}{min}I\left(p,q,u\right):=\underset{i\in \mathcal{I}}{\sum }\underset{n=1}{\overset{N}{\sum }}\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}{I}_{i}\left({p}_{i,n},{q}_{i,n}\right)\text{d}x+\frac{\nu }{2}\underset{j\in {\mathcal{J}}^{S}}{\sum }\underset{n=1}{\overset{N}{\sum }}{|{u}_{j}\left(n\right)|}^{2}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}s.t.\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left(p,q,u\right)\text{\hspace{0.17em}}\text{satisfies}\text{\hspace{0.17em}}\left(8\right).\end{array}$ (9)

In (9), we consider edgewise given cost functions, e.g.

${I}_{i}\left({p}_{i,n},{q}_{i,n}\right)\left(x\right):=\frac{{\kappa }_{i}}{2}\left\{{|{p}_{i,n}\left(x\right)-{p}_{i,n}^{d}\left(x\right)|}^{2}+{|{q}_{i,n}\left(x\right)-{q}_{i,n}^{d}\left(x\right)|}^{2}\right\},x\in \left(0,{\mathcal{l}}_{i}\right),\text{\hspace{0.17em}}i\in \mathcal{I}.$

It is clear that (9) involves all time steps in the cost functional. We would like to reduce the complexity of the problem even further. To this end, we consider what has come to be known as instantaneous control. This amounts to reducing the sums in the cost function of (9) to the time level ${t}_{n+1}$ . This strategy is known as the rolling horizon approach, the simplest instance of the moving horizon paradigm. Thus, for each $n=1,\cdots ,N-1$ and given ${p}_{i,n},{q}_{i,n}$ , we consider the problems

$\begin{array}{l}\underset{\left(p,q,u\right)}{min}I\left(p,q,u\right):=\underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}{I}_{i}\left({p}_{i},{q}_{i}\right)\text{d}x+\frac{\nu }{2}\underset{j\in {\mathcal{J}}^{S}}{\sum }{|{u}_{j}|}^{2}\\ \text{\hspace{0.17em}}s.t.\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left(p,q,u\right)\text{\hspace{0.17em}}\text{satisfies}\text{\hspace{0.17em}}\left(8\right)\text{\hspace{0.17em}}\text{at}\text{\hspace{0.17em}}\text{time}\text{\hspace{0.17em}}\text{level}\text{\hspace{0.17em}}n+1.\end{array}$ (10)

It is now convenient to suppress the actual time level $n+1$ in the notation and to treat the states at the former time level as input data. To this end, we introduce ${\alpha }_{i}:=\frac{1}{\Delta t}$ , ${f}_{i}^{1}:=\frac{1}{\Delta t}{p}_{i,n}\left(x\right)$ , ${f}_{i}^{2}:=\frac{1}{\Delta t}{q}_{i,n}\left(x\right)$ , ${\gamma }_{i}\left(x\right):=\frac{\lambda {c}_{i}^{2}}{2{D}_{i}{a}_{i}^{2}}\frac{1}{{p}_{i,n}\left(x\right)}$ and rewrite (8) as

$\begin{array}{l}{\alpha }_{i}{p}_{i}\left(x\right)+\frac{{c}_{i}^{2}}{{a}_{i}}{\partial }_{x}{q}_{i}\left(x\right)={f}_{i}^{1},\text{\hspace{0.17em}}x\in \left(0,{\mathcal{l}}_{i}\right),\text{\hspace{0.17em}}i\in \mathcal{I}\\ {\alpha }_{i}{q}_{i}\left(x\right)+{\partial }_{x}{p}_{i}\left(x\right)+{\gamma }_{i}\left(x\right){q}_{i}\left(x\right)|{q}_{i}\left(x\right)|={f}_{i}^{2},\text{\hspace{0.17em}}x\in \left(0,{\mathcal{l}}_{i}\right),\text{\hspace{0.17em}}i\in \mathcal{I}\\ {g}_{j}\left({p}_{i}\left({n}_{j}\right),{q}_{i}\left({n}_{j}\right)\right)={u}_{j},\text{\hspace{0.17em}}i\in {\mathcal{I}}_{j},\text{\hspace{0.17em}}j\in {\mathcal{J}}^{S}\\ {p}_{i}\left({n}_{j}\right)={p}_{k}\left({n}_{j}\right),\text{\hspace{0.17em}}\forall i,k\in {\mathcal{I}}_{j},\text{\hspace{0.17em}}j\in {\mathcal{J}}^{M}\\ \underset{i\in {\mathcal{I}}_{j}}{\sum }{d}_{ij}{q}_{i}\left({n}_{j}\right)=0,\text{\hspace{0.17em}}j\in {\mathcal{J}}^{M}.\end{array}$ (11)

We now differentiate the first equation in (11) with respect to $x$ and insert the result into the second equation. After renaming ${f}_{i}:={f}_{i}^{2}-\frac{1}{{\alpha }_{i}}{\partial }_{x}{f}_{i}^{1}$ , $\mathcal{I}=\left\{1,\cdots ,m\right\}$ and introducing ${\beta }_{i}=\frac{{c}_{i}^{2}}{{a}_{i}{\alpha }_{i}}$ , we arrive at the following semi-linear elliptic problem on the graph $G$ with Neumann controls at simple nodes.
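Spelling out the elimination step in the notation of (11):

```latex
% Differentiating the first equation of (11) and solving for \partial_x p_i:
\partial_x p_i
  = \frac{1}{\alpha_i}\,\partial_x f_i^1
    - \frac{c_i^2}{a_i\alpha_i}\,\partial_{xx} q_i
  = \frac{1}{\alpha_i}\,\partial_x f_i^1 - \beta_i\,\partial_{xx} q_i .
% Substituting into the second equation of (11):
\alpha_i q_i + \frac{1}{\alpha_i}\,\partial_x f_i^1
  - \beta_i\,\partial_{xx} q_i + \gamma_i\, q_i\,|q_i| = f_i^2
\quad\Longleftrightarrow\quad
\alpha_i q_i - \beta_i\,\partial_{xx} q_i + \gamma_i\, q_i\,|q_i|
  = f_i^2 - \frac{1}{\alpha_i}\,\partial_x f_i^1 =: f_i .
```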

$\begin{array}{l}{\alpha }_{i}{q}_{i}\left(x\right)-{\beta }_{i}{\partial }_{xx}{q}_{i}\left(x\right)+{g}_{i}\left(x;{q}_{i}\left(x\right)\right)={f}_{i}\left(x\right),\text{\hspace{0.17em}}x\in \left(0,{\mathcal{l}}_{i}\right),i\in \mathcal{I}\\ {\beta }_{i}{\partial }_{x}{q}_{i}\left({n}_{k}\right)={\beta }_{j}{\partial }_{x}{q}_{j}\left({n}_{k}\right),\text{\hspace{0.17em}}i\ne j\in {\mathcal{I}}_{k},\text{\hspace{0.17em}}k\in {\mathcal{J}}^{M}\\ {q}_{i}\left({n}_{k}\right)=0,\text{\hspace{0.17em}}i\in {\mathcal{I}}_{k},\text{\hspace{0.17em}}{n}_{k}\in {\mathcal{J}}_{D}^{S}\\ {\partial }_{x}{q}_{i}\left({n}_{k}\right)={u}_{k},\text{\hspace{0.17em}}i\in {\mathcal{I}}_{k},\text{\hspace{0.17em}}{n}_{k}\in {\mathcal{J}}_{N}^{S}\\ \underset{i\in {\mathcal{I}}_{k}}{\sum }{d}_{ik}{q}_{i}\left({n}_{k}\right)=0,\text{\hspace{0.17em}}k\in {\mathcal{J}}^{M},\end{array}$ (12)

where we set ${g}_{i}\left(x;s\right):={\gamma }_{i}\left(x\right)s|s|$ . We then consider in the rest of the paper the following optimal control problem:

$\begin{array}{l}\underset{\left(p,q,u\right)}{\mathrm{min}}I\left(p,q,u\right):=\underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}{I}_{i}\left({p}_{i},{q}_{i}\right)\text{d}x+\frac{\nu }{2}\underset{j\in {\mathcal{J}}^{S}}{\sum }{|{u}_{j}|}^{2}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}s.t.\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left(p,q,u\right)\text{\hspace{0.17em}}\text{satisfies}\text{\hspace{0.17em}}\left(12\right).\end{array}$ (13)

Example 1. Before we embark on the non-overlapping domain decomposition method in the context of the instantaneous control paradigm (13), we look into the situation of a star graph with a central multiple node and $m$ edges. We parametrize the edges such that the central node corresponds to $x=0$ for all of them, while the simple nodes correspond to $x={\ell }_{i}$ . This is the situation that we consider in our examples. We assume that the first edge satisfies a homogeneous Dirichlet condition at $x={\ell }_{1}$ , while the remaining edges carry controlled Neumann conditions at $x={\ell }_{i}$ , $i=2,\cdots ,m$ . We obtain, accordingly

$\begin{array}{l}{\alpha }_{i}{q}_{i}\left(x\right)-{\beta }_{i}{\partial }_{xx}{q}_{i}\left(x\right)+{\gamma }_{i}\left(x\right){q}_{i}\left(x\right)|{q}_{i}\left(x\right)|={f}_{i}\left(x\right),\quad x\in \left(0,{\ell }_{i}\right),\ i=1,\cdots ,m\\ {\beta }_{i}{\partial }_{x}{q}_{i}\left(0\right)={\beta }_{j}{\partial }_{x}{q}_{j}\left(0\right),\quad i\ne j,\ i,j=1,\cdots ,m\\ \underset{i=1}{\overset{m}{\sum }}{q}_{i}\left(0\right)=0,\quad {q}_{1}\left({\ell }_{1}\right)=0,\quad {\partial }_{x}{q}_{i}\left({\ell }_{i}\right)={u}_{i},\quad i=2,\cdots ,m.\end{array}$ (14)

3. Domain Decomposition

We provide an iterative non-overlapping domain decomposition that can be interpreted as an Uzawa method (Alg3 in the sense of Glowinski); see the corresponding monograph for details. The idea for this algorithm originates from a decoupling of the transmission conditions. To this end, we define the flux vector ${q}^{k}:={\left({d}_{ik}{q}_{i}\left({n}_{k}\right),\ i\in {\mathcal{I}}_{k}\right)}^{\text{T}}$ and the pressure vector $\partial {q}^{k}:={\left({\beta }_{i}{\partial }_{x}{q}_{i}\left({n}_{k}\right),\ i\in {\mathcal{I}}_{k}\right)}^{\text{T}}$ at a given node ${n}_{k},k\in {\mathcal{J}}^{M}$ . Given a vector $z:=\left({z}_{i},i\in {\mathcal{I}}_{k}\right)$ , we define

${\left({\mathcal{S}}^{k}\left(z\right)\right)}_{i}:=\frac{2}{{d}_{k}}\underset{j\in {\mathcal{I}}_{k}}{\sum }{z}_{j}-{z}_{i}.$ (15)

Then ${\left({\mathcal{S}}^{k}\right)}^{2}=I$ and ${\mathcal{S}}^{k}\left(\mathbf{1}\right)=\mathbf{1}$ for $\mathbf{1}:=\left(1,\cdots ,1\right)\in {ℝ}^{{d}_{k}}$ . With this notation, the general concept is easily established. We set for any $\sigma >0$ :
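Both properties are immediate from the matrix form of ${\mathcal{S}}^{k}$ and easy to check numerically; the degree 3 below is an arbitrary illustrative choice.

```python
import numpy as np

# S^k from (15) as a matrix: (S^k z)_i = (2/d_k) * sum_j z_j - z_i,
# i.e. (2/d_k) * (all-ones matrix) - identity.
def reflection(d_k):
    return (2.0 / d_k) * np.ones((d_k, d_k)) - np.eye(d_k)

S = reflection(3)
```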

${q}^{k}+\sigma \partial {q}^{k}=\sigma {\mathcal{S}}^{k}\left(\partial {q}^{k}\right)-{\mathcal{S}}^{k}\left({q}^{k}\right).$ (16)

Applying ${\mathcal{S}}^{k}$ to both sides of (16) and adding the result to (16), we obtain

$\underset{i\in {\mathcal{I}}_{k}}{\sum }{d}_{ik}{q}_{i}\left({n}_{k}\right)=0.$ (17)

But then (16) reduces to

$\partial {q}_{i}\left({n}_{k}\right)=\frac{1}{{d}_{k}}\underset{j\in {\mathcal{I}}_{k}}{\sum }\partial {q}_{j}\left({n}_{k}\right),\quad i\in {\mathcal{I}}_{k},$

which, in turn, implies

${\beta }_{i}{\partial }_{x}{q}_{i}\left({n}_{k}\right)={\beta }_{j}{\partial }_{x}{q}_{j}\left({n}_{k}\right),\text{\hspace{0.17em}}i\ne j\in {\mathcal{I}}_{k},\text{\hspace{0.17em}}k\in {\mathcal{J}}^{M}.$ (18)

Clearly, if the transmission conditions (17), (18) hold at the multiple node ${n}_{k}$ , then (16) is also fulfilled. Thus, (16) is equivalent to the transmission conditions (17), (18). These new conditions (16) are now relaxed in an iterative scheme as follows, where $l$ denotes the iteration number.

${\left({q}^{k}\right)}^{l+1}+\sigma \partial {\left({q}^{k}\right)}^{l+1}=\sigma {\mathcal{S}}^{k}\left({\left(\partial {q}^{k}\right)}^{l}\right)-{\mathcal{S}}^{k}\left({\left({q}^{k}\right)}^{l}\right)=:{\left({g}^{k}\right)}^{l+1}.$ (19)

We have the following relations:

${\left({g}^{k}\right)}^{l+1}={\mathcal{S}}^{k}\left(2\sigma \partial {\left({q}^{k}\right)}^{l}-{\left({g}^{k}\right)}^{l}\right).$ (20)

This gives rise to the definition of a fixed point mapping. To this end, we need to look into the behavior of the interface data ${g}^{k},\ k\in {\mathcal{J}}^{M}$ , that is

$g\in \mathcal{X}:={\Pi }_{k\in {\mathcal{J}}^{M}}{ℝ}^{{d}_{k}},\quad {‖g‖}_{\mathcal{X}}^{2}:=\underset{k\in {\mathcal{J}}^{M}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }\frac{1}{\sigma }{|{g}_{i,k}|}^{2},$ (21)

$\mathcal{T}:\mathcal{X}\to \mathcal{X},$ (22)

${\left(\mathcal{T}g\right)}_{i,k}:={\mathcal{S}}^{k}{\left(2\sigma \partial \left({q}^{k}\right)-{g}^{k}\right)}_{i},\quad k\in {\mathcal{J}}^{M},i\in {\mathcal{I}}_{k},$

${\left(\mathcal{T}g\right)}_{k}=\left\{{\left(\mathcal{T}g\right)}_{i,k},\ i\in {\mathcal{I}}_{k}\right\},$

$\mathcal{T}g=\left\{{\left(\mathcal{T}g\right)}_{k},\ k\in {\mathcal{J}}^{M}\right\}.$

Now,

${‖\mathcal{T}g‖}_{\mathcal{X}}^{2}=\underset{k\in {\mathcal{J}}^{M}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }\frac{1}{\sigma }{|{\mathcal{S}}^{k}{\left(2\sigma \partial \left({q}^{k}\right)-{g}^{k}\right)}_{i}|}^{2}.$ (23)

We use the facts ${\sum }_{i\in {\mathcal{I}}_{k}}{\left({\mathcal{S}}^{k}{g}^{k}\right)}_{i}^{2}={\sum }_{i\in {\mathcal{I}}_{k}}{\left({g}^{k}\right)}_{i}^{2}$ and ${\sum }_{i\in {\mathcal{I}}_{k}}{\left({\mathcal{S}}^{k}{q}^{k}\right)}_{i}{\left({\mathcal{S}}^{k}{g}^{k}\right)}_{i}={\sum }_{i\in {\mathcal{I}}_{k}}{q}_{i}^{k}{g}_{i}^{k}$ and show

${‖\mathcal{T}g‖}_{\mathcal{X}}^{2}={‖g‖}_{\mathcal{X}}^{2}-4\underset{k\in {\mathcal{J}}^{M}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }\left({g}_{i}^{k}-\sigma \partial {q}_{i}^{k}\right)\partial {q}_{i}^{k}.$ (24)

We now formulate a relaxed version of a fixed point iteration: for $ϵ\in \left[0,1\right)$

${g}^{l+1}=\left(1-ϵ\right)\mathcal{T}\left({g}^{l}\right)+ϵ{g}^{l}.$ (25)
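At a single multiple node, one relaxed update combining (20) and (25) can be sketched as follows; the node degree and the data vectors in the usage are illustrative assumptions.

```python
import numpy as np

def relaxed_update(g, dq, sigma, eps, d_k):
    """One step of (25) at a node of degree d_k:
    g_new = (1 - eps) * T(g) + eps * g, with T(g) = S^k(2*sigma*dq - g)
    as in (20); dq collects the scaled traces beta_i * dq_i/dx at the node."""
    S = (2.0 / d_k) * np.ones((d_k, d_k)) - np.eye(d_k)   # operator (15)
    return (1.0 - eps) * (S @ (2.0 * sigma * dq - g)) + eps * g
```

For `eps = 1` the data are left unchanged, and for `eps = 0` with vanishing traces the map reduces to $g\mapsto -{\mathcal{S}}^{k}g$, which is an involution.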

Up to now, the relations concerning the iteration at the interfaces do not involve the state equation explicitly. For the analysis of the convergence of the iterates, we need to specify the equations.

The Non-Overlapping Domain Decomposition

We are interested in the errors between the solutions to the problem (12) and

$\begin{array}{l}{\alpha }_{i}{q}_{i}^{l+1}\left(x\right)-{\beta }_{i}{\partial }_{xx}{q}_{i}^{l+1}\left(x\right)+{g}_{i}\left(x;{q}_{i}^{l+1}\left(x\right)\right)={f}_{i}\left(x\right),\quad x\in \left(0,{\ell }_{i}\right),\ i\in \mathcal{I}\\ {d}_{ik}{q}_{i}^{l+1}\left({n}_{k}\right)+\sigma {\beta }_{i}{\partial }_{x}{q}_{i}^{l+1}\left({n}_{k}\right)={g}_{ik}^{l+1},\quad i\in {\mathcal{I}}_{k},\ k\in {\mathcal{J}}^{M}\\ {q}_{i}^{l+1}\left({n}_{k}\right)=0,\quad i\in {\mathcal{I}}_{k},\ {n}_{k}\in {\mathcal{J}}_{D}^{S}\\ {\partial }_{x}{q}_{i}^{l+1}\left({n}_{k}\right)={u}_{k},\quad i\in {\mathcal{I}}_{k},\ {n}_{k}\in {\mathcal{J}}_{N}^{S}.\end{array}$ (26)

Thus, we introduce ${e}^{l+1}:={q}^{l+1}-q$ . Then ${e}^{l+1}$ solves a non-linear differential equation with nonlinearity ${g}_{i}\left({e}_{i}^{l+1}+{q}_{i}\right)-{g}_{i}\left({q}_{i}\right)$ , zero right hand side and homogeneous boundary conditions at the simple nodes. As we noted above, the full transmission conditions are equivalent to (16). Hence, the error satisfies the same iterative Robin-type boundary conditions as ${q}^{l+1}$ . Multiplying by a test function $\varphi$ and integrating by parts, we obtain

$0=\underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}\left({\alpha }_{i}{e}_{i}^{l+1}-{\beta }_{i}{\partial }_{xx}{e}_{i}^{l+1}+{g}_{i}\left({e}_{i}^{l+1}+{q}_{i}\right)-{g}_{i}\left({q}_{i}\right)\right){\varphi }_{i}\text{d}x$ (27)

$=-\underset{k\in {\mathcal{J}}^{M}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }{d}_{ik}{\beta }_{i}{\partial }_{x}{e}_{i}^{l+1}\left({n}_{k}\right){\varphi }_{i}\left({n}_{k}\right)$ (28)

$+\underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\ell }_{i}}{\int }}\left({\alpha }_{i}{e}_{i}^{l+1}{\varphi }_{i}+{\beta }_{i}{\partial }_{x}{e}_{i}^{l+1}{\partial }_{x}{\varphi }_{i}+\left({g}_{i}\left({e}_{i}^{l+1}+{q}_{i}\right)-{g}_{i}\left({q}_{i}\right)\right){\varphi }_{i}\right)\text{d}x.$ (29)

We now take the test function to be equal to ${e}_{i}^{l+1}$ and obtain:

$\begin{array}{l}\underset{k\in {\mathcal{J}}^{M}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }{d}_{ik}{\beta }_{i}{\partial }_{x}{e}_{i}^{l+1}\left({n}_{k}\right){e}_{i}^{l+1}\left({n}_{k}\right)\\ =\underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\ell }_{i}}{\int }}\left({\alpha }_{i}{\left({e}_{i}^{l+1}\right)}^{2}+{\beta }_{i}{\left({\partial }_{x}{e}_{i}^{l+1}\right)}^{2}+\left({g}_{i}\left({e}_{i}^{l+1}+{q}_{i}\right)-{g}_{i}\left({q}_{i}\right)\right){e}_{i}^{l+1}\right)\text{d}x.\end{array}$ (30)

We use the boundary condition at the interfaces in the form

${d}_{ik}{e}_{i}^{l+1}\left({n}_{k}\right)={g}_{ik}^{l+1}-\sigma {\beta }_{i}{\partial }_{x}{e}_{i}^{l+1}\left({n}_{k}\right),\quad i\in {\mathcal{I}}_{k},\ k\in {\mathcal{J}}^{M}.$

This identity is now inserted into (24), evaluated for the error:

${‖\mathcal{T}g‖}_{\mathcal{X}}^{2}={‖g‖}_{\mathcal{X}}^{2}-4\underset{k\in {\mathcal{J}}^{M}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }\left({g}_{i}^{k}-\sigma \partial {e}_{i}^{k}\right)\partial {e}_{i}^{k}.$ (31)

We obtain

${‖\mathcal{T}{g}^{l}‖}_{\mathcal{X}}^{2}={‖{g}^{l}‖}_{\mathcal{X}}^{2}-4\underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\ell }_{i}}{\int }}\left({\alpha }_{i}{\left({e}_{i}^{l+1}\right)}^{2}+{\beta }_{i}{\left({\partial }_{x}{e}_{i}^{l+1}\right)}^{2}+\left({g}_{i}\left({e}_{i}^{l+1}+{q}_{i}\right)-{g}_{i}\left({q}_{i}\right)\right){e}_{i}^{l+1}\right)\text{d}x.$ (32)

We assume

$\left({g}_{i}\left(x;s\right)-{g}_{i}\left(x;t\right)\right)\left(s-t\right)\ge 0,\text{\hspace{0.17em}}\forall x\in \left(0,{\mathcal{l}}_{i}\right),\text{\hspace{0.17em}}i\in \mathcal{I}$ (33)

and define the bilinear form

${a}_{i}\left({\psi }_{i},\varphi \right):=\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}\left({\alpha }_{i}{\psi }_{i}{\varphi }_{i}+{\beta }_{i}{\partial }_{x}{\psi }_{i}{\partial }_{x}{\varphi }_{i}\right)\text{d}x.$ (34)

We define the corresponding quadratic form applied to ${e}^{l}$

${a}_{i}\left({e}_{i}^{l},{e}_{i}^{l}\right):=\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}\left({\alpha }_{i}{e}_{i}^{l}{e}_{i}^{l}+{\beta }_{i}{\partial }_{x}{e}_{i}^{l}{\partial }_{x}{e}_{i}^{l}\right)\text{d}x,$ (35)

which is certainly bounded below by ${\alpha }_{i}{‖{e}_{i}^{l}‖}_{{L}^{2}\left(0,{\ell }_{i}\right)}^{2}$ . Dropping the nonlinear term, which is non-negative by the monotonicity assumption (33), the error iteration satisfies

${‖{g}^{l+1}‖}_{\mathcal{X}}^{2}={‖\mathcal{T}{g}^{l}‖}_{\mathcal{X}}^{2}\le {‖{g}^{l}‖}_{\mathcal{X}}^{2}-4\underset{i\in \mathcal{I}}{\sum }{a}_{i}\left({e}_{i}^{l},{e}_{i}^{l}\right)$ (36)

and, thus, the error does not increase. That it actually decreases to zero is shown next. But first we look at the relaxed version of the iteration (25). Taking norms and expanding, we obtain, for $ϵ\in \left[0,1\right)$ ,

${‖{g}^{l+1}‖}_{\mathcal{X}}^{2}\le {‖{g}^{l}‖}_{\mathcal{X}}^{2}-4\left(1-ϵ\right)\underset{i\in \mathcal{I}}{\sum }\text{ }\text{ }{a}_{i}\left({e}_{i}^{l},{e}_{i}^{l}\right).$ (37)

Iterating (36) or (37) down from $l$ to zero, we obtain

$\left\{{g}^{l}\right\}\text{\hspace{0.17em}}\text{is}\text{\hspace{0.17em}}\text{bounded},\text{\hspace{0.17em}}{a}_{i}\left({e}_{i}^{l},{e}_{i}^{l}\right)\to 0,\text{\hspace{0.17em}}l\to \infty .$ (38)

Clearly, for ${\alpha }_{i}>0$ , this shows that the ${H}^{1}\left(0,{\ell }_{i}\right)$ -error tends to zero strongly. Hence, also the traces tend to zero as $l\to \infty$ and, therefore, the iteration converges.

Theorem 2. Under assumption (33), for each $ϵ\in \left[0,1\right)$ the iteration (25) with (26) and (21), (22) converges as $l\to \infty$ . The convergence of the solutions is in the sense of (38).

Example 3. We show a numerical example where three edges span a tripod. The first edge (see Figure 1) satisfies a homogeneous Dirichlet condition at its exterior node, while the other two edges satisfy homogeneous Neumann conditions at their exterior nodes. In particular, we take $dx=1/1000$ , ${\alpha }_{i}=10$ , ${f}_{1}=1$ , ${f}_{2}=0.1$ , ${f}_{3}=0.5$ . The nonlinearity is weighted by a factor $\gamma =1$ , and 10 fixed point iterations are performed in order to handle the nonlinearity. The system without domain decomposition is solved using the MATLAB routine bvp4c with error tolerance $tol={10}^{-10}$ . The system with domain decomposition is solved with classical finite differences of second order. Figure 1 shows the tripod, where we display the original solutions and the ones obtained by the domain decomposition. No difference is visible. Notice the discontinuity of the state at the central node. This is in contrast to the classical nodal conditions known in the literature, where the states are continuous across the multiple node, while the Neumann traces satisfy the Kirchhoff condition. We display the individual solutions―again without and with domain decomposition―in Figure 2. There is no visible difference. Figure 3 shows the nodal errors at the central node. We see the nodal errors regarding the conservation of flows and the two

Figure 1. The tripod with discontinuity at the central node.

Figure 2. The three edges individually.

Figure 3. Error history at the central node.

Figure 4. Iteration history of the ${L}^{\infty }$ -errors.

continuity conditions of the derivatives at the central node. In Figure 4, we display the relative ${L}^{\infty }$ -errors of the solutions, where the errors are taken with respect to the computed solution without domain decomposition.
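For a linear variant of this example (friction dropped, i.e. ${g}_{i}\equiv 0$ ), the complete decomposition loop can be sketched in a few lines. Apart from the source terms ${f}_{i}$ of Example 3, all parameters below ( ${\alpha }_{i}={\beta }_{i}=\sigma =1$ , the grid, the first-order Robin discretization) are illustrative assumptions, and the edges are parametrized with the central node at $x=0$ as in Example 1.

```python
import numpy as np

def solve_edge(alpha, beta, h, M, f, g, sigma, right_bc):
    """Finite-difference solve of  alpha*q - beta*q'' = f  on one edge,
    with the Robin interface condition  -q(0) + sigma*beta*q'(0) = g
    at the central node (x = 0, orientation d_ik = -1) and a homogeneous
    Dirichlet or Neumann condition at the exterior node x = 1."""
    A = np.zeros((M + 1, M + 1))
    b = np.zeros(M + 1)
    A[0, 0] = -1.0 - sigma * beta / h          # Robin row at x = 0
    A[0, 1] = sigma * beta / h
    b[0] = g
    for j in range(1, M):                      # interior rows
        A[j, j - 1] = A[j, j + 1] = -beta / h**2
        A[j, j] = alpha + 2.0 * beta / h**2
        b[j] = f
    if right_bc == "dirichlet":
        A[M, M] = 1.0                          # q(1) = 0
    else:
        A[M, M] = 1.0 / h                      # q'(1) = 0
        A[M, M - 1] = -1.0 / h
    return np.linalg.solve(A, b)

alpha, beta, sigma, h, M = 1.0, 1.0, 1.0, 0.01, 100
f = [1.0, 0.1, 0.5]                            # source terms of Example 3
bcs = ["dirichlet", "neumann", "neumann"]
S = (2.0 / 3.0) * np.ones((3, 3)) - np.eye(3)  # reflection operator (15)
g = np.zeros(3)
for _ in range(80):                            # iteration (25) with eps = 0
    q = [solve_edge(alpha, beta, h, M, f[i], g[i], sigma, bcs[i])
         for i in range(3)]
    dq = np.array([beta * (qi[1] - qi[0]) / h for qi in q])  # beta * q'(0)
    g = S @ (2.0 * sigma * dq - g)             # interface update (20)

kirchhoff = abs(sum(qi[0] for qi in q))        # |q_1(0) + q_2(0) + q_3(0)|
```

At the fixed point, the Robin data enforce the Kirchhoff condition (17) and the equality of the scaled derivatives (18) up to discretization and round-off error.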

4. Domain Decomposition for Optimal Control Problems

We pose the following optimal control problem with Neumann boundary controls:

$\begin{array}{l}\underset{\left(q,u\right)}{min}I\left(q,u\right):=\frac{\kappa }{2}\underset{i\in \mathcal{I}}{\sum }{‖{q}_{i}-{q}_{i}^{0}‖}^{2}+\frac{\nu }{2}\underset{k\in {\mathcal{J}}_{N}^{S}}{\sum }{|{u}_{k}|}^{2}\\ \text{subject}\text{\hspace{0.17em}}\text{to}\\ {\alpha }_{i}{q}_{i}\left(x\right)-{\beta }_{i}{\partial }_{xx}{q}_{i}\left(x\right)+{g}_{i}\left(x;{q}_{i}\left(x\right)\right)={f}_{i}\left(x\right),\text{\hspace{0.17em}}x\in \left(0,{\mathcal{l}}_{i}\right),\text{\hspace{0.17em}}i\in \mathcal{I}\\ {q}_{i}\left({n}_{k}\right)=0,\text{\hspace{0.17em}}i\in {\mathcal{I}}_{k},\text{\hspace{0.17em}}{n}_{k}\in {\mathcal{J}}_{D}^{S}\\ {\partial }_{x}{q}_{i}\left({n}_{k}\right)={u}_{k},\text{\hspace{0.17em}}i\in {\mathcal{I}}_{k},\text{\hspace{0.17em}}{n}_{k}\in {\mathcal{J}}_{N}^{S}\\ {\beta }_{i}{\partial }_{x}{q}_{i}\left({n}_{k}\right)={\beta }_{j}{\partial }_{x}{q}_{j}\left({n}_{k}\right),\text{\hspace{0.17em}}i\ne j\in {\mathcal{I}}_{k},\text{\hspace{0.17em}}k\in {\mathcal{J}}^{M},\\ \underset{i\in {\mathcal{I}}_{k}}{\sum }{d}_{ik}{q}_{i}\left({n}_{k}\right)=0,\text{\hspace{0.17em}}k\in {\mathcal{J}}^{M}.\end{array}$ (39)

The corresponding optimality system then reads as follows:

$\begin{array}{l}{\alpha }_{i}{q}_{i}-{\beta }_{i}{\partial }_{xx}{q}_{i}+{g}_{i}\left({q}_{i}\right)={f}_{i}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{in}\text{\hspace{0.17em}}\left(0,{\mathcal{l}}_{i}\right),\text{\hspace{0.17em}}i\in \mathcal{I}\\ {\alpha }_{i}{\rho }_{i}-{\beta }_{i}{\partial }_{xx}{\rho }_{i}+{{g}^{\prime }}_{i}\left({q}_{i}\right){\rho }_{i}=-\kappa \left({q}_{i}-{q}_{i}^{0}\right)\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{in}\text{\hspace{0.17em}}\left(0,{\mathcal{l}}_{i}\right),\text{\hspace{0.17em}}i\in \mathcal{I}\\ {q}_{i}\left({n}_{k}\right)=0,\text{\hspace{0.17em}}{\rho }_{i}\left({n}_{k}\right)=0,\text{\hspace{0.17em}}i\in {\mathcal{I}}_{k},\text{\hspace{0.17em}}{n}_{k}\in {\mathcal{J}}_{D}^{S}\\ {\partial }_{x}{q}_{i}\left({n}_{k}\right)=\frac{1}{\nu }{\rho }_{i}\left({n}_{k}\right),\text{\hspace{0.17em}}{\partial }_{x}{\rho }_{i}\left({n}_{k}\right)=0,\text{\hspace{0.17em}}i\in {\mathcal{I}}_{k},\text{\hspace{0.17em}}{n}_{k}\in {\mathcal{J}}_{N}^{S}\\ {\beta }_{i}{\partial }_{x}{q}_{i}\left({n}_{k}\right)={\beta }_{j}{\partial }_{x}{q}_{j}\left({n}_{k}\right),\text{\hspace{0.17em}}{\beta }_{i}{\partial }_{x}{\rho }_{i}\left({n}_{k}\right)={\beta }_{j}{\partial }_{x}{\rho }_{j}\left({n}_{k}\right),\text{\hspace{0.17em}}i\ne j\in {\mathcal{I}}_{k},\text{\hspace{0.17em}}k\in {\mathcal{J}}^{M}\\ \underset{i\in {\mathcal{I}}_{k}}{\sum }{d}_{ik}{q}_{i}\left({n}_{k}\right)=0=\underset{i\in {\mathcal{I}}_{k}}{\sum }{d}_{ik}{\rho }_{i}\left({n}_{k}\right),\text{\hspace{0.17em}}k\in {\mathcal{J}}^{M}.\end{array}$ (40)
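The Neumann control relation in the fourth line of (40) can be read off from a formal Lagrangian; the sketch below suppresses the transmission terms and the $\beta$ -scalings and only records the step that eliminates the control:

```latex
\mathcal{L}(q,u,\rho)
  = \frac{\kappa}{2}\sum_{i\in\mathcal{I}}\|q_i - q_i^0\|^2
    + \frac{\nu}{2}\sum_{k\in\mathcal{J}_N^S}|u_k|^2
    + \sum_{i\in\mathcal{I}}\int_0^{\ell_i}\rho_i\,
      \bigl(\alpha_i q_i - \beta_i\partial_{xx} q_i + g_i(x;q_i) - f_i\bigr)\,dx .
% Stationarity in q yields the adjoint equation of (40)
% (with the linearized nonlinearity g_i'(q_i)\rho_i);
% stationarity in u_k eliminates the control via the adjoint trace:
\nu u_k = \rho_i(n_k)
\quad\Longrightarrow\quad
\partial_x q_i(n_k) = u_k = \tfrac{1}{\nu}\,\rho_i(n_k),
\qquad i\in\mathcal{I}_k,\ n_k\in\mathcal{J}_N^S .
```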

The idea now is to use a domain decomposition analogous to the one applied to the original system on the network. We design a method that allows us to interpret the decomposed optimality system (41) as an edge-wise optimality system of an optimal control problem formulated on an individual edge. To this end, we introduce the following local system:

$\begin{array}{l}{\alpha }_{i}{q}_{i}^{l+1}-{\beta }_{i}{\partial }_{xx}{q}_{i}^{l+1}+{g}_{i}\left({q}_{i}^{l+1}\right)={f}_{i}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{in}\text{\hspace{0.17em}}\left(0,{\mathcal{l}}_{i}\right),\text{\hspace{0.17em}}i\in \mathcal{I}\\ {\alpha }_{i}{\rho }_{i}^{l+1}-{\beta }_{i}{\partial }_{xx}{\rho }_{i}^{l+1}+{{g}^{\prime }}_{i}\left({q}_{i}^{l+1}\right){\rho }_{i}^{l+1}=-\kappa \left({q}_{i}^{l+1}-{q}_{i}^{0}\right)\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{in}\text{\hspace{0.17em}}\left(0,{\mathcal{l}}_{i}\right),\text{\hspace{0.17em}}i\in \mathcal{I}\\ {q}_{i}^{l+1}\left({n}_{k}\right)=0,\text{\hspace{0.17em}}{\rho }_{i}^{l+1}\left({n}_{k}\right)=0,\text{\hspace{0.17em}}i\in {\mathcal{I}}_{k},\text{\hspace{0.17em}}{n}_{k}\in {\mathcal{J}}_{D}^{S}\\ {\partial }_{x}{q}_{i}^{l+1}\left({n}_{k}\right)=\frac{1}{\nu }{\rho }_{i}^{l+1}\left({n}_{k}\right),\text{\hspace{0.17em}}{\partial }_{x}{\rho }_{i}^{l+1}\left({n}_{k}\right)=0,\text{\hspace{0.17em}}i\in {\mathcal{I}}_{k},\text{\hspace{0.17em}}{n}_{k}\in {\mathcal{J}}_{N}^{S}\\ {d}_{ik}{q}_{i}^{l+1}\left({n}_{k}\right)+{\lambda }_{k}{\partial }_{x}{q}_{i}^{l+1}\left({n}_{k}\right)+{\mu }_{k}{\partial }_{x}{\rho }_{i}^{l+1}\left({n}_{k}\right)\\ ={\lambda }_{k}\left(\frac{2}{{d}_{k}}\underset{j\in {\mathcal{I}}_{k}}{\sum }{\beta }_{j}{\partial }_{x}{q}_{j}^{l}\left({n}_{k}\right)-{\beta }_{i}{\partial }_{x}{q}_{i}^{l}\left({n}_{k}\right)\right)+{\mu }_{k}\left(\frac{2}{{d}_{k}}\underset{j\in {\mathcal{I}}_{k}}{\sum }{\beta }_{j}{\partial }_{x}{\rho }_{j}^{l}\left({n}_{k}\right)-{\beta }_{i}{\partial }_{x}{\rho }_{i}^{l}\left({n}_{k}\right)\right)\\ \text{ }-\left(\frac{2}{{d}_{k}}\underset{j\in {\mathcal{I}}_{k}}{\sum }{d}_{jk}{q}_{j}^{l}\left({n}_{k}\right)-{d}_{ik}{q}_{i}^{l}\left({n}_{k}\right)\right)={g}_{ik}^{l+1},\\ {d}_{ik}{\rho }_{i}^{l+1}\left({n}_{k}\right)+{\lambda }_{k}{\partial }_{x}{\rho }_{i}^{l+1}\left({n}_{k}\right)-{\mu }_{k}{\partial }_{x}{q}_{i}^{l+1}\left({n}_{k}\right)\\ ={\lambda }_{k}\left(\frac{2}{{d}_{k}}\underset{j\in {\mathcal{I}}_{k}}{\sum }{\beta }_{j}{\partial }_{x}{\rho }_{j}^{l}\left({n}_{k}\right)-{\beta }_{i}{\partial }_{x}{\rho }_{i}^{l}\left({n}_{k}\right)\right)-{\mu }_{k}\left(\frac{2}{{d}_{k}}\underset{j\in {\mathcal{I}}_{k}}{\sum }{\beta }_{j}{\partial }_{x}{q}_{j}^{l}\left({n}_{k}\right)-{\beta }_{i}{\partial }_{x}{q}_{i}^{l}\left({n}_{k}\right)\right)\\ \text{ }-\left(\frac{2}{{d}_{k}}\underset{j\in {\mathcal{I}}_{k}}{\sum }{d}_{jk}{\rho }_{j}^{l}\left({n}_{k}\right)-{d}_{ik}{\rho }_{i}^{l}\left({n}_{k}\right)\right)={h}_{ik}^{l+1},\text{\hspace{0.17em}}k\in {\mathcal{J}}^{M}.\end{array}$ (41)

The same arguments that led from (16), (15) to (17), (18) apply to show that, upon convergence as $l\to \infty$ , the system (41) tends to (40). Now, (41) decomposes the fully coupled problem (40) into problems on a single edge $i\in \mathcal{I}$ with inhomogeneous Robin-type boundary conditions. The question is whether the decomposed optimality system (41) is in fact itself an optimality system on that edge. If so, then it is possible to parallelize the optimization problems rather than just the forward and backward solves. Let us, therefore, now consider the following optimization problems on a single edge. The idea is to introduce a virtual control acting through a classical inhomogeneous Neumann condition, while the iteration history at the interface enters as an inhomogeneity in the Robin-type condition that appears in the decomposition. To this end, it is sufficient to consider three cases: a.) the edge $i$ connects a simple Dirichlet node $j\in {\mathcal{J}}_{D}^{S}$ with a multiple node $k\in {\mathcal{J}}^{M}$ at which the domain decomposition is active, b.) the edge $i$ connects a controlled Neumann node $j\in {\mathcal{J}}_{N}^{S}$ with a multiple node $k\in {\mathcal{J}}^{M}$ at which the domain decomposition is active, c.) the edge $i$ connects two multiple nodes $j,k\in {\mathcal{J}}^{M}$ .

Case a.):

#Math_162# (42)

Case b.):

#Math_163# (43)

Case c.):

#Math_164# (44)
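In each of these cases, one sweep of the method requires only the solution of a scalar two-point boundary-value problem on a single edge with Robin-type interface data. As a minimal illustration of such a local solve, the following finite-difference sketch treats $\alpha q-\beta {\partial }_{xx}q=f$ with a Dirichlet end and a Robin end; the scalar setup, function names and default parameters are illustrative assumptions, not the paper's discretization.

```python
import numpy as np

def solve_edge_robin(alpha, beta, f, ell, g, d=1.0, lam=1.0, n=200):
    """Solve alpha*q - beta*q'' = f(x) on (0, ell) with q(0) = 0 and the
    Robin condition d*q(ell) + lam*q'(ell) = g, mimicking the local
    single-edge solve of one ddm-sweep (illustrative sketch only)."""
    h = ell / n
    x = np.linspace(0.0, ell, n + 1)
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    A[0, 0] = 1.0                        # Dirichlet node: q(0) = 0
    for i in range(1, n):                # interior central differences
        A[i, i - 1] = -beta / h**2
        A[i, i] = alpha + 2.0 * beta / h**2
        A[i, i + 1] = -beta / h**2
        b[i] = f(x[i])
    A[n, n - 1] = -lam / h               # Robin node, one-sided derivative
    A[n, n] = d + lam / h
    b[n] = g
    return x, np.linalg.solve(A, b)
```

In the actual iteration, the Robin datum $g$ would be assembled from the neighbors' traces of the previous sweep; a manufactured solution such as $q\left(x\right)=\mathrm{sin}x$ can be used to check the discretization.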

Remark 4.1.

・ If we write down the optimality systems for (42), (43) and (44), respectively, and combine the results, we arrive at (41).

・ This shows that within the loop of iterations that restore the transmission conditions at the multiple nodes, we can reformulate the system (41) as the optimality system of an optimal control problem formulated on a single edge, with input data coming from the iteration history that involves all nodes adjacent at the ends of the given edge.

・ This means that we can actually decompose the optimization problem given on the graph into a sequence of local optimization problems given on an individual edge.

・ The resulting optimization problems on the individual edges are strictly convex and thus admit a unique global solution.

Remark 4.2. There are at least two ways to use the proposed ddm-approach.

1) In the first approach, we consider (40) and start with a guess for the adjoint variables ${\left({\rho }_{i}\right)}_{i\in \mathcal{I}}$ . This provides a guess for the controls ${\left({u}_{j}\right)}_{j\in {\mathcal{J}}^{S}}$ , so that we can compute the states ${\left({q}_{i}\right)}_{i\in \mathcal{I}}$ . The states ${\left({q}_{i}\right)}_{i\in \mathcal{I}}$ , in turn, are inserted into the adjoint problem, and that system is then solved for ${\left({\rho }_{i}\right)}_{i\in \mathcal{I}}$ , which closes the cycle. With this method, we keep the optimization in an outer loop and solve the forward system for the states ${\left({q}_{i}\right)}_{i\in \mathcal{I}}$ and the adjoint system for ${\left({\rho }_{i}\right)}_{i\in \mathcal{I}}$ individually. For given ${\left({q}_{i}\right)}_{i\in \mathcal{I}}$ , the adjoint system is a linear elliptic problem on the graph, to which the ddm-method above applies and converges. As we have established above, the forward problem also admits a convergent ddm-algorithm. This finally means that in the inner loop we can use convergent ddm-iterations for finding ${\left({q}_{i}\right)}_{i\in \mathcal{I}}$ and ${\left({\rho }_{i}\right)}_{i\in \mathcal{I}}$ . The effect of parallelization can, therefore, be exploited for the solves in the inner loop, while the outer loop is sequential.

2) In the second approach, we decompose the coupled system (40) into (41). The resulting decoupled problem is then the optimality system of the virtual optimal control problems (42), (43) or (44), as seen above. In this case, there is no outer loop other than the ddm-iteration, which is completely parallel. Still, the local optimality systems have to be solved in the way described in the first approach: we provide an initial guess for ${\rho }_{i}$ for each $i\in \mathcal{I}$ , solve for ${q}_{i}$ , and insert the result into the local adjoint equation. This is followed by the solve for ${\rho }_{i}$ and the update of the boundary data ${g}_{ik},{h}_{ik}$ , which are communicated at the next ddm-iteration. In this admittedly more elegant approach, the constrained minimization problem on the entire graph can be decomposed into minimization problems on a single edge. As we will see below, unfortunately, but expectedly, the convergence is no longer global as in the first approach, but only local. This means that we can prove convergence of the unique solutions of (41) to those of (40) only if we start close to a solution of (40), or if we have a priori estimates and tune the parameters accordingly.

5. Wellposedness

5.1. Wellposedness of the Primal Problem

The semi-linear network problem (12) admits a unique solution. This is true, as the linear part of problem (12) gives rise to a self-adjoint positive definite operator in the Hilbert space $\mathcal{H}$ :

$\mathcal{H}={\prod }_{i\in \mathcal{I}}{L}^{2}\left(0,{\mathcal{l}}_{i}\right)\text{ }\text{ }.$

We further define the energy space

$\mathcal{V}:=\left\{q\in {\prod }_{i\in \mathcal{I}}{H}^{1}\left(0,{\mathcal{l}}_{i}\right)|{q}_{i}\left({n}_{k}\right)=0,\text{\hspace{0.17em}}i\in {\mathcal{I}}_{k},\text{\hspace{0.17em}}{n}_{k}\in {\mathcal{J}}_{D}^{S},\underset{i\in {\mathcal{I}}_{k}}{\sum }{d}_{ik}{q}_{i}\left({n}_{k}\right)=0,\text{\hspace{0.17em}}k\in {\mathcal{J}}^{M}\right\}$

and the operator $\mathcal{A}$ as follows:

$\mathcal{A}q:={\alpha }_{i}{q}_{i}\left(x\right)-{\beta }_{i}{\partial }_{xx}{q}_{i}\left(x\right)\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{in}\text{\hspace{0.17em}}\mathcal{H}$

$\begin{array}{l}D\left(\mathcal{A}\right):=\left\{q\in \mathcal{V}|{\beta }_{i}{\partial }_{x}{q}_{i}\left({n}_{k}\right)={\beta }_{j}{\partial }_{x}{q}_{j}\left({n}_{k}\right),\text{\hspace{0.17em}}i\ne j\in {\mathcal{I}}_{k},\text{\hspace{0.17em}}k\in {\mathcal{J}}^{M},\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\partial }_{x}{q}_{i}\left({n}_{k}\right)=0,\text{\hspace{0.17em}}i\in {\mathcal{I}}_{k},\text{\hspace{0.17em}}{n}_{k}\in {\mathcal{J}}_{N}^{S}\right\}.\end{array}$ (45)

It is a matter of applying standard integration by parts to show that, indeed, $\mathcal{A}$ is symmetric and positive definite in $\mathcal{H}$ , so that $\mathcal{A}$ extends to a self-adjoint operator in $\mathcal{H}$ . It is then standard to show that $\mathcal{A}$ can be extended to a bounded coercive map from $\mathcal{V}$ into its dual ${\mathcal{V}}^{*}$ . If we assume (33) and define the Nemytskii operator $\mathcal{G}\left(q\right)\left(x\right):=g\left(q\left(x\right)\right)$ , then $\mathcal{G}$ is strictly monotone and continuous. According to  , $\mathcal{A}+\mathcal{G}$ is then strictly monotone and continuous and, hence, the semi-linear problem admits a unique solution $q\in \mathcal{V}$ . Clearly, for regular right hand sides $f$ , the solution lies in $D\left(\mathcal{A}\right)$ .

5.2. Smoothness of the Control-to-State-Map

Let ${\stackrel{^}{q}}_{t}\left(\stackrel{^}{u}\right)$ be the solution of (12) with $u$ replaced by $u+t\stackrel{^}{u}$ and let $q$ be the solution of (12). We denote by $e:={\stackrel{^}{q}}_{t}-q$ the difference of these solutions. We obtain

$\begin{array}{l}{\alpha }_{i}{e}_{i}-{\beta }_{i}{\partial }_{xx}{e}_{i}+{g}_{i}\left({e}_{i}+{q}_{i}\right)-{g}_{i}\left({q}_{i}\right)=0,\text{\hspace{0.17em}}x\in \left(0,{\mathcal{l}}_{i}\right),\text{\hspace{0.17em}}i\in \mathcal{I}\\ {\beta }_{i}{\partial }_{x}{e}_{i}\left({n}_{k}\right)={\beta }_{j}{\partial }_{x}{e}_{j}\left({n}_{k}\right),\text{\hspace{0.17em}}i\ne j\in {\mathcal{I}}_{k},\text{\hspace{0.17em}}k\in {\mathcal{J}}^{M}\\ {e}_{i}\left({n}_{k}\right)=0,\text{\hspace{0.17em}}i\in {\mathcal{I}}_{k},\text{\hspace{0.17em}}{n}_{k}\in {\mathcal{J}}_{D}^{S}\\ {\partial }_{x}{e}_{i}\left({n}_{k}\right)=t{\stackrel{^}{u}}_{k},\text{\hspace{0.17em}}i\in {\mathcal{I}}_{k},\text{\hspace{0.17em}}{n}_{k}\in {\mathcal{J}}_{N}^{S}\\ \underset{i\in {\mathcal{I}}_{k}}{\sum }{d}_{ik}{e}_{i}\left({n}_{k}\right)=0,\text{\hspace{0.17em}}k\in {\mathcal{J}}^{M}.\end{array}$ (46)

Dividing (46) by $t$ and letting $t$ tend to zero implies, with ${e}^{\prime }:={e}^{\prime }\left(u\right)\left(\stackrel{^}{u}\right)=\delta e\left(u\right)\left(\stackrel{^}{u}\right)$ ,

$\begin{array}{l}{\alpha }_{i}{{e}^{\prime }}_{i}\left(x\right)-{\beta }_{i}{\partial }_{xx}{{e}^{\prime }}_{i}\left(x\right)+{{g}^{\prime }}_{i}\left({q}_{i}\right)\left(x\right){{e}^{\prime }}_{i}\left(x\right)=0,\text{\hspace{0.17em}}x\in \left(0,{\mathcal{l}}_{i}\right),\text{\hspace{0.17em}}i\in \mathcal{I}\\ {\beta }_{i}{\partial }_{x}{{e}^{\prime }}_{i}\left({n}_{k}\right)={\beta }_{j}{\partial }_{x}{{e}^{\prime }}_{j}\left({n}_{k}\right),\text{\hspace{0.17em}}i\ne j\in {\mathcal{I}}_{k},\text{\hspace{0.17em}}k\in {\mathcal{J}}^{M}\\ {{e}^{\prime }}_{i}\left({n}_{k}\right)=0,\text{\hspace{0.17em}}i\in {\mathcal{I}}_{k},\text{\hspace{0.17em}}{n}_{k}\in {\mathcal{J}}_{D}^{S}\\ {\partial }_{x}{{e}^{\prime }}_{i}\left({n}_{k}\right)={\stackrel{^}{u}}_{k},\text{\hspace{0.17em}}i\in {\mathcal{I}}_{k},\text{\hspace{0.17em}}{n}_{k}\in {\mathcal{J}}_{N}^{S}\\ \underset{i\in {\mathcal{I}}_{k}}{\sum }{d}_{ik}{{e}^{\prime }}_{i}\left({n}_{k}\right)=0,\text{\hspace{0.17em}}k\in {\mathcal{J}}^{M}.\end{array}$ (47)

For the solution $q$ of (12), the standard Lax-Milgram lemma shows that ${e}^{\prime }$ uniquely solves the linear elliptic network problem (47) and, therefore, satisfies standard energy estimates. As the cost function in (39) is convex, the classical Weierstrass theorem shows that problem (39) admits a unique solution. One can then verify the conditions of the Ioffe-Tikhomirov theorem   in order to establish the first order optimality conditions (40).

Theorem 4. Under the assumption (33), for $f\in {\mathcal{V}}^{*}$ , there exists a unique solution $q\in \mathcal{V}$ of (12). In addition, the mapping from $u$ into $q$ is Gateaux differentiable. Moreover, the optimal control problem (39) admits a unique solution. The optimal solution is characterized by the optimality system of first order (40).

5.3. A Priori Error Estimates for the Optimality System

We denote the errors ${e}_{i}^{l}:={q}_{i}^{l}-{q}_{i}$ and ${p}_{i}^{l}:={\rho }_{i}^{l}-{\rho }_{i}$ for $i\in \mathcal{I}$ and $l=0,1,\cdots$ . These errors solve the following system:

$\begin{array}{l}{\alpha }_{i}{e}_{i}^{l+1}-{\beta }_{i}{\partial }_{xx}{e}_{i}^{l+1}+{g}_{i}\left({e}_{i}^{l+1}+{q}_{i}\right)-{g}_{i}\left({q}_{i}\right)=0,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{in}\text{\hspace{0.17em}}\left(0,{\mathcal{l}}_{i}\right),\text{\hspace{0.17em}}i\in \mathcal{I}\\ {\alpha }_{i}{p}_{i}^{l+1}-{\beta }_{i}{\partial }_{xx}{p}_{i}^{l+1}+{{g}^{\prime }}_{i}\left({e}_{i}^{l+1}+{q}_{i}\right){p}_{i}^{l+1}+\left({{g}^{\prime }}_{i}\left({e}_{i}^{l+1}+{q}_{i}\right)-{{g}^{\prime }}_{i}\left({q}_{i}\right)\right){\rho }_{i}\\ =-\kappa {e}_{i}^{l+1}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{in}\text{\hspace{0.17em}}\left(0,{\mathcal{l}}_{i}\right),\text{\hspace{0.17em}}i\in \mathcal{I}\\ {e}_{i}^{l+1}\left({n}_{k}\right)=0,\text{\hspace{0.17em}}{p}_{i}^{l+1}\left({n}_{k}\right)=0,\text{\hspace{0.17em}}i\in {\mathcal{I}}_{k},\text{\hspace{0.17em}}{n}_{k}\in {\mathcal{J}}_{D}^{S}\\ {\partial }_{x}{e}_{i}^{l+1}\left({n}_{k}\right)=\frac{1}{\nu }{p}_{i}^{l+1}\left({n}_{k}\right),\text{\hspace{0.17em}}{\partial }_{x}{p}_{i}^{l+1}\left({n}_{k}\right)=0,\text{\hspace{0.17em}}i\in {\mathcal{I}}_{k},\text{\hspace{0.17em}}{n}_{k}\in {\mathcal{J}}_{N}^{S}\\ {d}_{ik}{e}_{i}^{l+1}\left({n}_{k}\right)+{\lambda }_{k}{\partial }_{x}{e}_{i}^{l+1}\left({n}_{k}\right)+{\mu }_{k}{\partial }_{x}{p}_{i}^{l+1}\left({n}_{k}\right)\\ ={\lambda }_{k}\left(\frac{2}{{d}_{k}}\underset{j\in {\mathcal{I}}_{k}}{\sum }{\beta }_{j}{\partial }_{x}{e}_{j}^{l}\left({n}_{k}\right)-{\beta }_{i}{\partial }_{x}{e}_{i}^{l}\left({n}_{k}\right)\right)+{\mu }_{k}\left(\frac{2}{{d}_{k}}\underset{j\in {\mathcal{I}}_{k}}{\sum }{\beta }_{j}{\partial }_{x}{p}_{j}^{l}\left({n}_{k}\right)-{\beta }_{i}{\partial }_{x}{p}_{i}^{l}\left({n}_{k}\right)\right)\\ \text{ }-\left(\frac{2}{{d}_{k}}\underset{j\in {\mathcal{I}}_{k}}{\sum }{d}_{jk}{e}_{j}^{l}\left({n}_{k}\right)-{d}_{ik}{e}_{i}^{l}\left({n}_{k}\right)\right)={g}_{ik}^{l+1},\\ {d}_{ik}{p}_{i}^{l+1}\left({n}_{k}\right)+{\lambda }_{k}{\partial }_{x}{p}_{i}^{l+1}\left({n}_{k}\right)-{\mu }_{k}{\partial }_{x}{e}_{i}^{l+1}\left({n}_{k}\right)\\ ={\lambda }_{k}\left(\frac{2}{{d}_{k}}\underset{j\in {\mathcal{I}}_{k}}{\sum }{\beta }_{j}{\partial }_{x}{p}_{j}^{l}\left({n}_{k}\right)-{\beta }_{i}{\partial }_{x}{p}_{i}^{l}\left({n}_{k}\right)\right)-{\mu }_{k}\left(\frac{2}{{d}_{k}}\underset{j\in {\mathcal{I}}_{k}}{\sum }{\beta }_{j}{\partial }_{x}{e}_{j}^{l}\left({n}_{k}\right)-{\beta }_{i}{\partial }_{x}{e}_{i}^{l}\left({n}_{k}\right)\right)\\ \text{ }-\left(\frac{2}{{d}_{k}}\underset{j\in {\mathcal{I}}_{k}}{\sum }{d}_{jk}{p}_{j}^{l}\left({n}_{k}\right)-{d}_{ik}{p}_{i}^{l}\left({n}_{k}\right)\right)={h}_{ik}^{l+1},\text{\hspace{0.17em}}k\in {\mathcal{J}}^{M}.\end{array}$ (48)

We prove the following

Lemma 5. The solutions ${e}_{i},{p}_{i}$ , $i\in \mathcal{I}$ , of (48) satisfy the estimate

${‖{e}_{i}‖}_{{H}^{1}\left(0,{\mathcal{l}}_{i}\right)}^{2}+{‖{p}_{i}‖}_{{H}^{1}\left(0,{\mathcal{l}}_{i}\right)}^{2}+\gamma \left({e}_{i}{\left(0\right)}^{2}+{e}_{i}{\left({\mathcal{l}}_{i}\right)}^{2}+{p}_{i}{\left(0\right)}^{2}+{p}_{i}{\left({\mathcal{l}}_{i}\right)}^{2}\right)\le C\left\{{g}_{ij}^{2}+{h}_{ij}^{2}\right\}.$ (49)

More precisely, for ${\lambda }_{k}=\lambda ,\forall k\in {\mathcal{J}}^{M}$ , we obtain

$\left({e}_{i}{\left({\mathcal{l}}_{i}\right)}^{2}+{p}_{i}{\left({\mathcal{l}}_{i}\right)}^{2}\right)\le \frac{4\nu }{\lambda }\underset{i=1}{\overset{2}{\sum }}\left\{{g}_{ij}^{2}+{h}_{ij}^{2}\right\}.$ (50)

Remark 5.1. As a result, for small data ${g}_{ij},{h}_{ij}$ , we have small solutions.

Proof of Lemma 5: We multiply the equations in (48) by ${e}_{i}^{l+1}$ and ${p}_{i}^{l+1}$ , respectively, integrate and then use integration by parts. For the sake of brevity, we leave the full arguments to the reader.

5.4. Convergence

In order to analyze the convergence of the iteration, we collect the interface data in the space

$\left(g,h\right)\in \mathcal{X}:={\prod }_{k\in {\mathcal{J}}^{M}}{\prod }_{i\in {\mathcal{I}}_{k}}{ℝ}^{2},\text{\hspace{0.17em}}{‖\left(g,h\right)‖}_{\mathcal{X}}^{2}:=\underset{k\in {\mathcal{J}}^{M}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }\left({|{g}_{i,k}|}^{2}+{|{h}_{i,k}|}^{2}\right),$ (51)

$\mathcal{T}:\mathcal{X}\to \mathcal{X},$ (52)

$\begin{array}{l}\mathcal{T}{\left(g,h\right)}_{i,k}:=\left({\mathcal{S}}^{k}{\left(2\left({\lambda }_{k}{\partial }_{x}\left({e}^{k}\right)+{\mu }_{k}{\partial }_{x}\left({p}^{k}\right)\right)-{g}^{k}\right)}_{i},\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\mathcal{S}}^{k}{\left(2\left({\lambda }_{k}{\partial }_{x}\left({p}^{k}\right)-{\mu }_{k}{\partial }_{x}\left({e}^{k}\right)\right)-{h}^{k}\right)}_{i}\right),\text{\hspace{0.17em}}k\in {\mathcal{J}}^{M},i\in {\mathcal{I}}_{k},\end{array}$

$\mathcal{T}{\left(g,h\right)}_{k}=\left\{\left(\mathcal{T}\right){\left(g,h\right)}_{i,k},\text{\hspace{0.17em}}i\in {\mathcal{I}}_{k}\right\},$

$\mathcal{T}\left(g,h\right)=\left\{\mathcal{T}{\left(g,h\right)}_{k},\text{\hspace{0.17em}}k\in {\mathcal{J}}^{M}\right\}.$

Now,

#Math_233# (53)

We multiply the state equation for the errors ${e}_{i}$ , ${p}_{i}$ by ${e}_{i}$ and ${p}_{i}$ , respectively.

$\begin{array}{c}0=\underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}\left({\alpha }_{i}{e}_{i}-{\beta }_{i}{\partial }_{xx}{e}_{i}+{g}_{i}\left({e}_{i}+{q}_{i}\right)-{g}_{i}\left({q}_{i}\right)\right){e}_{i}\text{d}x\\ =-\underset{k\in \mathcal{J}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }{d}_{ik}{\beta }_{i}{\partial }_{x}{e}_{i}\left({n}_{k}\right){e}_{i}\left({n}_{k}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}\left\{{\alpha }_{i}{\left({e}_{i}\right)}^{2}+{\beta }_{i}{\left({\partial }_{x}{e}_{i}\right)}^{2}+\left({g}_{i}\left({e}_{i}+{q}_{i}\right)-{g}_{i}\left({q}_{i}\right)\right){e}_{i}\right\}\text{d}x,\end{array}$ (54)

$\begin{array}{c}0=\underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}\left({\alpha }_{i}{p}_{i}-{\beta }_{i}{\partial }_{xx}{p}_{i}+{{g}^{\prime }}_{i}\left({e}_{i}+{q}_{i}\right){p}_{i}+\left({{g}^{\prime }}_{i}\left({e}_{i}+{q}_{i}\right)-{{g}^{\prime }}_{i}\left({q}_{i}\right)\right){\rho }_{i}+{\kappa }_{i}{e}_{i}\right){p}_{i}\text{d}x\\ =-\underset{k\in \mathcal{J}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }{d}_{ik}{\beta }_{i}{\partial }_{x}{p}_{i}\left({n}_{k}\right){p}_{i}\left({n}_{k}\right)+\underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}\left\{{\alpha }_{i}{\left({p}_{i}\right)}^{2}+{\beta }_{i}{\left({\partial }_{x}{p}_{i}\right)}^{2}\\ \text{ }+{{g}^{\prime }}_{i}\left({e}_{i}+{q}_{i}\right){p}_{i}^{2}+\left({{g}^{\prime }}_{i}\left({e}_{i}+{q}_{i}\right)-{{g}^{\prime }}_{i}\left({q}_{i}\right)\right){\rho }_{i}{p}_{i}+{\kappa }_{i}{e}_{i}{p}_{i}\right\}\text{d}x.\end{array}$ (55)

Now, we reverse the roles and obtain

$\begin{array}{c}0=\underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}\left({\alpha }_{i}{e}_{i}-{\beta }_{i}{\partial }_{xx}{e}_{i}+{g}_{i}\left({e}_{i}+{q}_{i}\right)-{g}_{i}\left({q}_{i}\right)\right){p}_{i}\text{d}x\\ =-\underset{k\in \mathcal{J}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }{d}_{ik}{\beta }_{i}{\partial }_{x}{e}_{i}\left({n}_{k}\right){p}_{i}\left({n}_{k}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}\left\{{\alpha }_{i}{e}_{i}{p}_{i}+{\beta }_{i}{\partial }_{x}{e}_{i}{\partial }_{x}{p}_{i}+\left({g}_{i}\left({e}_{i}+{q}_{i}\right)-{g}_{i}\left({q}_{i}\right)\right){p}_{i}\right\}\text{d}x,\end{array}$ (56)

$\begin{array}{c}0=\underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}\left({\alpha }_{i}{p}_{i}-{\beta }_{i}{\partial }_{xx}{p}_{i}+{{g}^{\prime }}_{i}\left({e}_{i}+{q}_{i}\right){p}_{i}+\left({{g}^{\prime }}_{i}\left({e}_{i}+{q}_{i}\right)-{{g}^{\prime }}_{i}\left({q}_{i}\right)\right){\rho }_{i}+{\kappa }_{i}{e}_{i}\right){e}_{i}\text{d}x\\ =-\underset{k\in \mathcal{J}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }{d}_{ik}{\beta }_{i}{\partial }_{x}{p}_{i}\left({n}_{k}\right){e}_{i}\left({n}_{k}\right)+\underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}\left\{{\alpha }_{i}{p}_{i}{e}_{i}+{\beta }_{i}{\partial }_{x}{p}_{i}{\partial }_{x}{e}_{i}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{{g}^{\prime }}_{i}\left({e}_{i}+{q}_{i}\right){p}_{i}{e}_{i}+\left({{g}^{\prime }}_{i}\left({e}_{i}+{q}_{i}\right)-{{g}^{\prime }}_{i}\left({q}_{i}\right)\right){\rho }_{i}{e}_{i}+{\kappa }_{i}{e}_{i}^{2}\right\}\text{d}x.\end{array}$ (57)

From now on we assume that all ${\lambda }_{k}=:\lambda ,{\mu }_{k}=:\mu$ are independent of the node. We multiply (54) and (56) by λ and (55), (57) by μ and combine them in the following way: λ(54) + μ(57) and λ(55) - μ(56). This leads to

$\begin{array}{l}\underset{k\in \mathcal{J}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }{d}_{ik}{e}_{i}\left({n}_{k}\right)\left(\lambda {\beta }_{i}{\partial }_{x}{e}_{i}\left({n}_{k}\right)+\mu {\beta }_{i}{\partial }_{x}{p}_{i}\left({n}_{k}\right)\right)\\ =\underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}\lambda \left\{{\alpha }_{i}{\left({e}_{i}\right)}^{2}+{\beta }_{i}{\left({\partial }_{x}{e}_{i}\right)}^{2}+\left({g}_{i}\left({e}_{i}+{q}_{i}\right)-{g}_{i}\left({q}_{i}\right)\right){e}_{i}\right\}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\mu \left\{{\alpha }_{i}{e}_{i}{p}_{i}+{\beta }_{i}{\partial }_{x}{e}_{i}{\partial }_{x}{p}_{i}+{{g}^{\prime }}_{i}\left({e}_{i}+{q}_{i}\right){e}_{i}{p}_{i}+\left({{g}^{\prime }}_{i}\left({e}_{i}+{q}_{i}\right)-{{g}^{\prime }}_{i}\left({q}_{i}\right)\right){\rho }_{i}{e}_{i}+{\kappa }_{i}{e}_{i}^{2}\right\}\text{d}x\end{array}$

$\begin{array}{l}\underset{k\in \mathcal{J}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }{d}_{ik}{p}_{i}\left({n}_{k}\right)\left(\lambda {\beta }_{i}{\partial }_{x}{p}_{i}\left({n}_{k}\right)-\mu {\beta }_{i}{\partial }_{x}{e}_{i}\left({n}_{k}\right)\right)\\ =\underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}\lambda \left\{{\alpha }_{i}{\left({p}_{i}\right)}^{2}+{\beta }_{i}{\left({\partial }_{x}{p}_{i}\right)}^{2}+{{g}^{\prime }}_{i}\left({e}_{i}+{q}_{i}\right){p}_{i}^{2}+\left({{g}^{\prime }}_{i}\left({e}_{i}+{q}_{i}\right)-{{g}^{\prime }}_{i}\left({q}_{i}\right)\right){\rho }_{i}{p}_{i}+{\kappa }_{i}{e}_{i}{p}_{i}\right\}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}-\mu \left\{{\alpha }_{i}{e}_{i}{p}_{i}+{\beta }_{i}{\partial }_{x}{e}_{i}{\partial }_{x}{p}_{i}+\left({g}_{i}\left({e}_{i}+{q}_{i}\right)-{g}_{i}\left({q}_{i}\right)\right){p}_{i}\right\}\text{d}x\end{array}$

We add the latter equations and obtain

$\begin{array}{l}\underset{k\in \mathcal{J}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }{d}_{ik}{e}_{i}\left({n}_{k}\right)\left(\lambda {\beta }_{i}{\partial }_{x}{e}_{i}\left({n}_{k}\right)+\mu {\beta }_{i}{\partial }_{x}{p}_{i}\left({n}_{k}\right)\right)\\ +\underset{k\in \mathcal{J}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }{d}_{ik}{p}_{i}\left({n}_{k}\right)\left(\lambda {\beta }_{i}{\partial }_{x}{p}_{i}\left({n}_{k}\right)-\mu {\beta }_{i}{\partial }_{x}{e}_{i}\left({n}_{k}\right)\right)\\ =\underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}\lambda \left\{{\alpha }_{i}{e}_{i}^{2}+{\alpha }_{i}{p}_{i}^{2}+{\beta }_{i}{\left({\partial }_{x}{e}_{i}\right)}^{2}+{\beta }_{i}{\left({\partial }_{x}{p}_{i}\right)}^{2}\right\}\text{d}x+\underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}{\kappa }_{i}\left(\mu {e}_{i}^{2}+\lambda {p}_{i}{e}_{i}\right)\text{d}x\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{1}{\nu }\underset{k\in {\mathcal{J}}^{M}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }\left(\mu {\left({p}_{i}\left({n}_{k}\right)\right)}^{2}-\lambda {d}_{ik}{e}_{i}\left({n}_{k}\right){p}_{i}\left({n}_{k}\right)\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\lambda \underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}\left\{\left({g}_{i}\left({e}_{i}+{q}_{i}\right)-{g}_{i}\left({q}_{i}\right)\right){e}_{i}+{{g}^{\prime }}_{i}\left({e}_{i}+{q}_{i}\right){p}_{i}^{2}+\left({{g}^{\prime }}_{i}\left({e}_{i}+{q}_{i}\right)-{{g}^{\prime }}_{i}\left({q}_{i}\right)\right){\rho }_{i}{p}_{i}\right\}\text{d}x\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\mu \underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}\left\{{{g}^{\prime }}_{i}\left({e}_{i}+{q}_{i}\right){e}_{i}{p}_{i}+\left({{g}^{\prime }}_{i}\left({e}_{i}+{q}_{i}\right)-{{g}^{\prime }}_{i}\left({q}_{i}\right)\right){\rho }_{i}{e}_{i}-\left({g}_{i}\left({e}_{i}+{q}_{i}\right)-{g}_{i}\left({q}_{i}\right)\right){p}_{i}\right\}\text{d}x\\ \ge \underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}\lambda \left\{{\alpha }_{i}\left({e}_{i}^{2}+{p}_{i}^{2}\right)+{\beta }_{i}{\left({\partial }_{x}{e}_{i}\right)}^{2}+{\beta }_{i}{\left({\partial }_{x}{p}_{i}\right)}^{2}\right\}\text{d}x+\underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}{\kappa }_{i}\left(\mu {e}_{i}^{2}+\lambda {p}_{i}{e}_{i}\right)\text{d}x\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{1}{\nu }\underset{k\in {\mathcal{J}}^{M}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }\left(\mu {\left({p}_{i}\left({n}_{k}\right)\right)}^{2}-\lambda {d}_{ik}{e}_{i}\left({n}_{k}\right){p}_{i}\left({n}_{k}\right)\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}\left({{g}^{\prime }}_{i}\left({e}_{i}+{q}_{i}\right)-{{g}^{\prime }}_{i}\left({q}_{i}\right)\right){\rho }_{i}\left(\lambda {p}_{i}+\mu {e}_{i}\right)+\mu \left({{g}^{\prime }}_{i}\left({e}_{i}+{q}_{i}\right)-{{g}^{\prime }}_{i}\left({q}_{i}+\theta {e}_{i}\right)\right){e}_{i}{p}_{i}\text{d}x\\ =I+II+III.\end{array}$ (58)

We are going to estimate the third integral $III$ . To this end, we assume that ${{g}^{\prime }}_{i}$ is Lipschitz continuous with constant ${L}_{i}$ ; here $\theta \in \left(0,1\right)$ stems from the mean value theorem applied to ${g}_{i}$ .

$\begin{array}{c}III\le \underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}{L}_{i}|{\rho }_{i}||{e}_{i}|\left(\lambda |{p}_{i}|+\mu |{e}_{i}|\right)+\mu {L}_{i}\left(1-\theta \right)|{p}_{i}|{\left({e}_{i}\right)}^{2}\text{d}x\\ \le \underset{i\in \mathcal{I}}{\sum }\text{ }\text{ }{L}_{i}\left\{\left(\left(\mu +\frac{\lambda }{2}\right){‖{\rho }_{i}‖}_{{L}^{\infty }\left(0,{\mathcal{l}}_{i}\right)}+\mu {‖{p}_{i}‖}_{{L}^{\infty }\left(0,{\mathcal{l}}_{i}\right)}\right)\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}{e}_{i}^{2}\text{d}x+\frac{\lambda }{2}{‖{\rho }_{i}‖}_{{L}^{\infty }\left(0,{\mathcal{l}}_{i}\right)}\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}{p}_{i}^{2}\text{d}x\right\}\end{array}$ (59)

The second term contains quadratic expressions and mixed terms. The mixed terms need to be absorbed into the quadratic ones:

$\begin{array}{l}\underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}{\kappa }_{i}\left(\mu {e}_{i}^{2}+\lambda {p}_{i}{e}_{i}\right)\text{d}x\ge \underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}{\kappa }_{i}\left(\left(\mu -\frac{\lambda }{\delta }\right){e}_{i}^{2}-\lambda \delta {p}_{i}^{2}\right)\text{d}x\\ \frac{1}{\nu }\underset{k\in {\mathcal{J}}^{M}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }\left(\mu {\left({p}_{i}\left({n}_{k}\right)\right)}^{2}-\lambda {d}_{ik}{e}_{i}\left({n}_{k}\right){p}_{i}\left({n}_{k}\right)\right)\\ \ge \frac{1}{\nu }\underset{k\in {\mathcal{J}}^{M}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }\left(\left(\mu -\frac{\lambda }{\delta }\right){\left({p}_{i}\left({n}_{k}\right)\right)}^{2}-\lambda \delta {e}_{i}{\left({n}_{k}\right)}^{2}\right)\end{array}$ (60)
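Both absorptions rest on the elementary Young-type estimate, valid for any $\delta >0$ :

```latex
|p_i e_i| \;\le\; \frac{1}{2}\Big(\frac{1}{\delta}\,e_i^2 + \delta\,p_i^2\Big)
\qquad\Longrightarrow\qquad
\lambda\, p_i e_i \;\ge\; -\frac{\lambda}{\delta}\,e_i^2 - \lambda\delta\, p_i^2 .
```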

We now combine (58), (59), (60) in order to obtain

$\begin{array}{l}\underset{k\in \mathcal{J}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }{d}_{ik}{e}_{i}\left({n}_{k}\right)\left(\lambda {\beta }_{i}{\partial }_{x}{e}_{i}\left({n}_{k}\right)+\mu {\beta }_{i}{\partial }_{x}{p}_{i}\left({n}_{k}\right)\right)\\ +\underset{k\in \mathcal{J}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }{d}_{ik}{p}_{i}\left({n}_{k}\right)\left(\lambda {\beta }_{i}{\partial }_{x}{p}_{i}\left({n}_{k}\right)-\mu {\beta }_{i}{\partial }_{x}{e}_{i}\left({n}_{k}\right)\right)\\ \ge \underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}\lambda \left\{{\alpha }_{i}\left({e}_{i}^{2}+{p}_{i}^{2}\right)+{\beta }_{i}{\left({\partial }_{x}{e}_{i}\right)}^{2}+{\beta }_{i}{\left({\partial }_{x}{p}_{i}\right)}^{2}\right\}\text{d}x\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}{\kappa }_{i}\left(\left(\mu -\frac{\lambda }{\delta }\right){e}_{i}^{2}-\lambda \delta {p}_{i}^{2}\right)\text{d}x+\frac{1}{\nu }\underset{k\in {\mathcal{J}}^{M}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }\left(\left(\mu -\frac{\lambda }{\delta }\right){\left({p}_{i}\left({n}_{k}\right)\right)}^{2}-\lambda \delta {e}_{i}{\left({n}_{k}\right)}^{2}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}-\underset{i\in \mathcal{I}}{\sum }\text{ }\text{ }{L}_{i}\left\{\left(\left(\mu +\frac{\lambda }{2}\right){‖{\rho }_{i}‖}_{{L}^{\infty }\left(0,{\mathcal{l}}_{i}\right)}+\mu {‖{p}_{i}‖}_{{L}^{\infty }\left(0,{\mathcal{l}}_{i}\right)}\right)\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}{e}_{i}^{2}\text{d}x+\frac{\lambda }{2}{‖{\rho }_{i}‖}_{{L}^{\infty }\left(0,{\mathcal{l}}_{i}\right)}\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}{p}_{i}^{2}\text{d}x\right\}\end{array}$ (61)

We now group the corresponding quadratic expressions.

$\begin{array}{l}\underset{k\in \mathcal{J}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }{d}_{ik}{e}_{i}\left({n}_{k}\right)\left(\lambda {\beta }_{i}{\partial }_{x}{e}_{i}\left({n}_{k}\right)+\mu {\beta }_{i}{\partial }_{x}{p}_{i}\left({n}_{k}\right)\right)\\ +\underset{k\in \mathcal{J}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }{d}_{ik}{p}_{i}\left({n}_{k}\right)\left(\lambda {\beta }_{i}{\partial }_{x}{p}_{i}\left({n}_{k}\right)-\mu {\beta }_{i}{\partial }_{x}{e}_{i}\left({n}_{k}\right)\right)\\ \ge \underset{i\in \mathcal{I}}{\sum }\left(\lambda {\alpha }_{i}+{\kappa }_{i}\left(\mu -\frac{\lambda }{\delta }\right)-{L}_{i}\left(\left(\mu +\frac{\lambda }{2}\right){‖{\rho }_{i}‖}_{{L}^{\infty }\left(0,{\mathcal{l}}_{i}\right)}+\mu {‖{p}_{i}‖}_{{L}^{\infty }\left(0,{\mathcal{l}}_{i}\right)}\right)-\lambda \delta {c}_{i}^{2}\left(n\right)\right)\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}{e}_{i}^{2}\text{d}x\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\left(\lambda {\alpha }_{i}-\lambda \delta -{L}_{i}\frac{\lambda }{2}{‖{\rho }_{i}‖}_{{L}^{\infty }\left(0,{\mathcal{l}}_{i}\right)}\right)\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}{p}_{i}^{2}\text{d}x+\underset{i\in \mathcal{I}}{\sum }\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}\left\{\lambda \left(1-\delta {c}^{1}\left(n\right)\right){\left({\partial }_{x}{e}_{i}\right)}^{2}+\lambda {\left({\partial }_{x}{p}_{i}\right)}^{2}\right\}\text{d}x\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{1}{\nu }\underset{k\in {\mathcal{J}}^{M}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }\left(\left(\mu -\frac{\lambda }{\delta }\right){\left({p}_{i}\left({n}_{k}\right)\right)}^{2}\right),\end{array}$ (62)

where we have used the boundary estimate due to Kato  . We now need to discuss under which configuration of the parameters the coefficients in front of the quadratic terms

$\begin{array}{l}{a}_{i}:=\lambda {\alpha }_{i}+{\kappa }_{i}\left(\mu -\frac{\lambda }{\delta }\right)-{L}_{i}\left(\left(\mu +\frac{\lambda }{2}\right){‖{\rho }_{i}‖}_{{L}^{\infty }\left(0,{\mathcal{l}}_{i}\right)}+\mu {‖{p}_{i}‖}_{{L}^{\infty }\left(0,{\mathcal{l}}_{i}\right)}\right)-\lambda \delta {c}_{i}^{2}\left(n\right),\\ {b}_{i}:=\lambda \left({\alpha }_{i}-\delta -{L}_{i}\frac{1}{2}{‖{\rho }_{i}‖}_{{L}^{\infty }\left(0,{\mathcal{l}}_{i}\right)}\right)\end{array}$ (63)

can be made positive. Moreover, if $\mu -\frac{\lambda }{\delta }\le 0$, we need to absorb the corresponding boundary term, again using the boundary estimate above. It is obvious that ${b}_{i}$ can be positive iff $\left({\alpha }_{i}-\delta -{L}_{i}\frac{1}{2}{‖{\rho }_{i}‖}_{{L}^{\infty }\left(0,{\mathcal{l}}_{i}\right)}\right)$ and $\lambda$ are positive. The only parameters that we can select in order to achieve positive, respectively non-negative, coefficients are the two parameters ${\kappa }_{i},{\nu }_{i}$ coming from the cost function and the parameters $\lambda \ge 0,\mu \ge 0$ provided for the algorithm. Moreover, the coefficient ${\alpha }_{i}>0$ becomes relevant. We recall the meaning of ${\alpha }_{i}$: it is $\frac{1}{\Delta t}$! It thus becomes obvious from (63) that the norm of the reference solution to the adjoint equation, ${‖{\rho }_{i}‖}_{\infty }$, and the Lipschitz constant ${L}_{i}$, reflecting the stiffness of the nonlinear term, come into play. We therefore need small $\Delta t>0$ to compensate for the remaining terms. The question to be discussed below is whether the maximum norm of the solution ${\rho }_{i}$ of the adjoint equation, which, in turn, involves ${\alpha }_{i}$, is small compared to ${\alpha }_{i}$. Only in this case can we choose $\lambda >0$ in order to have ${b}_{i}>0$. If, on the other hand, $\lambda =0$, we have to compensate for ${L}_{i}‖{p}_{i}‖$, in this case the adjoint error, by choosing ${\kappa }_{i}$ sufficiently large and $\mu >0$ in order to have ${a}_{i}>0$. The appearance of the adjoint error, in the case $\mu >0$, necessitates an a posteriori error estimate. We discuss the following cases: $\lambda =0,\mu >0$; $\lambda >0,\mu =0$; and $\lambda >0,\mu >0$:

[Case 1.] $\lambda =0,\mu >0$ :

${a}_{i}=\mu \left({\kappa }_{i}-{L}_{i}\left({‖{\rho }_{i}‖}_{{L}^{\infty }\left(0,{\mathcal{l}}_{i}\right)}+{‖{p}_{i}‖}_{{L}^{\infty }\left(0,{\mathcal{l}}_{i}\right)}\right)\right),\text{\hspace{0.17em}}{b}_{i}:=0$ (64)

In this case

$\begin{array}{l}\underset{k\in \mathcal{J}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }{d}_{ik}{e}_{i}\left({n}_{k}\right)\left(\lambda {\beta }_{i}{\partial }_{x}{e}_{i}\left({n}_{k}\right)+\mu {\beta }_{i}{\partial }_{x}{p}_{i}\left({n}_{k}\right)\right)\\ +\underset{k\in \mathcal{J}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }{d}_{ik}{p}_{i}\left({n}_{k}\right){\beta }_{i}\left(\lambda {\partial }_{x}{p}_{i}\left({n}_{k}\right)-\mu {\partial }_{x}{e}_{i}\left({n}_{k}\right)\right)\\ \ge \underset{i\in \mathcal{I}}{\sum }\text{ }\text{ }\mu \left({\kappa }_{i}-{L}_{i}\left({‖{\rho }_{i}‖}_{{L}^{\infty }\left(0,{\mathcal{l}}_{i}\right)}+{‖{p}_{i}‖}_{{L}^{\infty }\left(0,{\mathcal{l}}_{i}\right)}\right)\right)\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}{e}_{i}^{2}\text{d}x+\frac{\mu }{\nu }\underset{k\in {\mathcal{J}}^{M}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }{\left({p}_{i}\left({n}_{k}\right)\right)}^{2}.\end{array}$ (65)

As mentioned above, this case involves both the reference adjoint and the adjoint error. Moreover, in this case the convergence of the iteration is determined solely by the choice of the cost parameters in that ${\kappa }_{i}$ has to be sufficiently large, while ${\alpha }_{i}$ plays no role.

[Case 2.] $\lambda >0,\mu =0$ :

$\begin{array}{l}{a}_{i}:=\lambda \left({\alpha }_{i}-{\kappa }_{i}\frac{1}{\delta }-{L}_{i}\left(\frac{1}{2}{‖{\rho }_{i}‖}_{{L}^{\infty }\left(0,{\mathcal{l}}_{i}\right)}\right)-\delta {c}_{i}^{2}\left(n\right)\right)\\ {b}_{i}:=\lambda \left({\alpha }_{i}-\delta -{L}_{i}\frac{1}{2}{‖{\rho }_{i}‖}_{{L}^{\infty }\left(0,{\mathcal{l}}_{i}\right)}\right).\end{array}$ (66)

In this case we obtain

$\begin{array}{l}\underset{k\in \mathcal{J}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }{d}_{ik}{e}_{i}\left({n}_{k}\right)\left(\lambda {\beta }_{i}{\partial }_{x}{e}_{i}\left({n}_{k}\right)+\mu {\beta }_{i}{\partial }_{x}{p}_{i}\left({n}_{k}\right)\right)\\ +\underset{k\in \mathcal{J}}{\sum }\underset{i\in {\mathcal{I}}_{k}}{\sum }{d}_{ik}{p}_{i}\left({n}_{k}\right){\beta }_{i}\left(\lambda {\partial }_{x}{p}_{i}\left({n}_{k}\right)-\mu {\partial }_{x}{e}_{i}\left({n}_{k}\right)\right)\\ \ge \lambda \underset{i\in \mathcal{I}}{\sum }\left\{\left({\alpha }_{i}-{\kappa }_{i}\frac{1}{\delta }-{L}_{i}\frac{1}{2}{‖{\rho }_{i}‖}_{{L}^{\infty }\left(0,{\mathcal{l}}_{i}\right)}-\delta {c}_{i}^{2}\left(n\right)\right)\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}{e}_{i}^{2}\text{d}x\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\left({\alpha }_{i}-\delta -{L}_{i}\frac{1}{2}{‖{\rho }_{i}‖}_{{L}^{\infty }\left(0,{\mathcal{l}}_{i}\right)}\right)\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}{p}_{i}^{2}\text{d}x\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}\left(1-\delta {c}^{1}\left(n\right)\right)\left({\left({\partial }_{x}{e}_{i}\right)}^{2}+{\left({\partial }_{x}{p}_{i}\right)}^{2}\right)\text{d}x\right\}.\end{array}$ (67)

In this case ${\alpha }_{i}$ has to absorb all negative terms and, in fact, the penalty parameter ${\kappa }_{i}$ acts in the opposite sense to that in Case 1. One needs to balance $\kappa$, $\delta$ and $n$ in the coefficients ${c}^{1}\left(n\right),{c}_{i}^{2}\left(n\right)$ in order to make the coefficients of all quadratic terms positive. We are then left with the question of whether the norm $‖{\rho }_{i}‖$ is small compared to ${\alpha }_{i}$ for suitably large ${\alpha }_{i}$. Now, the adjoint ${\rho }_{i}$ of the full network problem has as data the right-hand side $-{\kappa }_{i}\left({q}_{i}-{q}_{i}^{0}\right)$ and the ${q}_{i}$-dependent coefficient ${{g}^{\prime }}_{i}\left({q}_{i}\right)$. For a given ${q}_{i}$, the corresponding network equation is linear and, by the Lax-Milgram lemma, the solution ${\rho }_{i}$ satisfies an estimate against the data, which, in turn, depend on the original solution ${q}_{i}$.
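The Lax-Milgram estimate just invoked can be made explicit. A brief sketch, under the illustrative assumption that the bilinear form $a\left(\cdot ,\cdot \right)$ of the linearized adjoint problem on an edge is coercive on ${H}^{1}\left(0,{\mathcal{l}}_{i}\right)$ with a constant ${c}_{0}>0$ (the symbol ${c}_{0}$ is not taken from the text): testing the weak form with ${\rho }_{i}$ itself gives

${c}_{0}{‖{\rho }_{i}‖}_{{H}^{1}\left(0,{\mathcal{l}}_{i}\right)}^{2}\le a\left({\rho }_{i},{\rho }_{i}\right)=-{\kappa }_{i}\underset{0}{\overset{{\mathcal{l}}_{i}}{\int }}\left({q}_{i}-{q}_{i}^{0}\right){\rho }_{i}\text{d}x\le {\kappa }_{i}{‖{q}_{i}-{q}_{i}^{0}‖}_{{L}^{2}\left(0,{\mathcal{l}}_{i}\right)}{‖{\rho }_{i}‖}_{{H}^{1}\left(0,{\mathcal{l}}_{i}\right)},$

hence ${‖{\rho }_{i}‖}_{{H}^{1}}\le \frac{{\kappa }_{i}}{{c}_{0}}{‖{q}_{i}-{q}_{i}^{0}‖}_{{L}^{2}}$. Combined with the one-dimensional embedding ${H}^{1}\left(0,{\mathcal{l}}_{i}\right)\hookrightarrow {L}^{\infty }\left(0,{\mathcal{l}}_{i}\right)$, this bounds ${‖{\rho }_{i}‖}_{{L}^{\infty }\left(0,{\mathcal{l}}_{i}\right)}$ by the data, which is precisely the quantity that must remain small relative to ${\alpha }_{i}$.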

[Case 3.] $\lambda >0,\mu >0$ : In this case ${a}_{i},{b}_{i}$ in (63) need to be positive. This can be achieved in general if ${\alpha }_{i}$ is large, ${\kappa }_{i}$ is small and $\mu$ is large compared to $\lambda$. A more explicit analysis is possible, but is omitted for the sake of brevity.
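The positivity conditions in the three cases are elementary to check numerically. A minimal sketch, using the general coefficients from (63); the parameter values below are illustrative only and not taken from the paper:

```python
def coeffs(lam, mu, alpha, kappa, delta, L, rho_norm, p_norm, c2):
    """Coefficients a_i, b_i of the quadratic terms in (63).

    lam, mu  : algorithmic parameters lambda, mu
    alpha    : alpha_i = 1/Delta t, kappa: cost penalty kappa_i
    delta    : Young's-inequality parameter, L: Lipschitz constant L_i
    rho_norm : L^inf norm of the reference adjoint rho_i
    p_norm   : L^inf norm of the adjoint error p_i
    c2       : boundary-estimate constant c_i^2(n)
    """
    a = (lam * alpha + kappa * (mu - lam / delta)
         - L * ((mu + lam / 2) * rho_norm + mu * p_norm)
         - lam * delta * c2)
    b = lam * (alpha - delta - L * rho_norm / 2)
    return a, b

# Case 1 (lam = 0, mu > 0): reduces to a_i = mu*(kappa - L*(rho + p)), b_i = 0.
a1, b1 = coeffs(0.0, 1.0, alpha=10.0, kappa=5.0, delta=1.0, L=1.0,
                rho_norm=0.5, p_norm=0.5, c2=1.0)

# Case 2 (lam > 0, mu = 0): reduces to the expressions in (66).
a2, b2 = coeffs(1.0, 0.0, alpha=10.0, kappa=2.0, delta=1.0, L=1.0,
                rho_norm=0.5, p_norm=0.5, c2=1.0)
```

For the Case-1 values one obtains ${a}_{i}=4>0$, and for the Case-2 values ${a}_{i}=6.75>0$, ${b}_{i}=8.75>0$; a large $\alpha_i$ (small $\Delta t$) visibly dominates both coefficients in Case 2, as claimed in the text.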

Theorem 6. Under the positivity assumptions in Cases 1, 2 and 3, the iterations converge: the solutions ${q}^{l}={\left({q}_{i}^{l}\right)}_{i\in \mathcal{I}}$ of the iterative process (41), which describes the local optimality systems on the individual edges, converge to the solution of the optimality system (40). In Cases 2 and 3, ${q}_{i}^{l},{p}_{i}^{l}$ converge to ${q}_{i}^{0},{p}_{i}^{0}$ in the energy sense; in Case 1, convergence takes place in the ${L}^{2}$-sense.

Example 7. We consider the following numerical example. We take $\alpha =10$, $\kappa =10$, $\nu =1$ and ${f}_{i}=\alpha x$, with Neumann controls at all simple nodes. Clearly, the exact solution of the linear problem, i.e. of the problem with nonlinearity $\gamma g\left(q\right)$, $g\left(q\right)=|q|q$, for $\gamma =0$, is ${q}_{i}=x,i=1,2,3$, where the adjoints have Dirichlet traces equal to 1 and Neumann traces equal to 0 by construction. This, however, is achieved only for a very large penalty $\kappa$. We compute the solution with nonlinearity $\gamma =0.1$ using the MATLAB routine bvp4c with tolerance 1e-6. As for the domain decomposition, we take 15 steps. The nonlinearity is taken into account by an inner fixed-point loop with 15 iterations. The parameters $\lambda ,\mu$ are each chosen as 0.1. The results are shown in Figures 5-10. In Figure 11 and Figure 12, we display the numerical results for the same setup, but now with ${\alpha }_{i}=1000$, ${\kappa }_{i}=100$, $\mu =1$, $\lambda =0$, $\gamma =0.1$ and relaxation parameter $ϵ=0.1$. Figure 13 reveals that, due to the optimality of the functions ${q}_{i}\left(x\right)=x$, the adjoint, being forced to have zero Neumann data, vanishes on almost the entire interval and is nontrivial only in the last part. Clearly, the three reference solutions and adjoints are plotted on top of the DDM solutions; no difference is visible.
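The single-edge solve underlying Example 7 can be reproduced with SciPy's solve_bvp in place of MATLAB's bvp4c. A minimal sketch, assuming unit edge length and unit diffusion coefficient ${\beta }_{i}=1$ (not stated explicitly in the example), with the Neumann data ${q}^{\prime }\left(0\right)={q}^{\prime }\left(1\right)=1$ chosen so that $q\left(x\right)=x$ solves the $\gamma =0$ problem:

```python
import numpy as np
from scipy.integrate import solve_bvp

ALPHA = 10.0  # alpha = 1/Delta t, as in Example 7


def solve_edge(gamma, n=50):
    """Solve alpha*q - q'' + gamma*|q|q = alpha*x on (0, 1) with
    Neumann data q'(0) = q'(1) = 1; for gamma = 0 the exact solution
    is q(x) = x."""
    def ode(x, y):  # y = [q, q']; rewrite as first-order system
        return np.vstack([y[1],
                          ALPHA * y[0] + gamma * np.abs(y[0]) * y[0] - ALPHA * x])

    def bc(ya, yb):  # Neumann boundary residuals
        return np.array([ya[1] - 1.0, yb[1] - 1.0])

    x = np.linspace(0.0, 1.0, n)
    y0 = np.vstack([x, np.ones_like(x)])  # initial guess: gamma = 0 solution
    return solve_bvp(ode, bc, x, y0, tol=1e-6)


sol_lin = solve_edge(0.0)   # recovers q(x) = x
sol_nl = solve_edge(0.1)    # mildly perturbed by the nonlinearity
```

With $\gamma =0.1$ the computed state deviates only slightly from $q\left(x\right)=x$, consistent with the figures; an inner fixed-point loop over the frozen nonlinearity, as used in the example, would converge to the same solution.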

Figure 5. Forward solutions.

Figure 7. Optimal tripod.

Figure 8. Errors.

Figure 9. Errors of relaxed iteration.

Figure 11. Optimal tripod.

Figure 12. Optimal state.

Acknowledgements

The author acknowledges support by the DFG TRR154 Mathematische Modellierung, Simulation und Optimierung am Beispiel von Gasnetzwerken (TPA05).

Conflicts of Interest

The author declares no conflicts of interest.

References

Brouwer, J., Gasser, I. and Herty, M. (2011) Gas Pipeline Models Revisited: Model Hierarchies, Nonisothermal Models, and Simulations of Networks. Multiscale Modeling & Simulation, 9, 601-623. https://doi.org/10.1137/100813580

LeVeque, R.J. (2002) Finite Volume Methods for Hyperbolic Problems. Cambridge University Press, Cambridge.

LeVeque, R.J. (1992) Numerical Methods for Conservation Laws. Birkhäuser-Verlag, Basel.

Smoller, J. (1994) Shock Waves and Reaction-Diffusion Equations (Volume 258 of Grundlehren der Mathematischen Wissenschaften). Springer-Verlag, Berlin.

Gugat, M., Leugering, G., Martin, A., Schmidt, M., Sirvent, M. and Wintergerst, D. (2016) Towards Simulation Based Mixed-Integer Optimization with Differential Equations. Submitted. https://opus4.kobv.de/opus4-trr154/frontdoor/index/index/docId/63

Gugat, M., Leugering, G., Martin, A., Schmidt, M., Sirvent, M. and Wintergerst, D. (2017) MIP-Based Instantaneous Control of Mixed-Integer PDE-Constrained Gas Transport Problems. Submitted. https://opus4.kobv.de/opus4-trr154/frontdoor/index/index/docId/140

Hante, F., Leugering, G., Martin, A., Schewe, L. and Schmidt, M. (2017) Challenges in Optimal Control Problems for Gas and Fluid Flow in Networks of Pipes and Canals: From Modeling to Industrial Applications. In: ISIAM Proceedings, Springer-Verlag, Berlin. https://opus4.kobv.de/opus4-trr154/frontdoor/index/index/docId/121

Hinze, M. and Volkwein, S. (2002) Analysis of Instantaneous Control for the Burgers Equation. Nonlinear Analysis, 50, 1-26.

Hinze, M. and Volkwein, S. (2001) Instantaneous Control of Vibrating String Networks. In: Online Optimization of Large Scale Systems, Springer-Verlag, Berlin, 229-249.

Lagnese, J.E. and Leugering, G. (2003) Domain Decomposition Methods in Optimal Control of Partial Differential Equations (Volume 148 of International Series of Numerical Mathematics). Birkhäuser Verlag, Basel.

Roubicek, T. (2013) Nonlinear Partial Differential Equations with Applications (Volume 153 of International Series of Numerical Mathematics). 2nd Edition, Birkhäuser, Basel. https://doi.org/10.1007/978-3-0348-0513-1

Kogut, P.I. and Leugering, G. (2011) Optimal Control Problems for Partial Differential Equations on Reticulated Domains. Systems and Control: Foundations and Applications, Springer, New York. https://doi.org/10.1007/978-0-8176-8149-4

Kato, T. (1976) Perturbation Theory for Linear Operators (Grundlehren der Mathematischen Wissenschaften, Band 132). 2nd Edition, Springer-Verlag, Berlin-New York.