Numerical Solutions to the Robin Inverse Problem with Nonnegativity Constraints
1. Introduction
Consider the Robin boundary value problem for the Laplace equation on a smooth bounded domain Ω ⊂ ℝ² with boundary Γ = ∂Ω:

(1.1)  Δu = 0 in Ω,  ∂u/∂ν + p u = g on Γ.

Here p ≥ 0 is the Robin coefficient and g is a prescribed input function on Γ. It is assumed that the support of p (denoted by Γ₁) and the support of g are nonempty but non-overlapping. The Robin inverse problem is then described as: recover the Robin coefficient p on Γ₁ from given values of the solution u on a boundary segment Γ₂ ⊂ Γ with Γ₂ ∩ Γ₁ = ∅. Such problems arise from the modeling of various nondestructive evaluation techniques, where the solution u can be measured directly on an accessible part Γ₂ of the boundary in the hope of extracting the quantity of interest p defined on an inaccessible portion Γ₁ of the boundary. For example, in the evaluation of metal-to-silicon contact quality in semiconductor transistors (e.g. [1] [2] [3]), voltage measurements of u on an accessible Γ₂ in response to an applied current input pattern g are used to infer the contact quality modeled by p on the inaccessible contact window Γ₁. A similar situation arises in the detection of material corrosion damage (e.g. [4] [5]), where a static potential measurement u is taken on an accessible part Γ₂ of the material boundary to detect possible corrosion characterized by the conductivity profile p on an inaccessible Γ₁.
For this inverse problem, there have been many numerical studies in the literature, based either on the PDE model (e.g. [5] [6] [7] and references therein) or on an integral equation formulation (e.g. [3] [8] [9] and references therein). Most of these papers have focused on the recovery of the Robin coefficient by iterative schemes for the minimizers of some objective functional that consists of both a data-fitting term and a certain form of regularization, while the nonnegativity constraint on the Robin coefficient is often ignored, apart from simple truncation strategies applied on the side to ensure the well-posedness of the forward solver. It is well known that the inclusion of nonnegativity constraints improves the quality of solutions to ill-posed inverse problems when the quantity of interest is known to be nonnegative (e.g. [10], Chapter 9). In this study, we propose a strategy to incorporate nonnegativity constraints into the iterative schemes, in the form of an additional term in the objective functional that penalizes negativity throughout the iteration process. This approach is natural in the least-squares formulation for solving inverse or ill-posed problems in general, and it provides a convenient and computationally economical alternative to constrained optimization methods. We demonstrate the simplicity of this strategy in both linear and nonlinear least-squares solution methods for the Robin inverse problem, and through numerical examples we illustrate the effectiveness of the proposed approach in dealing with the ill-posedness, resulting in much improved recovery results.
Our presentation is organized as follows. In Section 2, we reformulate the problem (1.1) into its equivalent boundary integral equation formulation. Based on this formulation, we present two least-squares solution methods for the inverse problem in Section 3, with nonnegativity constraints incorporated into the iterative schemes. In Section 4, we conclude with discussions of numerical implementation of the recovery algorithms, and examples of numerical results to illustrate how the added nonnegativity constraints have helped to improve solutions to the inverse problem.
2. Integral Equation Formulation
Because the equation in (1.1) is the Laplace equation over the domain Ω, while the quantities of interest, the Robin coefficient p and the data measurement u₀, are both defined on the boundary Γ only, it is natural and advantageous to reformulate the boundary value problem (1.1) as a boundary integral equation on Γ.
Let Φ(x, y) stand for the fundamental solution for the Laplacian in ℝ²:

Φ(x, y) = −(1/2π) ln|x − y|.

Then the boundary value problem (1.1) for u in Ω is equivalent to the following boundary integral equation for u on Γ (as the trace of the solution) (see e.g. [11] [12] [13]):

(2.1)  (1/2) u(x) + ∫_Γ [∂Φ(x, y)/∂ν(y)] u(y) ds(y) + ∫_Γ Φ(x, y) p(y) u(y) ds(y) = ∫_Γ Φ(x, y) g(y) ds(y),  x ∈ Γ.

In operator form, (2.1) can be written as

(2.2)  (I/2 + K + S(p·)) u = S g,

with the single- and double-layer potential operators defined by

(Sφ)(x) = ∫_Γ Φ(x, y) φ(y) ds(y),  (Kφ)(x) = ∫_Γ [∂Φ(x, y)/∂ν(y)] φ(y) ds(y),  x ∈ Γ.

Note that these operators have the following mapping properties (e.g. [13]): S : H^(−1/2)(Γ) → H^(1/2)(Γ) is bounded, and K : H^(1/2)(Γ) → H^(1/2)(Γ) is compact. It is well known that the Robin boundary value problem (1.1) admits a unique solution u in H¹(Ω) for given nonnegative model parameters p and g on Γ, and thus the equivalent boundary integral equation (2.1) yields a nonnegative solution u on Γ as well (see e.g. [11]).
Alternatively, we can use a single-layer potential to express the solution u on Ω̄ by

(2.3)  u(x) = ∫_Γ Φ(x, y) ψ(y) ds(y),  x ∈ Ω̄,

where the density function ψ on Γ solves the boundary integral equation

(2.4)  (1/2) ψ(x) + ∫_Γ [∂Φ(x, y)/∂ν(x)] ψ(y) ds(y) + p(x) ∫_Γ Φ(x, y) ψ(y) ds(y) = g(x),  x ∈ Γ.

In operator form, (2.4) becomes

(I/2 + K′ + p S) ψ = g,

where the dual operator K′ of K is given by

(K′φ)(x) = ∫_Γ [∂Φ(x, y)/∂ν(x)] φ(y) ds(y),  x ∈ Γ.

In our numerical implementation, we use the direct formulation (2.1) as the model equation that relates u and p, and we use the alternative formulation (2.3)-(2.4) only for generating synthetic data u₀, which, after the addition of random noise, are used to test our numerical algorithms for recovering the Robin coefficient p.
3. Solution Methods for the Inverse Problem
A common approach to the inverse problem of finding p on Γ₁ from a given boundary measurement u₀ of u on Γ₂ is to regard u as dependent on p through the model equation (2.2), and then solve the data-fitting nonlinear equation u(p) = u₀ on Γ₂ for p by least-squares methods (see e.g. [3] [7]). In this section, we consider two slightly different solution methods, each of which solves for u and the unknown coefficient simultaneously from the system consisting of both the model equation (2.2) and the data-fitting equation u = u₀ on Γ₂ by least squares (see e.g. [9] [11] [14]). In the iterative scheme for solving the least-squares problem, we introduce nonnegativity constraints in the form of a penalty term in the minimized objective functional, so as to steer the iterations toward nonnegative solutions.
3.1. A Linear Least-Squares Method
As in [9] [11], we introduce a new variable

(3.1)  w = p u

in place of p. Then equation (2.2) becomes linear in both u and w:

(3.2)  (I/2 + K) u + S w = S g.

Denote the restriction operator from Γ to Γ₂ by R; that is, Rφ = φ|_Γ₂ for φ defined on Γ. Then the given measurement u₀ of u on Γ₂ can be expressed as

(3.3)  R u = u₀.

We recast the inverse problem of finding p from given u₀ as directly solving for (u, w) from (3.2)-(3.3) as a linear system:

(3.4)  A z = b,  with A = [ I/2 + K , S ; R , 0 ],  z = (u, w),  b = (S g, u₀).

Here 0 denotes the zero operator from functions on Γ₁ to functions on Γ₂. Once u on Γ and w on Γ₁ are found from (3.4), the Robin coefficient p on Γ₁ can be recovered from u and w by the simple relation (3.1), i.e. p = w/u.
System (3.4) is a linear system for z = (u, w), but it is ill-posed. In addition to using the classical Tikhonov regularization method to address the ill-posedness (as in [11]), we incorporate an additional term to penalize negativity in w and encourage nonnegative solutions. Thus we seek the minimizer of the following objective functional, which consists of the system residual from (3.4), a regularization term, and a penalty for the negative part of w:

(3.5)  J(z) = ‖A z − b‖² + α ‖L z‖² + β N(w),

with regularization parameter α > 0 and penalty parameter β > 0. The regularization is defined through the operator L = diag(D, D₀), where D and D₀ are the second-order derivative operators for periodic boundary conditions (for u on the closed curve Γ) and for zero boundary conditions (for w on the segment Γ₁), respectively, and the penalty for the negative part of w is

N(w) = ∫_Γ₁ [min(w, 0)]² ds.

The function t ↦ min(t, 0) is differentiable except at t = 0, and when its derivative is needed, we use the derivative of its smooth approximation

min(t, 0) ≈ (t − √(t² + ε))/2

for some very small ε > 0 (e.g. the machine epsilon). Hence we can arrive at the first-order optimality condition of (3.5):

(3.6)  A*(A z − b) + α L*L z + (β/2) N′(w) = 0,

where A* denotes the dual of A.
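For concreteness, the negative-part penalty and a smoothed derivative can be sketched in a few lines of code. This is a minimal numpy illustration with our own function names; the smooth surrogate min(t, 0) ≈ (t − √(t² + ε))/2, with ε the machine epsilon, is one plausible reading of the approximation described in the text, not necessarily the authors' exact choice.

```python
import numpy as np

EPS = np.finfo(float).eps  # machine epsilon, as suggested in the text

def neg_part(w):
    """Negative part w_- = min(w, 0), elementwise."""
    return np.minimum(w, 0.0)

def penalty(w):
    """Penalty N(w) = ||w_-||^2: zero exactly when w is nonnegative."""
    return np.sum(neg_part(w) ** 2)

def smoothed_grad(w, eps=EPS):
    """Gradient of N via the smooth surrogate min(t,0) ~ (t - sqrt(t^2+eps))/2."""
    s = 0.5 * (w - np.sqrt(w ** 2 + eps))          # smoothed negative part
    ds = 0.5 * (1.0 - w / np.sqrt(w ** 2 + eps))   # its derivative
    return 2.0 * s * ds
```

Note that the penalty vanishes on nonnegative vectors and grows quadratically in the negative entries, so its gradient pushes negative components back toward zero while leaving nonnegative components essentially untouched.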
In (3.6), the parts from the linear system and the Tikhonov regularization are linear, but the part from the penalty term is not. Hence we devise an iterative scheme to solve (3.6) using the Gauss-Newton direction, as follows, where A z = b denotes the linear system (3.4). At each iterate z^k = (u^k, w^k), the step increment δz^k is determined by the linear equation

(3.7)  (A*A + α L*L + β D_k* D_k) δz^k = −[A*(A z^k − b) + α L*L z^k + (β/2) N′(w^k)],

where D_k denotes the derivative of w ↦ min(w, 0) at w^k, evaluated via the smooth approximation above, and the next iterate is set to be

(3.8)  z^(k+1) = z^k + δz^k.

We note that when β = 0, i.e. when the penalty term is not present, no iteration is needed, and this reduces to the direct method introduced in [11]. However, such a direct method may produce w that admits negative values, and thus requires some remedy, such as simple truncation, in order to obtain a nonnegative Robin coefficient p from (3.1) (see [9] [11]). By incorporating the penalty term in the least-squares formulation, we ensure the nonnegativity of the solution it produces, with only a few additional iterations, so the Robin coefficient p can be readily obtained from (3.1). Moreover, when the penalty term is active, it steers the iteration toward better solutions, especially in situations where the ill-posedness is more severe.
3.2. A Nonlinear Least-Squares Method
In this method, we regard both u on Γ and p on its support Γ₁ as variables to be found from the given data u₀, and cast the inverse Robin problem as finding z = (u, p) from the nonlinear system of equations consisting of both the model equation (2.2) and the data-fitting equation (3.3):

(3.9)  F(z) := ( (I/2 + K) u + S(p u) − S g , R u − u₀ ) = 0.

Here the p to be solved for refers to the part of the Robin coefficient on its support Γ₁, while in the formulations p remains defined on all of Γ with the understanding that its support is contained in Γ₁. It is noted that F is linear in u and in p individually, but not jointly in z.
To solve the nonlinear system (3.9) for z = (u, p), we consider minimization of the least-squares functional of the system residual, with the addition of both a regularization term and a penalty term:

(3.10)  J(z) = ‖F(z)‖² + α ‖L z‖² + β N(p),

with the regularization operator L and penalty N as in (3.5). Note that the case without penalty (β = 0) was considered in [14]. For the minimization of J above, we propose the Gauss-Newton iteration scheme: at each iterate z^k = (u^k, p^k), the new increment δz^k is determined by the Gauss-Newton step for J:

(3.11)  (F′(z^k)* F′(z^k) + α L*L + β D_k* D_k) δz^k = −[F′(z^k)* F(z^k) + α L*L z^k + (β/2) N′(p^k)],

where D_k is the derivative of p ↦ min(p, 0) at p^k as before, and F′(z) is the derivative operator of F given by

F′(z) = [ I/2 + K + S ∘ (p·) , S ∘ (u·) ; R , 0 ],

where ∘ denotes operator composition (here of S with multiplication by p or by u), and F′(z)* denotes the dual of F′(z). Once δz^k is found from (3.11), the next iterate is set to be

(3.12)  z^(k+1) = z^k + δz^k.
Note that the addition of the penalty term in (3.10) results in a similar iterative scheme (3.11) and adds little computational cost. As it steers the iterates to satisfy the nonnegativity constraint, it improves the quality of the solutions, as the numerical examples in the next section illustrate.
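The joint Gauss-Newton iteration can likewise be sketched in a toy setting that mimics the bilinear structure of (3.9): a model equation (B + diag(p)) u = f playing the role of (2.2), and boundary data R u = u0. All names and the specific model below are illustrative stand-ins, not the discretized integral operators.

```python
import numpy as np

def gn_bilinear(B, f, R, u0, alpha=1e-6, beta=1.0, tol=1e-10, max_iter=100):
    """Gauss-Newton for a joint system F(u,p) = 0 with the bilinear structure
    noted in the text: linear in u and in p separately, not jointly."""
    eps = np.finfo(float).eps
    n = B.shape[0]
    u, p = np.ones(n), np.zeros(n)
    for _ in range(max_iter):
        F = np.concatenate([(B + np.diag(p)) @ u - f, R @ u - u0])
        Ju = np.vstack([B + np.diag(p), R])                       # dF/du
        Jp = np.vstack([np.diag(u), np.zeros((R.shape[0], n))])   # dF/dp
        J = np.hstack([Ju, Jp])
        s = 0.5 * (p - np.sqrt(p**2 + eps))       # smoothed negative part of p
        d = 0.5 * (1.0 - p / np.sqrt(p**2 + eps))
        grad = J.T @ F
        grad[n:] += alpha * p + beta * d * s      # regularize/penalize p only
        H = J.T @ J
        H[n:, n:] += alpha * np.eye(n) + beta * np.diag(d**2)
        step = np.linalg.solve(H, -grad)
        u, p = u + step[:n], p + step[n:]
        if np.linalg.norm(step) < tol:
            break
    return u, p
```

With consistent synthetic data generated from a nonnegative coefficient, the joint iterate drives both the model residual and the data residual to zero while the penalty keeps the recovered coefficient nonnegative.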
4. Numerical Examples
Given a regular 1-periodic parametrization with counterclockwise orientation for the boundary Γ:

x(t) = (x₁(t), x₂(t)),  t ∈ [0, 1],  with x(t + 1) = x(t) and |x′(t)| ≠ 0,

the integral operators in (2.1) and (2.4) can be expressed explicitly as integral operators on [0, 1]; for instance,

(Sφ)(t) = ∫₀¹ Φ(x(t), x(τ)) |x′(τ)| φ(τ) dτ,  t ∈ [0, 1],

and similarly for K and K′, where we denote φ(t) = φ(x(t)). The kernel of S is weakly singular, while those of K and K′ are continuous. The singularity in the kernel of S can be rearranged as

ln|x(t) − x(τ)| = (1/2) ln(4 sin²(π(t − τ))) + (continuous remainder),

with a continuous remainder kernel, so that integrals involving this singularity can be dealt with by exact integration, as we employ Nyström's method with trigonometric interpolation on regular grids (see e.g. [12], Chapter 12).
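The kernel splitting can be checked numerically. The sketch below, with an illustrative ellipse parametrization of our own choosing, confirms that after subtracting (1/2) ln(4 sin²(π(t − τ))) from ln|x(t) − x(τ)|, the remainder stays continuous as t → τ, with diagonal limit ln(|x′(τ)|/(2π)).

```python
import numpy as np

def x(t, a=1.0, b=0.5):
    """1-periodic parametrization of an ellipse (illustrative choice of a, b)."""
    return np.array([a * np.cos(2 * np.pi * t), b * np.sin(2 * np.pi * t)])

def remainder(t, tau):
    """Continuous part of ln|x(t)-x(tau)| after removing the log singularity."""
    dist = np.linalg.norm(x(t) - x(tau))
    return np.log(dist) - 0.5 * np.log(4.0 * np.sin(np.pi * (t - tau)) ** 2)

def remainder_limit(tau, a=1.0, b=0.5):
    """Diagonal limit ln(|x'(tau)| / (2*pi)) of the remainder kernel."""
    dx = 2 * np.pi * np.array([-a * np.sin(2 * np.pi * tau),
                               b * np.cos(2 * np.pi * tau)])
    return np.log(np.linalg.norm(dx) / (2 * np.pi))
```

Since |x(t) − x(τ)| ≈ |x′(τ)||t − τ| and 2|sin(π(t − τ))| ≈ 2π|t − τ| for t near τ, the remainder tends to ln(|x′(τ)|/(2π)), which is what makes Nyström quadrature with the exactly integrable logarithmic weight applicable.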
In our examples, we take Ω to be an elliptic region for the sake of simplicity, parametrized below. We should note that the choice of the aspect ratio of the ellipse gives rise to a more severely ill-posed case of the Robin inverse problem than most cases in other studies; yet, as observed consistently in our numerical experiments and demonstrated in the examples below, the inclusion of nonnegativity constraints in the iterative schemes does help to combat the ill-posedness effectively and produce markedly better results for p.
The standard parametrization of the ellipse is used:

x(t) = (a cos 2πt, b sin 2πt),  t ∈ [0, 1].

The two segments Γ₁ (where p is defined) and Γ₂ (where u₀ is given) are chosen as disjoint arcs of the boundary, corresponding to non-overlapping subintervals of [0, 1]. The input function g is set as a smooth function supported away from Γ₁, and the true profile for p as a smooth nonnegative function supported on Γ₁.
A uniform discretization grid is used for the discretization of the integral operators and of the systems (3.4) and (3.9). To generate the synthetic data u₀ on Γ₂, we solve the discretized alternative formulation (2.3)-(2.4), and add to the result uniformly distributed random noise at a prescribed noise level relative to the L2 norm of the data.

In both iterative methods, we stop the iteration when the L2 norm of the Gauss-Newton step δz^k from (3.7) or (3.11) is less than 10^(−10).
Our selection of the regularization parameter α is guided by the idea of the normalized cumulative periodogram (NCP) method (see e.g. [15]), which is based on monitoring the noise pattern in the data-fitting residual from (3.4) or (3.9). If the residual is dominated by white noise, then the current α is a suitable choice; if it is dominated by high-frequency noise, α should be increased; and if it is dominated by a low-frequency signal, α should be reduced ([16]). See also ([10], Chapter 7) for other methods of selecting a proper regularization parameter value for ill-posed problems.
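A minimal sketch of the NCP diagnostic follows. This is our own illustration of the general idea, not the exact procedure of [15]: compute the cumulative periodogram of the residual and measure its deviation from the straight line expected for white noise.

```python
import numpy as np

def ncp(residual):
    """Normalized cumulative periodogram of a real residual vector."""
    power = np.abs(np.fft.rfft(residual))[1:] ** 2   # drop the zero frequency
    c = np.cumsum(power)
    return c / c[-1]

def ncp_deviation(residual):
    """Max deviation of the NCP from the white-noise straight line.
    Small deviation suggests the current regularization parameter is
    acceptable; a large deviation signals leftover signal or colored noise
    in the residual, per the rule of thumb quoted above."""
    curve = ncp(residual)
    k = len(curve)
    line = np.arange(1, k + 1) / k
    return np.max(np.abs(curve - line))
```

A white-noise residual produces an NCP close to the diagonal, while a residual dominated by a low-frequency signal (e.g. a single sine) piles all its power into the first few frequencies and deviates strongly.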
The performance of the algorithms is less sensitive to the choice of the penalty parameter β, and they work very similarly for values of β in the range from 10^(−1) to 10². We fix β within this range in our experiments.
We first experiment with the linear least-squares method (3.7)-(3.8); the result of one such example is presented in Figure 1, with α selected by the NCP guidance described above. Starting from the zero initial guess, it typically takes only a few iterations (fewer than 10) to satisfy the stopping criterion throughout our experiments. For comparison, we also present the result without the penalty term (β = 0), which is direct and requires no iterations, but must be followed by converting from (u, w) to p through a truncated version of (3.1), p = max(w, 0)/max(u, ε) with ε the machine epsilon, as in [11]. As seen, the few iterations needed for the nonnegativity constraints have indeed helped to improve the overall quality of the recovered coefficient p.
We then test the nonlinear least-squares method (3.11)-(3.12), with the result of one such example presented in Figure 2; the parameters are selected in the same manner in this case. Typically it takes about 10 iterations from the initial guess to satisfy the stopping criterion in our experiments. For comparison, we also carry out parallel calculations without the penalty term (by setting β = 0), and as expected, the resulting solution assumes some negative values due to the absence of nonnegativity constraints. As can be seen, adding the penalty term not only ensures the nonnegativity of the recovered coefficient p, but also improves its quality.

Figure 1. Recovered p by the linear least-squares method with nonnegativity constraints (solid), and by the direct method with truncation (dotted), in comparison with the true profile (dashed).

Figure 2. Recovered p by the nonlinear least-squares method, with nonnegativity constraints (solid) and without (dotted), in comparison with the true profile (dashed).
Throughout our numerical experiments, we have consistently observed the effectiveness of this approach in dealing with the ill-posedness of the Robin inverse problem and in improving recovery results, achieved within the same iterative framework and with little additional computational cost.
We conclude with some final remarks. 1) It is possible to fine-tune these least-squares methods by introducing different weights between the model-equation residual and the data-fitting residual in (3.4) or (3.9), but at the expense of one more parameter (the weight ratio) to adjust, as studied in [14]. Even in this setting, the penalty strategy proposed here remains readily applicable, and the penalty parameter β is generally less sensitive and easier to determine than the weight ratio and the regularization parameter. 2) In the more common approach, where the dependence of u on p is determined exactly by solving the model equation (1.1) or (2.2) and only the data-fitting residual is minimized, it is still possible to include the penalty strategy proposed here to help maintain the nonnegativity of p, and it is reasonable to expect improvements in the quality of the recovered p in more ill-posed situations, as we have experienced and reported here.
Acknowledgements
This work was supported by the key research projects of general universities in Guangdong Province (2019KZDXM034), and the basic research and applied basic research projects in Guangdong Province (Projects of Guangdong-Hong Kong-Macao Center for Applied Mathematics) (2020B1515310018).