Constrained Shape Derivative for the Spectral Laplace-Dirichlet Problem Using the Minimax Method

Abstract

In shape optimization, one seeks to adjust the shape of a domain in order to achieve a goal. For example, the sound of a drum depends on the shape of its membrane through the eigenvalues of the Laplacian, and the shape derivative is the tool used to adjust this shape. The shape derivative of an eigenvalue is widely used in shape optimization, in particular when one wants to minimize or maximize an eigenvalue, often the first eigenvalue of an elliptic operator, typically the Laplacian, under geometric constraints. This paper presents a simple method for calculating the shape derivative of a Dirichlet eigenvalue, as well as the shape derivative of an objective function constrained by an eigenvalue problem.


Ngom, M., Kourouma, B. and Faye, I. (2025) Constrained Shape Derivative for the Spectral Laplace-Dirichlet Problem Using the Minimax Method. Journal of Applied Mathematics and Physics, 13, 2377-2394. doi: 10.4236/jamp.2025.137136.

1. Introduction

In modern applied mathematics, more and more problems involve not just finding a function that satisfies an equation, but determining the very shape of the domain in which that function lives. These shapes are no longer fixed: they become variables in their own right, malleable and deformable, which we seek to adjust to achieve a given objective. Imagine a musical instrument, a drum for example. Its timbre depends closely on the shape of its membrane: altering its contour, even slightly, changes its sound. Behind this acoustic intuition lies a profound mathematical truth: the eigenvalues of the Laplacian, which govern the frequencies of vibration, depend on the geometry of the domain. If we want to optimize this geometry to obtain a deeper, higher or purer sound, we need to know how these eigenvalues change when we modify the shape of the domain. This is where the shape derivative comes in. There are several works in the literature on the shape derivative of the first eigenvalue; these include [1]-[6]. Here, we use other techniques to calculate the shape derivative of the first eigenvalue of the Laplace-Dirichlet problem. Next, we calculate the shape derivative of an objective function, taking the spectral problem as a constraint. Readers interested in the approach developed here can consult the work of M. C. Delfour [7] [8] or our own [9] [10]. The shape derivative is a mathematical tool used to study the infinitesimal variation of a functional $J(\Omega)$ when the domain is slightly deformed. This deformation is generally given by a vector field $V$ which defines a family of domains $\Omega_\epsilon = (I+\epsilon V)(\Omega)$ for $\epsilon$ small, or more generally $\Omega_\epsilon = T_\epsilon(\Omega)$, where the family of diffeomorphisms $(T_\epsilon)$ is induced by the vector field $V$. The shape derivative of $J(\Omega)$ is then defined by

$$ DJ(\Omega,V) := \frac{d}{d\epsilon} J(\Omega_\epsilon)\Big|_{\epsilon=0} = \lim_{\epsilon\to 0}\frac{J(\Omega_\epsilon)-J(\Omega)}{\epsilon}. $$

This is the analogue of the directional derivative, but in the space of shapes (domains). The aim of this work is to calculate, using the min-max method [7], the shape derivative of the objective function $J(\Omega)$ defined by

$$ J(\Omega) = \int_\Omega |\nabla\eta - A|^2\,dx + \int_\Omega |\eta-\eta_d|^2\,dx \qquad (1.1) $$

where η is the solution of the following eigenvalue problem

$$ \begin{cases} -\Delta\eta - \lambda\eta = 0 & \text{in } \Omega,\\ \eta = 0 & \text{on } \partial\Omega, \end{cases} \qquad (1.2) $$

where $\Omega$ is a regular open domain of $\mathbb{R}^N$, $A$ a given vector of $\mathbb{R}^N$ and $\eta_d$ a given function. The existence and uniqueness for this problem are detailed in [11]. Note that the problem under study has no source term; in other words, $\lambda=\lambda(\Omega)$ is an eigenvalue and therefore depends on the domain $\Omega$. The first step is to determine the derivative of this eigenvalue with respect to the domain. The rest of the manuscript is organized as follows: Section 2 presents a reminder of the methodology used to calculate the derivative. Section 3 is devoted to the study of the shape derivative of the volume function. Numerical simulations are proposed in Section 4, while the final section is devoted to results relating to the shape derivative in the context of the eigenvalue problem.
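To fix ideas on how a Dirichlet eigenvalue depends on the geometry, recall the classical explicit case of a rectangle (a standard example, added here only for orientation): on $\Omega_{a,b}=(0,a)\times(0,b)$ the eigenpairs of (1.2) are

$$ \lambda_{m,n}(\Omega_{a,b}) = \pi^2\Big(\frac{m^2}{a^2}+\frac{n^2}{b^2}\Big), \qquad \eta_{m,n}(x,y)=\sin\Big(\frac{m\pi x}{a}\Big)\sin\Big(\frac{n\pi y}{b}\Big), \qquad m,n\geq 1, $$

so the first eigenvalue $\lambda_1=\pi^2(1/a^2+1/b^2)$ changes as soon as the side lengths are perturbed: the sound of the drum indeed depends on its shape.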

2. Methodology for Calculating Shape Derivatives

In this section, we describe how to calculate the shape derivative using the min-max approach, see e.g. [7] [9]. We begin with the following definitions and notation.

Definition 2.1 A Lagrangian function is a function of the form

$$ (\epsilon,x,y)\mapsto \mathcal{L}(\epsilon,x,y): [0,\tau]\times X\times Y\to\mathbb{R},\qquad \tau>0, $$

where $X$ is a vector space, $Y$ is a non-empty subset of a vector space, and the function $y\mapsto \mathcal{L}(\epsilon,x,y)$ is affine.

We associate with the parameter $\epsilon$ the parametrized minimax

$$ \epsilon\mapsto g(\epsilon) = \inf_{x\in X}\sup_{y\in Y}\mathcal{L}(\epsilon,x,y): [0,\tau]\to\mathbb{R} \qquad\text{and}\qquad dg(0)=\lim_{\epsilon\to 0^+}\frac{g(\epsilon)-g(0)}{\epsilon}. $$

When the limits exist, we will use the following notations

$$ d_\epsilon\mathcal{L}(0,x,y) = \lim_{\epsilon\to 0^+}\frac{\mathcal{L}(\epsilon,x,y)-\mathcal{L}(0,x,y)}{\epsilon}, $$

$$ \forall\varphi\in X,\quad d_x\mathcal{L}(\epsilon,x,y;\varphi) = \lim_{\theta\to 0^+}\frac{\mathcal{L}(\epsilon,x+\theta\varphi,y)-\mathcal{L}(\epsilon,x,y)}{\theta}, $$

$$ \forall\phi\in Y,\quad d_y\mathcal{L}(\epsilon,x,y;\phi) = \lim_{\theta\to 0^+}\frac{\mathcal{L}(\epsilon,x,y+\theta\phi)-\mathcal{L}(\epsilon,x,y)}{\theta}. $$

The state equation at $\epsilon\geq 0$ is:

Find $x^\epsilon\in X$ such that for all $\psi\in Y$, $\quad d_y\mathcal{L}(\epsilon,x^\epsilon,0;\psi)=0. \qquad (2.1)$

The set of states $x^\epsilon$ at $\epsilon\geq 0$ is denoted

$$ E(\epsilon) = \{\, x^\epsilon\in X : \forall\psi\in Y,\ d_y\mathcal{L}(\epsilon,x^\epsilon,0;\psi)=0 \,\}. \qquad (2.2) $$

The adjoint equation at $\epsilon\geq 0$ is:

Find $p^\epsilon\in Y$ such that for all $\varphi\in X$, $\quad d_x\mathcal{L}(\epsilon,x^\epsilon,p^\epsilon;\varphi)=0. \qquad (2.3)$

The set of solutions $p^\epsilon$ at $\epsilon\geq 0$ is denoted

$$ Y(\epsilon,x^\epsilon) = \{\, p^\epsilon\in Y : \forall\varphi\in X,\ d_x\mathcal{L}(\epsilon,x^\epsilon,p^\epsilon;\varphi)=0 \,\}. \qquad (2.4) $$

Finally, the set of minimizers for the minimax is given by

$$ X(\epsilon) = \Big\{\, x^\epsilon\in X : g(\epsilon)=\inf_{x\in X}\sup_{y\in Y}\mathcal{L}(\epsilon,x,y)=\sup_{y\in Y}\mathcal{L}(\epsilon,x^\epsilon,y) \,\Big\}. \qquad (2.5) $$

Lemma 2.1 (Constrained infimum and minimax)

We have the following assertions

1) $\displaystyle \inf_{x\in X}\sup_{y\in Y}\mathcal{L}(\epsilon,x,y)=\inf_{x\in E(\epsilon)}\mathcal{L}(\epsilon,x,0)$.

2) The minimax $g(\epsilon)=+\infty$ if and only if $E(\epsilon)=\emptyset$, and in this case we have $X(\epsilon)=X$.

3) If $E(\epsilon)\neq\emptyset$, then

$$ X(\epsilon)=\Big\{\, x^\epsilon\in E(\epsilon) : \mathcal{L}(\epsilon,x^\epsilon,0)=\inf_{x\in E(\epsilon)}\mathcal{L}(\epsilon,x,0) \,\Big\}\subset E(\epsilon) $$

and $g(\epsilon)<+\infty$.

Proof. See [7]. ■

First Results of the Proposed Approach

We need the following assumption for everything that follows:

Hypothesis (H0)

Let X be a vector space.

1) For all $\epsilon\in[0,\tau]$, $x^0\in X(0)$, $x^\epsilon\in X(\epsilon)$ and $y\in Y$, the function $\theta\mapsto \mathcal{L}(\epsilon,x^0+\theta(x^\epsilon-x^0),y): [0,1]\to\mathbb{R}$ is absolutely continuous. This implies that for almost all $\theta$ the derivative exists, is equal to $d_x\mathcal{L}(\epsilon,x^0+\theta(x^\epsilon-x^0),y;x^\epsilon-x^0)$, and the function is the integral of its derivative. In particular,

$$ \mathcal{L}(\epsilon,x^\epsilon,y) = \mathcal{L}(\epsilon,x^0,y) + \int_0^1 d_x\mathcal{L}\big(\epsilon,x^0+\theta(x^\epsilon-x^0),y;x^\epsilon-x^0\big)\,d\theta. $$

2) For all $\epsilon\in[0,\tau]$, $x^0\in X(0)$, $x^\epsilon\in X(\epsilon)$, $y\in Y$ and $\phi\in X$, and for almost all $\theta\in[0,1]$, $d_x\mathcal{L}(\epsilon,x^0+\theta(x^\epsilon-x^0),y;\phi)$ exists, and the function $\theta\mapsto d_x\mathcal{L}(\epsilon,x^0+\theta(x^\epsilon-x^0),y;\phi)$ belongs to $L^1(0,1)$.

Definition 2.2 Given $x^0\in X(0)$ and $x^\epsilon\in X(\epsilon)$, the averaged adjoint equation is:

Find $y^\epsilon\in Y$ such that for all $\phi\in X$, $\quad \displaystyle\int_0^1 d_x\mathcal{L}\big(\epsilon,x^0+\theta(x^\epsilon-x^0),y^\epsilon;\phi\big)\,d\theta = 0,$

and the set of its solutions is denoted $Y(\epsilon,x^0,x^\epsilon)$.

$Y(0,x^0,x^0)$ clearly reduces to the set of standard adjoint states $Y(0,x^0)$ at $\epsilon=0$.

Theorem 2.2 Consider the Lagrangian functional

$$ (\epsilon,x,y)\mapsto \mathcal{L}(\epsilon,x,y): [0,\tau]\times X\times Y\to\mathbb{R},\qquad \tau>0, $$

where $X$ and $Y$ are vector spaces and the function $y\mapsto \mathcal{L}(\epsilon,x,y)$ is affine. Assume that (H0) and the following hypotheses are satisfied:

(H1) for all $\epsilon\in[0,\tau]$, $g(\epsilon)$ is finite and $X(\epsilon)=\{x^\epsilon\}$ and $Y(0,x^0)=\{p^0\}$ are singletons,

(H2) $d_\epsilon\mathcal{L}(0,x^0,p^0)$ exists,

(H3) the following limit exists:

$$ R(x^0,p^0) = \lim_{\epsilon\to 0^+}\int_0^1 d_x\mathcal{L}\Big(\epsilon,x^0+\theta(x^\epsilon-x^0),p^0;\frac{x^\epsilon-x^0}{\epsilon}\Big)\,d\theta. $$

Then $dg(0)$ exists and $dg(0)=d_\epsilon\mathcal{L}(0,x^0,p^0)+R(x^0,p^0)$.

Proof. Recall that $g(\epsilon)=\mathcal{L}(\epsilon,x^\epsilon,y)$ and $g(0)=\mathcal{L}(0,x^0,y)$ for each $y\in Y$ (indeed, since $x^\epsilon\in X(\epsilon)\subset E(\epsilon)$ and $\mathcal{L}$ is affine in $y$, the map $y\mapsto\mathcal{L}(\epsilon,x^\epsilon,y)$ is constant). Then, for the standard adjoint state $p^0$ at $\epsilon=0$,

$$ g(\epsilon)-g(0) = \mathcal{L}(\epsilon,x^\epsilon,p^0)-\mathcal{L}(\epsilon,x^0,p^0) + \big(\mathcal{L}(\epsilon,x^0,p^0)-\mathcal{L}(0,x^0,p^0)\big). $$

Dividing by $\epsilon>0$, we obtain

$$ \frac{g(\epsilon)-g(0)}{\epsilon} = \frac{\mathcal{L}(\epsilon,x^\epsilon,p^0)-\mathcal{L}(\epsilon,x^0,p^0)}{\epsilon} + \frac{\mathcal{L}(\epsilon,x^0,p^0)-\mathcal{L}(0,x^0,p^0)}{\epsilon} = \int_0^1 d_x\mathcal{L}\Big(\epsilon,x^0+\theta(x^\epsilon-x^0),p^0;\frac{x^\epsilon-x^0}{\epsilon}\Big)\,d\theta + \frac{\mathcal{L}(\epsilon,x^0,p^0)-\mathcal{L}(0,x^0,p^0)}{\epsilon}. $$

Passing to the limit as $\epsilon\to 0^+$, we obtain

$$ dg(0) = \lim_{\epsilon\to 0^+}\int_0^1 d_x\mathcal{L}\Big(\epsilon,x^0+\theta(x^\epsilon-x^0),p^0;\frac{x^\epsilon-x^0}{\epsilon}\Big)\,d\theta + d_\epsilon\mathcal{L}(0,x^0,p^0) = d_\epsilon\mathcal{L}(0,x^0,p^0) + R(x^0,p^0). \qquad\blacksquare $$

Corollary 2.3 Consider the Lagrangian functional

$$ (\epsilon,x,y)\mapsto \mathcal{L}(\epsilon,x,y): [0,\tau]\times X\times Y\to\mathbb{R},\qquad \tau>0, $$

where $X$ and $Y$ are vector spaces and the function $y\mapsto \mathcal{L}(\epsilon,x,y)$ is affine. Assume that (H0) and the following assumptions are satisfied:

(H1a) for all $\epsilon\in[0,\tau]$, $X(\epsilon)\neq\emptyset$, $g(\epsilon)$ is finite, and for each $x\in X(0)$, $Y(0,x)\neq\emptyset$,

(H2a) for all $x\in X(0)$ and $p\in Y(0,x)$, $d_\epsilon\mathcal{L}(0,x,p)$ exists,

(H3a) there exist $x^0\in X(0)$ and $p^0\in Y(0,x^0)$ such that the following limit exists:

$$ R(x^0,p^0) = \lim_{\epsilon\to 0^+}\int_0^1 d_x\mathcal{L}\Big(\epsilon,x^0+\theta(x^\epsilon-x^0),p^0;\frac{x^\epsilon-x^0}{\epsilon}\Big)\,d\theta. $$

Then $dg(0)$ exists and there exist $x^0\in X(0)$ and $p^0\in Y(0,x^0)$ such that $dg(0)=d_\epsilon\mathcal{L}(0,x^0,p^0)+R(x^0,p^0)$.

Proposition 2.4 Let $X$ and $Y$ be two vector spaces and $\mathcal{L}: [0,\tau]\times X\times Y\to\mathbb{R}$ be a Lagrangian. If the function $y\mapsto \mathcal{L}(\epsilon,x,y)$ is affine, then for all $y,\psi\in Y$ we have

$$ d_y\mathcal{L}(\epsilon,x,y;\psi) = \mathcal{L}(\epsilon,x,\psi)-\mathcal{L}(\epsilon,x,0) = d_y\mathcal{L}(\epsilon,x,0;\psi). $$

Proof. Saying that the Lagrangian is affine in $y$ means that

$$ \mathcal{L}(\epsilon,x,y) = \mathcal{L}_1(\epsilon,x) + \mathcal{L}_2(\epsilon,x,y), \quad\text{where } \mathcal{L}_2 \text{ is linear in } y. $$

Since $\mathcal{L}_1$ does not depend on $y$, we have

$$ d_y\mathcal{L}(\epsilon,x,y;\psi) = d_y\mathcal{L}_2(\epsilon,x,y;\psi). $$

By definition,

$$ d_y\mathcal{L}_2(\epsilon,x,y;\psi) = \lim_{\theta\to 0}\frac{\mathcal{L}_2(\epsilon,x,y+\theta\psi)-\mathcal{L}_2(\epsilon,x,y)}{\theta} = \lim_{\theta\to 0}\frac{\mathcal{L}_2(\epsilon,x,y)+\theta\mathcal{L}_2(\epsilon,x,\psi)-\mathcal{L}_2(\epsilon,x,y)}{\theta} \quad(\text{because } \mathcal{L}_2 \text{ is linear in } y) $$
$$ = \lim_{\theta\to 0}\frac{\theta\,\mathcal{L}_2(\epsilon,x,\psi)}{\theta} = \mathcal{L}_2(\epsilon,x,\psi). $$

So for all $y\in Y$,

$$ d_y\mathcal{L}(\epsilon,x,y;\psi) = d_y\mathcal{L}_2(\epsilon,x,y;\psi) = \mathcal{L}_2(\epsilon,x,\psi), \qquad (2.6) $$

and in particular, taking $y=0$,

$$ d_y\mathcal{L}(\epsilon,x,0;\psi) = d_y\mathcal{L}_2(\epsilon,x,0;\psi) = \mathcal{L}_2(\epsilon,x,\psi). \qquad (2.7) $$

Now let us calculate $\mathcal{L}(\epsilon,x,\psi)-\mathcal{L}(\epsilon,x,0)$:

$$ \mathcal{L}(\epsilon,x,\psi)-\mathcal{L}(\epsilon,x,0) = \mathcal{L}_1(\epsilon,x)+\mathcal{L}_2(\epsilon,x,\psi)-\mathcal{L}_1(\epsilon,x)-\mathcal{L}_2(\epsilon,x,0) = \mathcal{L}_2(\epsilon,x,\psi)-\mathcal{L}_2(\epsilon,x,0). $$

But $\mathcal{L}_2(\epsilon,x,0)=0$ because $\mathcal{L}_2$ is linear in $y$, and it follows that

$$ \mathcal{L}(\epsilon,x,\psi)-\mathcal{L}(\epsilon,x,0) = \mathcal{L}_2(\epsilon,x,\psi). \qquad (2.8) $$

The relations (2.6), (2.7) and (2.8) give the result. ■
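To fix ideas, here is a minimal model example (ours, added only for illustration) of a Lagrangian of the type used above. For the tracking functional $J(\Omega)=\int_\Omega|u-u_d|^2dx$ constrained by the Poisson problem $-\Delta u=f$ in $\Omega$, $u=0$ on $\partial\Omega$, one may take

$$ \mathcal{L}(\epsilon,\varphi,\psi) = \int_\Omega|\varphi-u_d|^2\,dx + \int_\Omega\big(\nabla\varphi\cdot\nabla\psi - f\psi\big)\,dx, \qquad (\varphi,\psi)\in H_0^1(\Omega)\times H_0^1(\Omega), $$

which is affine (indeed linear) in $\psi$. Then $d_y\mathcal{L}(\epsilon,\varphi,0;\psi)=\int_\Omega(\nabla\varphi\cdot\nabla\psi-f\psi)\,dx=0$ for all $\psi$ is exactly the weak state equation, as in (2.1), while $d_x\mathcal{L}(\epsilon,\varphi,\psi;\hat\varphi)=\int_\Omega\big(2(\varphi-u_d)\hat\varphi+\nabla\hat\varphi\cdot\nabla\psi\big)\,dx=0$ for all $\hat\varphi$ is the adjoint equation, as in (2.3). The dependence on $\epsilon$ enters, as in Sections 3 and 5, through the transport of the perturbed domain $\Omega_\epsilon$ back to $\Omega$.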

3. Shape Derivative for the Volume Function via Min-Max Approach

The following example is a special case where we consider a shape functional independent of a PDE constraint. It is given in [5] by

$$ J(\Omega) = |\Omega|\,\lambda(\Omega), \qquad (3.1) $$

where $|\Omega|$ is the volume of $\Omega$ and $\lambda(\Omega)=\lambda$ is the first eigenvalue of the eigenvalue problem

$$ -\Delta\eta = \lambda\eta \ \text{ in } \Omega \qquad\text{and}\qquad \eta=0 \ \text{ on } \partial\Omega. \qquad (3.2) $$

This problem has a sequence of real positive eigenvalues

$$ 0 < \lambda_1(\Omega) < \lambda_2(\Omega) \leq \cdots, $$

with associated eigenfunctions $\eta_i\in H_0^1(\Omega)$, orthonormal in $L^2(\Omega)$, i.e.

$$ \int_\Omega \eta_i\,\eta_j\,dx = \delta_{ij}. $$

In this case, we need the volume expression of the derivative of the eigenvalue. For this we have the following proposition.

Proposition 3.1 Let $\Omega$ be a bounded open domain. Assume that the first eigenvalue $\lambda=\lambda(\Omega)$ is simple. Then $\lambda(\Omega)$ is shape differentiable and the Eulerian derivative is

$$ D\lambda(\Omega,V) = \lambda'(0) = \int_\Omega\Big( -2\,\nabla\eta_0\cdot DV(0)\nabla\eta_0 + \operatorname{div}V(0)\big(|\nabla\eta_0|^2-\lambda_0\eta_0^2\big) \Big)\,dx. \qquad (3.3) $$

If, in addition, $\Omega$ is convex or of class $C^2$, then

$$ D\lambda(\Omega,V) = \lambda'(0) = -\int_{\partial\Omega}\Big(\frac{\partial\eta_0}{\partial n}\Big)^2\, V\cdot n\,d\sigma. \qquad (3.4) $$

Proof. A proof of this proposition can also be found in the work of A. Henrot and M. Pierre [4] or that of S. Zhu [5]. We only prove the derivative formula in volume form (3.3); the reader can also consult [6]. In this proof we use a rigorous argument based on the min-max method introduced by M. C. Delfour in [7]. To do this, we consider the family of diffeomorphisms $(T_\epsilon)_{\epsilon\geq 0}$ and define the perturbed domain by $\Omega_\epsilon=T_\epsilon(\Omega)$. The perturbed state $\eta^\epsilon$ is the solution of the variational problem

$$ \int_{\Omega_\epsilon} \nabla\eta^\epsilon\cdot\nabla v\,dx = \int_{\Omega_\epsilon} \lambda^\epsilon\,\eta^\epsilon v\,dx, \qquad \forall v\in H_0^1(\Omega_\epsilon). $$

A change of variables allows the problem to be transported to the reference domain $\Omega$. Thus, writing $\eta_\epsilon=\eta^\epsilon\circ T_\epsilon$ and $\lambda_\epsilon=\lambda^\epsilon\circ T_\epsilon$, the function $\eta_\epsilon$ is the solution of the following variational equation:

$$ \int_\Omega B(\epsilon)\nabla\eta_\epsilon\cdot\nabla v\,dx = \int_\Omega \lambda_\epsilon\,\eta_\epsilon v\,J(\epsilon)\,dx. \qquad (3.5) $$

For $\varphi=\eta_\epsilon$,

$$ \int_\Omega B(\epsilon)\nabla\varphi\cdot\nabla v\,dx = \int_\Omega \lambda_\epsilon\,\varphi v\,J(\epsilon)\,dx. \qquad (3.6) $$

Differentiating with respect to $\epsilon$ and taking $v=\varphi$, we obtain

$$ \int_\Omega B'(\epsilon)\nabla\varphi\cdot\nabla\varphi\,dx = \int_\Omega (\lambda_\epsilon)'\,\varphi^2 J(\epsilon)\,dx + \int_\Omega \lambda_\epsilon\,\varphi^2\operatorname{div}V(\epsilon)\,J(\epsilon)\,dx. \qquad (3.7) $$

(The terms coming from the $\epsilon$-dependence of $\varphi=\eta_\epsilon$ cancel on both sides, thanks to the state Equation (3.6) tested against $\partial_\epsilon\eta_\epsilon$.)

At $\epsilon=0$, with $\varphi=\eta_0$,

$$ \int_\Omega B'(0)\nabla\eta_0\cdot\nabla\eta_0\,dx = \int_\Omega (\lambda_\epsilon)'(0)\,\eta_0^2\,dx + \int_\Omega \lambda_0\,\eta_0^2\operatorname{div}V(0)\,dx. \qquad (3.8) $$

Using the normalization condition $\int_\Omega\eta_0^2\,dx=1$, we get

$$ (\lambda_\epsilon)'(0) = \int_\Omega B'(0)\nabla\eta_0\cdot\nabla\eta_0\,dx - \lambda_0\int_\Omega \eta_0^2\operatorname{div}V(0)\,dx, \qquad (3.9) $$

with $B'(0)=\operatorname{div}V(0)\,I - DV(0)^{*} - DV(0)$. Using the fact that

$$ \big(DV(0)^{*}+DV(0)\big)\nabla\eta_0\cdot\nabla\eta_0 = 2\,\nabla\eta_0\cdot DV(0)\nabla\eta_0, $$

we obtain

$$ (\lambda_\epsilon)'(0) = \int_\Omega\Big( -2\,\nabla\eta_0\cdot DV(0)\nabla\eta_0 + \operatorname{div}V(0)\big(|\nabla\eta_0|^2-\lambda_0\eta_0^2\big) \Big)\,dx, \qquad (3.10) $$

which is exactly (3.3). ■
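As a quick consistency check of (3.4) (a worked example added here for illustration; it is not part of the proof), take for $\Omega$ the unit ball and for $V$ the dilation field $V(x)=x$, so that $V\cdot n=1$ on $\partial\Omega$. Then (3.4) gives

$$ D\lambda(\Omega,V) = -\int_{\partial\Omega}\Big(\frac{\partial\eta_0}{\partial n}\Big)^2 d\sigma = -2\lambda_0, $$

where the last equality is the classical Rellich identity $\int_{\partial\Omega}(x\cdot n)(\partial_n\eta_0)^2\,d\sigma = 2\lambda_0\int_\Omega\eta_0^2\,dx$ for a normalized Dirichlet eigenfunction. This agrees with the scaling law $\lambda\big((1+\epsilon)\Omega\big)=\lambda(\Omega)/(1+\epsilon)^2$, whose derivative at $\epsilon=0$ is also $-2\lambda_0$.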

Proposition 3.2 Let $J$ be the objective function defined by $J(\Omega)=|\Omega|\lambda(\Omega)$. Then the derivative of $J$ at $\Omega$ in the direction $V$ is given by

$$ DJ(\Omega,V) = \int_\Omega\Big[\, |\Omega|\Big( -2\,\nabla\eta_0\cdot DV\nabla\eta_0 + \operatorname{div}V\big(|\nabla\eta_0|^2-\lambda_0\eta_0^2\big) \Big) + \lambda_0\operatorname{div}V \,\Big]\,dx. \qquad (3.11) $$

Proof. Consider the objective function

$$ J(\Omega) = |\Omega|\,\lambda(\Omega). \qquad (3.12) $$

First, we know that the derivative of the volume functional is given by

$$ |\Omega|'(\Omega,V(0)) = \int_\Omega \operatorname{div}V(0)\,dx. $$

Indeed, by definition of the shape derivative,

$$ |\Omega|'(\Omega,V(0)) = \lim_{\epsilon\to 0}\frac{|\Omega_\epsilon|-|\Omega|}{\epsilon} = \lim_{\epsilon\to 0}\frac{1}{\epsilon}\Big( \int_{\Omega_\epsilon}1\,dx - \int_\Omega 1\,dx \Big). $$

We use a change of variables to transform the domain $\Omega_\epsilon$ into $\Omega$:

$$ |\Omega|'(\Omega,V(0)) = \lim_{\epsilon\to 0}\int_\Omega\frac{J(\epsilon)-1}{\epsilon}\,dx = \lim_{\epsilon\to 0}\int_\Omega\frac{J(\epsilon)-J(0)}{\epsilon}\,dx = \int_\Omega J'(0)\,dx = \int_\Omega \operatorname{div}V(0)\circ T(0)\,J(0)\,dx. $$

Since $T(0)=I$ and $J(0)=1$, we indeed have $|\Omega|'(\Omega,V(0))=\int_\Omega\operatorname{div}V(0)\,dx$. Finally, by the product rule $DJ(\Omega,V)=\lambda(\Omega)\,|\Omega|'(\Omega,V)+|\Omega|\,D\lambda(\Omega,V)$ and Proposition 3.1, the derivative of the objective function $J$ is given by

$$ DJ(\Omega,V) = \int_\Omega\Big[\, |\Omega|\Big( -2\,\nabla\eta_0\cdot DV\nabla\eta_0 + \operatorname{div}V\big(|\nabla\eta_0|^2-\lambda_0\eta_0^2\big) \Big) + \lambda_0\operatorname{div}V \,\Big]\,dx. \qquad\blacksquare $$

4. Numerical Tests

In shape optimization, the aim is to find the most efficient shape for a certain criterion within an admissible set of domains. In this section, we present numerical tests for minimizing the objective function defined in (3.1), taking into account that $\lambda$ is the first eigenvalue of the Dirichlet Laplacian.

Shape Optimization Implementation Details

The interest of the shape derivative is that it gives a descent direction along which to evolve the domain in order to decrease the objective function. The idea is to find a vector field $V$ such that $DJ(\Omega,V)\leq 0$. This can be achieved by solving an auxiliary boundary value problem of the type

$$ \text{Find } V\in\mathcal{H}; \qquad b(V,\phi) = -DJ(\Omega,\phi) \quad\text{for all } \phi\in\mathcal{H}, $$

for a suitable Hilbert space $\mathcal{H}$. Here $b(\cdot,\cdot):\mathcal{H}\times\mathcal{H}\to\mathbb{R}$ is a positive definite bilinear form that we are free to choose. Then $V$ is a descent direction because

$$ DJ(\Omega,V) = -b(V,V) < 0 \quad\text{for } V\neq 0. $$

We present a shape gradient descent algorithm, based on a volume expression rather than a boundary expression. In this case, it will be essential to update the finite element mesh after each iteration. To this end, we follow the approaches developed by V. H. Schulz, M. Siebenborn, K. Welker, [12] using the linear elasticity equation given as follows

$$ -\operatorname{div}(\sigma) = f_{\mathrm{elas}} \quad\text{in } \Omega, \qquad u = 0 \quad\text{on } \partial\Omega, $$

$$ \sigma := \beta\,\operatorname{Tr}(\epsilon)\,I + 2\mu\,\epsilon, \qquad \epsilon := \tfrac{1}{2}\big(\nabla u + \nabla u^{T}\big), $$

where $\sigma$ and $\epsilon$ are the stress and strain tensors, $I$ is the identity tensor, $\operatorname{Tr}$ is the trace operator on a tensor and $u$ is the displacement vector field [12] [13]. $\beta$ and $\mu$ designate the Lamé parameters, which can be expressed in terms of Young's modulus $E$ and Poisson's ratio $\nu$ as follows:

$$ \beta = \frac{\nu E}{(1+\nu)(1-2\nu)}, \qquad \mu = \frac{E}{2(1+\nu)}. $$
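For concreteness, with the values used in the tests below ($E=0.1$, $\nu=0.4$), these formulas give the following (a minimal Python check; the variable names are ours):

E, nu = 0.1, 0.4                            # Young's modulus and Poisson's ratio used in the tests
beta = nu * E / ((1 + nu) * (1 - 2 * nu))   # first Lame parameter, beta ≈ 0.1429
mu = E / (2 * (1 + nu))                     # second Lame parameter (shear modulus), mu ≈ 0.0357
print(beta, mu)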

The Young’s modulus E and Poisson’s ratio ν have effects on mesh deformation. E indicates the stiffness of the material, allowing us to control the step size for shape updating, while ν gives the ratio between mesh expansion in other coordinate directions and compression in a particular direction. These coefficients can be found, for instance, in [12] [13]. The bilinear form of the linear elasticity equation is given by

$$ a(u,v) = \int_\Omega \sigma(u):\epsilon(v)\,dx. $$

Thanks to the Riesz representation theorem, we obtain the descent direction, or shape gradient, as the solution $U:\Omega\to\mathbb{R}^2$ of the linear elasticity equation

$$ \int_\Omega \sigma(U):\epsilon(V)\,dx = -DJ(\Omega,V), \qquad \forall V\in H_0^1(\Omega,\mathbb{R}^2). $$
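To make this step concrete, the following minimal sketch (legacy FEniCS/dolfin, Python) assembles the volume expression (3.11) as a linear form in the test field V and solves the elasticity problem above for the descent field. The helper name descent_direction, the small zero-order regularization term and the boundary handling are choices of this sketch, not necessarily those of our actual implementation.

from dolfin import *

def descent_direction(mesh, eta, lam):
    """Shape gradient U solving a(U, V) = -DJ(Omega, V), with a(.,.) the linear
    elasticity form and DJ(Omega, V) the volume expression (3.11). Sketch only."""
    W = VectorFunctionSpace(mesh, "CG", 1)
    U, V = TrialFunction(W), TestFunction(W)

    # Lame parameters from E = 0.1, nu = 0.4 (values used in the tests)
    E, nu = 0.1, 0.4
    beta = nu * E / ((1 + nu) * (1 - 2 * nu))
    mu = E / (2 * (1 + nu))

    def sigma(z):
        eps = sym(grad(z))
        return beta * tr(eps) * Identity(2) + 2 * mu * eps

    # Elasticity bilinear form; the small L2 term removes rigid-body modes
    # (a regularization choice of this sketch).
    a_el = inner(sigma(U), sym(grad(V))) * dx + Constant(1e-3) * inner(U, V) * dx

    # Volume expression (3.11) of DJ(Omega, V), written as a linear form in V.
    vol = assemble(Constant(1.0) * dx(domain=mesh))
    g = grad(eta)
    dJ = (vol * (-2 * dot(g, dot(grad(V), g))
                 + div(V) * (dot(g, g) - lam * eta ** 2))
          + lam * div(V)) * dx

    U_sol = Function(W)
    solve(a_el == -dJ, U_sol)
    return U_sol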

The shape functional is not subject to any PDE constraints. It is therefore sufficient to solve the state equation and the deformation equation at each iteration step using the following algorithm:

Algorithm 1

Initialization

while $|DJ(\Omega_k,V)| > \epsilon_{\mathrm{TOL}}$ do

1. Calculate the solution $\eta_k$ of the spectral problem.

2. Calculate the gradient $W_k$ [via $DJ(\Omega_k,V)$ and linear elasticity].

3. Deform the domain $\Omega_k$ into $\Omega_{k+1}$ [via ALE.move and $W_k$].

end while

First, it is worth noting that all numerical simulations are conducted in two dimensions within the FEniCS framework; see for instance [13] for more details. The algorithm used here is a slight modification of the one proposed in [6], in which $\lambda$ was assigned a fixed value; in our case, since $\lambda$ is an eigenvalue, it is computed directly. In the absence of an additional partial differential equation constraint, each iteration consists of solving the state equation and the system associated with the descent direction, that is, the linear elasticity equation. As we shall see, obtaining the new domain shape relies solely on moving the points of the boundary. We implement the general problem of minimizing the functional (3.1), which depends on $\lambda$, the first eigenvalue of (3.2). At each iteration, we need to solve the state equation (3.2), and the shape derivative we use is given by formula (3.11). For the Lamé parameters, we fixed the values $E=0.1$ and $\nu=0.4$ throughout the numerical tests, and we set the time step to 0.001. We follow Algorithm 1 to obtain a decrease of the functional introduced in (3.1). Tests were first carried out for 100 iterations, then extended to 500 and finally 1000 iterations. The convergence of the algorithm depends on the chosen tolerance, set here to 0.088. The initial mesh is the unit square, as shown in Figure 1.
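The loop below sketches Algorithm 1 in legacy FEniCS (Python), using SLEPc for the eigenvalue solve and the hypothetical descent_direction helper from the previous sketch; the handling of the Dirichlet condition in the eigenvalue matrices is one standard recipe and is an assumption of this sketch.

from dolfin import *

def first_eigenpair(mesh):
    # Smallest Dirichlet eigenvalue and L2-normalized eigenfunction of -Laplace (sketch).
    V = FunctionSpace(mesh, "CG", 1)
    u, v = TrialFunction(V), TestFunction(V)
    bc = DirichletBC(V, Constant(0.0), "on_boundary")

    A, M = PETScMatrix(), PETScMatrix()
    assemble(inner(grad(u), grad(v)) * dx, tensor=A)   # stiffness matrix
    assemble(u * v * dx, tensor=M)                     # mass matrix
    bc.apply(A)    # identity rows on boundary dofs
    bc.zero(M)     # the corresponding spurious eigenvalues are pushed to infinity

    solver = SLEPcEigenSolver(A, M)
    solver.parameters["spectrum"] = "smallest magnitude"
    solver.solve(1)
    lam, _, rx, _ = solver.get_eigenpair(0)

    eta = Function(V)
    eta.vector()[:] = rx
    nrm = assemble(eta * eta * dx) ** 0.5
    eta.vector()[:] = eta.vector().get_local() / nrm   # enforce ||eta||_L2 = 1
    return lam, eta

# Gradient-descent loop of Algorithm 1 (parameters as in the text).
mesh = UnitSquareMesh(32, 32)      # initial domain: the unit square (Figure 1)
step = 0.001                       # step size 0.001

for k in range(100):               # 100, 500 or 1000 iterations in the tests
    lam, eta = first_eigenpair(mesh)
    W = descent_direction(mesh, eta, lam)          # see the previous sketch
    W.vector()[:] = step * W.vector().get_local()  # scale the descent field
    ALE.move(mesh, W)                              # deform Omega_k into Omega_{k+1}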

After the numerical tests, the resulting vector fields are shown in Figure 2 for 100, 500 and 1000 iterations.

Figure 3, Figure 4 and Figure 5 show the optimized domains, the solution of the state equation and the evolution of the objective function for 100, 500 and 1000 iterations respectively.

Figure 1. Initial mesh.

Figure 2. Vector fields after 100, 500, and 1000 iterations.

Figure 3. (1): Optimized domain, (2): State, (3): Objective function for 100 iterations.

Figure 4. (1): Optimized domain, (2): State, (3): Objective function for 500 iterations.

Figure 5. (1): Optimized domain, (2): State, (3): Objective function for 1000 iterations.

We also carried out tests with the unit disk. Given the slow rate of convergence, the number of iterations was limited to 500. The simulation results are presented in Figure 6 below.

Figure 6. (1): Optimized domain, (2): State, (3): Objective function for 500 iterations.

5. Shape Derivative of an Eigenvalue Problem Using a Min-Max Approach

In this section, we focus on calculating the shape derivative in the presence of a spectral problem as a constraint. We use the same computational techniques developed in Section 2. The difference is that here, the domain-dependent λ=λ( Ω ) is considered as the first eigenvalue of the Dirichlet Laplacian.

Let $\Omega$ be a bounded Lipschitz open domain of $\mathbb{R}^N$ with regular boundary $\partial\Omega$. Let $\eta=\eta(\Omega)$ be the solution of the following spectral problem:

$$ -\Delta\eta - \lambda\eta = 0 \ \text{ in } \Omega, \qquad \eta = 0 \ \text{ on } \partial\Omega. \qquad (5.1) $$

Here $\lambda$ is considered as the first eigenvalue of the Dirichlet Laplacian. We consider the functional $J$ defined by

$$ J(\Omega) = \int_\Omega |\nabla\eta - A|^2\,dx + \int_\Omega |\eta-\eta_d|^2\,dx, \qquad (5.2) $$

where $\eta$ is the solution of (5.1), $A$ a given vector of $\mathbb{R}^N$ and $\eta_d$ a given function. As in the case of the Helmholtz equation [9], we want to calculate the shape derivative of the functional $J$. Recall that the shape derivative of $J$ at $\Omega$ in the direction $V$ is defined by

$$ DJ(\Omega,V) = \lim_{\epsilon\to 0}\frac{J(\Omega_\epsilon)-J(\Omega)}{\epsilon}, $$

when the limit exists, where $\Omega_\epsilon = T_\epsilon(\Omega)$. We again follow M. C. Delfour [7] [8] to calculate the shape derivative of the functional $J$. The functional on the perturbed domain $\Omega_\epsilon$ is then defined by

$$ J(\Omega_\epsilon) = \int_{\Omega_\epsilon} |\nabla\eta^\epsilon - A|^2\,dx + \int_{\Omega_\epsilon} |\eta^\epsilon-\eta_d|^2\,dx, \qquad (5.3) $$

where $\eta^\epsilon$ is the solution of the problem

$$ -\Delta\eta^\epsilon - \lambda^\epsilon\eta^\epsilon = 0 \ \text{ in } \Omega_\epsilon, \qquad \eta^\epsilon = 0 \ \text{ on } \partial\Omega_\epsilon, \qquad (5.4) $$

and $\partial\Omega_\epsilon$ is the boundary of $\Omega_\epsilon$. The variational formulation of (5.4) is to find $\eta^\epsilon\in H_0^1(\Omega_\epsilon)$ such that

$$ \int_{\Omega_\epsilon} \nabla\eta^\epsilon\cdot\nabla v - \lambda^\epsilon\,\eta^\epsilon v\,dx = 0 \quad\text{for all } v\in H_0^1(\Omega_\epsilon). \qquad (5.5) $$

Note that this problem is not straightforward, since the operator acts on the varying domains $\Omega_\epsilon$: the eigenfunctions $\eta^\epsilon$ live in the different spaces $H_0^1(\Omega_\epsilon)$, and the normalization $\|\eta^\epsilon\|_{L^2(\Omega_\epsilon)}=1$ also depends on $\epsilon$. To deal with this, we transport everything back to the fixed domain $\Omega$ via the change of variables $x=T_\epsilon^{-1}(y)$, which enables us to manipulate all objects in a fixed frame. To this end, we introduce the following parameterization:

$$ H_0^1(\Omega_\epsilon) = \{\, \varphi\circ T_\epsilon^{-1},\ \varphi\in H_0^1(\Omega) \,\}, \qquad (5.6) $$

and, to work in $H_0^1(\Omega)$, we also introduce the compositions $\eta_\epsilon=\eta^\epsilon\circ T_\epsilon$ and $\lambda_\epsilon=\lambda^\epsilon\circ T_\epsilon$. The variational formulation of (5.4) then amounts to finding $\eta_\epsilon\in H_0^1(\Omega)$ such that

$$ \int_{\Omega_\epsilon} \nabla(\eta_\epsilon\circ T_\epsilon^{-1})\cdot\nabla(v\circ T_\epsilon^{-1}) - (\lambda_\epsilon\circ T_\epsilon^{-1})(\eta_\epsilon\circ T_\epsilon^{-1})(v\circ T_\epsilon^{-1})\,dx = 0, \quad\forall v\in H_0^1(\Omega). \qquad (5.7) $$

Applying the change of variables formula, we obtain

$$ \int_\Omega \Big\{ \nabla(\eta_\epsilon\circ T_\epsilon^{-1})\cdot\nabla(v\circ T_\epsilon^{-1}) - (\lambda_\epsilon\circ T_\epsilon^{-1})(\eta_\epsilon\circ T_\epsilon^{-1})(v\circ T_\epsilon^{-1}) \Big\}\circ T_\epsilon\, J(\epsilon)\,dx = 0. $$

In addition, we have

$$ \int_\Omega \Big\{ (\lambda_\epsilon\circ T_\epsilon^{-1})(\eta_\epsilon\circ T_\epsilon^{-1})(v\circ T_\epsilon^{-1}) \Big\}\circ T_\epsilon\, J(\epsilon)\,dx = \int_\Omega (\lambda_\epsilon\circ T_\epsilon^{-1}\circ T_\epsilon)(\eta_\epsilon\circ T_\epsilon^{-1}\circ T_\epsilon)(v\circ T_\epsilon^{-1}\circ T_\epsilon)\, J(\epsilon)\,dx = \int_\Omega \lambda_\epsilon\,\eta_\epsilon v\, J(\epsilon)\,dx. $$

Let $DT_\epsilon(X)$ be the Jacobian matrix of $T_\epsilon$ evaluated at $X$ and ${}^{*}DT_\epsilon(X)$ the transpose of $DT_\epsilon(X)$; then we have the following proposition.

Proposition 5.1 We have

1) $\nabla(\phi\circ T_\epsilon) = {}^{*}DT_\epsilon\,(\nabla\phi\circ T_\epsilon), \qquad \forall\phi\in C^1(\mathbb{R}^N),$

2) $D(T\circ S) = \{(DT)\circ S\}\,DS, \qquad \forall (T,S)\in C^1(\mathbb{R}^N;\mathbb{R}^N).$

Proof. See [1]. ■

So, using the proposition, we can write

$$ \int_\Omega \Big\{ \nabla(\eta_\epsilon\circ T_\epsilon^{-1})\cdot\nabla(v\circ T_\epsilon^{-1}) \Big\}\circ T_\epsilon\, J(\epsilon)\,dx = \int_\Omega DT_\epsilon^{-1}\big(DT_\epsilon^{-1}\big)^{*}\nabla\eta_\epsilon\cdot\nabla v\, J(\epsilon)\,dx = \int_\Omega B(\epsilon)\nabla\eta_\epsilon\cdot\nabla v\,dx. $$

Finally, the previous equation becomes

$$ \int_\Omega B(\epsilon)\nabla\eta_\epsilon\cdot\nabla v\,dx - \int_\Omega \lambda_\epsilon\,\eta_\epsilon v\, J(\epsilon)\,dx = 0, \qquad (5.8) $$

with $B(\epsilon)=J(\epsilon)\,DT_\epsilon^{-1}\big(DT_\epsilon^{-1}\big)^{*}$, $J(\epsilon)=\det DT_\epsilon$ and $DT_\epsilon$ the Jacobian matrix of $T_\epsilon$.
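For later use, recall the classical identities (see [1]; they were already used in the proof of Proposition 3.1)

$$ B'(0) = \operatorname{div}V(0)\,I - DV(0) - DV(0)^{*}, \qquad J'(0) = \operatorname{div}V(0), $$

which explain how the terms $B'(0)$ and $\operatorname{div}V(0)$ appear in the formulas of Proposition 5.2 below.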

To write the functional in the fixed domain $\Omega$, we first assume that $A$ can be written as $A=\nabla\tilde\eta$, and we set $\tilde\eta_\epsilon=\tilde\eta\circ T_\epsilon$. Then

$$ J(\Omega_\epsilon) = \int_{\Omega_\epsilon}|\nabla\eta^\epsilon-A|^2dx + \int_{\Omega_\epsilon}|\eta^\epsilon-\eta_d|^2dx = \int_{\Omega_\epsilon}|\nabla\eta^\epsilon-\nabla\tilde\eta|^2dx + \int_{\Omega_\epsilon}|\eta^\epsilon-\eta_d|^2dx $$
$$ = \int_{\Omega_\epsilon}(\nabla\eta^\epsilon-\nabla\tilde\eta)\cdot(\nabla\eta^\epsilon-\nabla\tilde\eta)\,dx + \int_{\Omega_\epsilon}|\eta^\epsilon-\eta_d|^2dx = \int_\Omega\big\{(\nabla\eta^\epsilon-\nabla\tilde\eta)\cdot(\nabla\eta^\epsilon-\nabla\tilde\eta)\big\}\circ T_\epsilon\, J(\epsilon)\,dx + \int_\Omega|\eta^\epsilon\circ T_\epsilon-\eta_d\circ T_\epsilon|^2 J(\epsilon)\,dx. $$

Applying Proposition 5.1 once more, we obtain

$$ g(\epsilon) = J(\Omega_\epsilon) = \int_\Omega\big\{B(\epsilon)(\nabla\eta_\epsilon-\nabla\tilde\eta_\epsilon)\big\}\cdot(\nabla\eta_\epsilon-\nabla\tilde\eta_\epsilon)\,dx + \int_\Omega|\eta_\epsilon-\eta_d\circ T_\epsilon|^2 J(\epsilon)\,dx. \qquad (5.9) $$

Since the set of perturbed states is non-empty, $g(\epsilon)<+\infty$. The $\epsilon$-dependent Lagrangian can then be defined as follows:

$$ \mathcal{L}(\epsilon,\varphi,\phi) = \int_\Omega\big\{B(\epsilon)(\nabla\varphi-\nabla\tilde\eta_\epsilon)\big\}\cdot(\nabla\varphi-\nabla\tilde\eta_\epsilon) + |\varphi-\eta_d\circ T_\epsilon|^2 J(\epsilon)\,dx + \int_\Omega B(\epsilon)\nabla\varphi\cdot\nabla\phi - \lambda_\epsilon\,\varphi\phi\, J(\epsilon)\,dx, $$

$$ g(\epsilon) = \inf_{\varphi\in H_0^1(\Omega)}\sup_{\phi\in H_0^1(\Omega)}\mathcal{L}(\epsilon,\varphi,\phi); \qquad dg(0) = \lim_{\epsilon\to 0}\frac{g(\epsilon)-g(0)}{\epsilon} = DJ(\Omega,V(0)). $$

Let $R(\epsilon)$ be the following function:

$$ R(\epsilon) = \int_\Omega 2\Big(\frac{B(\epsilon)-I}{\epsilon}\Big)\Big\{\nabla\Big(\frac{\eta_\epsilon+\eta_0}{2}\Big)-\nabla\tilde\eta_\epsilon\Big\}\cdot\nabla(\eta_\epsilon-\eta_0)\,dx + \epsilon\int_\Omega\Big|\nabla\Big(\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big)\Big|^2dx + \int_\Omega 2\Big(\frac{J(\epsilon)-1}{\epsilon}\Big)\Big(\frac{\eta_\epsilon+\eta_0}{2}-\eta_d\circ T_\epsilon\Big)(\eta_\epsilon-\eta_0)\,dx $$
$$ - \int_\Omega 2\big(\nabla\tilde\eta_\epsilon-\nabla\tilde\eta\big)\cdot\nabla\Big(\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big)dx - \int_\Omega 2\big(\eta_d\circ T_\epsilon-\eta_d\big)\Big(\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big)dx + \epsilon\int_\Omega\Big|\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big|^2dx $$
$$ + \int_\Omega \frac{B(\epsilon)-I}{\epsilon}\nabla p_0\cdot\nabla(\eta_\epsilon-\eta_0) - \lambda_0\Big(\frac{J(\epsilon)-1}{\epsilon}\Big)p_0(\eta_\epsilon-\eta_0)\,dx - \int_\Omega J(\epsilon)\Big(\frac{\lambda_\epsilon-\lambda_0}{\epsilon}\Big)p_0(\eta_\epsilon-\eta_0)\,dx. $$

We assume that the function $R(\epsilon)$ admits a finite limit, denoted $R(\eta_0,p_0)$; then we have the following proposition.

Proposition 5.2 Let $\Omega$ be the solution of the problem $\min_{\Omega\in\mathcal{O}_{ad}} J(\Omega)$. Then the shape derivative of the functional $J(\Omega)$ defined in (5.2) is given by

$$ DJ(\Omega,V) = R(\eta_0,p_0) + \int_\Omega\big\{B'(0)(\nabla\eta_0-\nabla\tilde\eta)\big\}\cdot(\nabla\eta_0-\nabla\tilde\eta)\,dx - \int_\Omega 2(\nabla\eta_0-\nabla\tilde\eta)\cdot\nabla\big[\nabla\tilde\eta\cdot V(0)\big]\,dx $$
$$ + \int_\Omega|\eta_0-\eta_d|^2\operatorname{div}V(0) - 2(\eta_0-\eta_d)\,\nabla\eta_d\cdot V(0)\,dx + \int_\Omega B'(0)\nabla\eta_0\cdot\nabla p_0 - \lambda_0\,\eta_0 p_0\operatorname{div}V(0) - (\lambda_\epsilon)'(0)\,\eta_0 p_0\,dx, $$

where $\eta_0$ is the solution of (5.1) and $p_0$ is the adjoint state, solution of

$$ \int_\Omega 2(\nabla\eta_0-\nabla\tilde\eta)\cdot\nabla\varphi' + 2(\eta_0-\eta_d)\varphi' + \nabla p_0\cdot\nabla\varphi' - \lambda_0\,p_0\varphi'\,dx = 0 \qquad \forall\varphi'\in H_0^1(\Omega). $$

Proof. We first compute the derivative of the $\epsilon$-dependent Lagrangian with respect to $\epsilon$:

$$ d_\epsilon\mathcal{L}(\epsilon,\varphi,\phi) = \int_\Omega\big\{B'(\epsilon)(\nabla\varphi-\nabla\tilde\eta_\epsilon)\big\}\cdot(\nabla\varphi-\nabla\tilde\eta_\epsilon)\,dx - \int_\Omega 2B(\epsilon)(\nabla\varphi-\nabla\tilde\eta_\epsilon)\cdot\nabla\big[(\nabla\tilde\eta\cdot V(\epsilon))\circ T_\epsilon\big]\,dx $$
$$ + \int_\Omega|\varphi-\eta_d\circ T_\epsilon|^2\operatorname{div}V_\epsilon\, J(\epsilon) - 2(\varphi-\eta_d\circ T_\epsilon)\,\nabla\eta_d\cdot V_\epsilon\, J(\epsilon)\,dx + \int_\Omega B'(\epsilon)\nabla\varphi\cdot\nabla\phi - \lambda_\epsilon\,\varphi\phi\operatorname{div}V_\epsilon\, J(\epsilon) - (\lambda_\epsilon)'\,\varphi\phi\, J(\epsilon)\,dx. $$

Taking $\varphi=\eta_0$ and $\phi=p_0$, we obtain

$$ d_\epsilon\mathcal{L}(\epsilon,\eta_0,p_0) = \int_\Omega\big\{B'(\epsilon)(\nabla\eta_0-\nabla\tilde\eta_\epsilon)\big\}\cdot(\nabla\eta_0-\nabla\tilde\eta_\epsilon)\,dx - \int_\Omega 2B(\epsilon)(\nabla\eta_0-\nabla\tilde\eta_\epsilon)\cdot\nabla\big[(\nabla\tilde\eta\cdot V(\epsilon))\circ T_\epsilon\big]\,dx $$
$$ + \int_\Omega|\eta_0-\eta_d\circ T_\epsilon|^2\operatorname{div}V_\epsilon\, J(\epsilon) - 2(\eta_0-\eta_d\circ T_\epsilon)\,\nabla\eta_d\cdot V_\epsilon\, J(\epsilon)\,dx + \int_\Omega B'(\epsilon)\nabla\eta_0\cdot\nabla p_0 - \lambda_\epsilon\,\eta_0 p_0\operatorname{div}V_\epsilon\, J(\epsilon) - (\lambda_\epsilon)'\,\eta_0 p_0\, J(\epsilon)\,dx. $$

Taking $\epsilon$ equal to zero, we get

$$ d_\epsilon\mathcal{L}(0,\eta_0,p_0) = \int_\Omega\big\{B'(0)(\nabla\eta_0-\nabla\tilde\eta)\big\}\cdot(\nabla\eta_0-\nabla\tilde\eta)\,dx - \int_\Omega 2(\nabla\eta_0-\nabla\tilde\eta)\cdot\nabla\big[\nabla\tilde\eta\cdot V(0)\big]\,dx $$
$$ + \int_\Omega|\eta_0-\eta_d|^2\operatorname{div}V(0) - 2(\eta_0-\eta_d)\,\nabla\eta_d\cdot V(0)\,dx + \int_\Omega B'(0)\nabla\eta_0\cdot\nabla p_0 - \lambda_0\,\eta_0 p_0\operatorname{div}V(0) - (\lambda_\epsilon)'(0)\,\eta_0 p_0\,dx. $$

The derivative of the $\epsilon$-dependent Lagrangian with respect to $\varphi$ in the direction $\varphi'$ is

$$ d_\varphi\mathcal{L}(\epsilon,\varphi,\phi;\varphi') = \int_\Omega 2B(\epsilon)(\nabla\varphi-\nabla\tilde\eta_\epsilon)\cdot\nabla\varphi' + 2(\varphi-\eta_d\circ T_\epsilon)\varphi'\, J(\epsilon)\,dx + \int_\Omega B(\epsilon)\nabla\varphi'\cdot\nabla\phi - \lambda_\epsilon\,\varphi'\phi\, J(\epsilon)\,dx, $$

and the derivative with respect to $\phi$ in the direction $\phi'$ is

$$ d_\phi\mathcal{L}(\epsilon,\varphi,\phi;\phi') = \int_\Omega B(\epsilon)\nabla\varphi\cdot\nabla\phi' - \lambda_\epsilon\,\varphi\phi'\, J(\epsilon)\,dx. \qquad (5.10) $$

The state equation at $\epsilon\geq 0$ and the adjoint state equation at $\epsilon=0$ are respectively given by

$$ \eta_\epsilon\in H_0^1(\Omega),\ \forall\phi'\in H_0^1(\Omega),\qquad \int_\Omega B(\epsilon)\nabla\eta_\epsilon\cdot\nabla\phi' - \lambda_\epsilon\,\eta_\epsilon\phi'\, J(\epsilon)\,dx = 0, \qquad (5.11) $$

$$ p_0\in H_0^1(\Omega),\ \forall\varphi'\in H_0^1(\Omega),\qquad \int_\Omega 2(\nabla\eta_0-\nabla\tilde\eta)\cdot\nabla\varphi' + 2(\eta_0-\eta_d)\varphi' + \nabla p_0\cdot\nabla\varphi' - \lambda_0\,p_0\varphi'\,dx = 0. \qquad (5.12) $$

The averaged adjoint term is

$$ R(\epsilon) = \int_0^1 d_\varphi\mathcal{L}\Big(\epsilon,\eta_0+\theta(\eta_\epsilon-\eta_0),p_0;\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big)d\theta = \int_\Omega 2B(\epsilon)\Big\{\nabla\Big(\frac{\eta_\epsilon+\eta_0}{2}\Big)-\nabla\tilde\eta_\epsilon\Big\}\cdot\nabla\Big(\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big)dx $$
$$ + \int_\Omega 2\Big(\frac{\eta_\epsilon+\eta_0}{2}-\eta_d\circ T_\epsilon\Big)\Big(\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big)J(\epsilon)\,dx + \int_\Omega B(\epsilon)\nabla p_0\cdot\nabla\Big(\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big) - \lambda_\epsilon\,p_0\Big(\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big)J(\epsilon)\,dx. $$

Taking $\varphi'=\dfrac{\eta_\epsilon-\eta_0}{\epsilon}$ in the adjoint Equation (5.12), we have

$$ \int_\Omega 2(\nabla\eta_0-\nabla\tilde\eta)\cdot\nabla\Big(\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big) + 2(\eta_0-\eta_d)\Big(\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big)dx + \int_\Omega \nabla p_0\cdot\nabla\Big(\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big) - \lambda_0\,p_0\Big(\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big)dx = 0. $$

Consequently, writing $\eta_0=\frac{\eta_\epsilon+\eta_0}{2}-\frac{\eta_\epsilon-\eta_0}{2}$, $\nabla\tilde\eta=\nabla\tilde\eta_\epsilon-(\nabla\tilde\eta_\epsilon-\nabla\tilde\eta)$ and $\eta_d=\eta_d\circ T_\epsilon-(\eta_d\circ T_\epsilon-\eta_d)$, we get

$$ \int_\Omega 2\Big\{\nabla\Big(\frac{\eta_\epsilon+\eta_0}{2}\Big)-\nabla\Big(\frac{\eta_\epsilon-\eta_0}{2}\Big)-\nabla\tilde\eta_\epsilon+\nabla\tilde\eta_\epsilon-\nabla\tilde\eta\Big\}\cdot\nabla\Big(\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big)dx $$
$$ + \int_\Omega 2\Big(\frac{\eta_\epsilon+\eta_0}{2}-\frac{\eta_\epsilon-\eta_0}{2}-\eta_d\circ T_\epsilon+\eta_d\circ T_\epsilon-\eta_d\Big)\Big(\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big)dx + \int_\Omega \nabla p_0\cdot\nabla\Big(\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big) - \lambda_0\,p_0\Big(\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big)dx = 0. $$

In other words,

$$ \int_\Omega 2\Big\{\nabla\Big(\frac{\eta_\epsilon+\eta_0}{2}\Big)-\nabla\tilde\eta_\epsilon\Big\}\cdot\nabla\Big(\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big)dx - \epsilon\int_\Omega\Big|\nabla\Big(\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big)\Big|^2dx + \int_\Omega 2(\eta_d\circ T_\epsilon-\eta_d)\Big(\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big)dx $$
$$ + \int_\Omega 2(\nabla\tilde\eta_\epsilon-\nabla\tilde\eta)\cdot\nabla\Big(\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big)dx + \int_\Omega 2\Big(\frac{\eta_\epsilon+\eta_0}{2}-\eta_d\circ T_\epsilon\Big)\Big(\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big)dx - \epsilon\int_\Omega\Big|\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big|^2dx $$
$$ + \int_\Omega \nabla p_0\cdot\nabla\Big(\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big) - \lambda_0\,p_0\Big(\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big)dx = 0. $$

Accordingly, the function $R(\epsilon)$ can be rewritten as follows:

$$ R(\epsilon) = \int_\Omega 2\Big(\frac{B(\epsilon)-I}{\epsilon}\Big)\Big\{\nabla\Big(\frac{\eta_\epsilon+\eta_0}{2}\Big)-\nabla\tilde\eta_\epsilon\Big\}\cdot\nabla(\eta_\epsilon-\eta_0)\,dx + \epsilon\int_\Omega\Big|\nabla\Big(\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big)\Big|^2dx + \int_\Omega 2\Big(\frac{J(\epsilon)-1}{\epsilon}\Big)\Big(\frac{\eta_\epsilon+\eta_0}{2}-\eta_d\circ T_\epsilon\Big)(\eta_\epsilon-\eta_0)\,dx $$
$$ - \int_\Omega 2(\nabla\tilde\eta_\epsilon-\nabla\tilde\eta)\cdot\nabla\Big(\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big)dx - \int_\Omega 2(\eta_d\circ T_\epsilon-\eta_d)\Big(\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big)dx + \epsilon\int_\Omega\Big|\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big|^2dx $$
$$ + \int_\Omega \frac{B(\epsilon)-I}{\epsilon}\nabla p_0\cdot\nabla(\eta_\epsilon-\eta_0) - \Big(\frac{\lambda_\epsilon J(\epsilon)-\lambda_0}{\epsilon}\Big)p_0(\eta_\epsilon-\eta_0)\,dx. $$

Splitting the last factor as $\dfrac{\lambda_\epsilon J(\epsilon)-\lambda_0}{\epsilon}=\lambda_0\dfrac{J(\epsilon)-1}{\epsilon}+J(\epsilon)\dfrac{\lambda_\epsilon-\lambda_0}{\epsilon}$, we recover exactly the expression of $R(\epsilon)$ given before Proposition 5.2.

As in the case of the Helmholtz equation studied in [9], we have the following estimate, where $\|\cdot\|$ denotes the appropriate ($L^2$ or uniform) norms:

$$ |R(\epsilon)| \leq 2\Big\|\frac{B(\epsilon)-I}{\epsilon}\Big\|\,\Big\|\nabla\Big(\frac{\eta_\epsilon+\eta_0}{2}\Big)-\nabla\tilde\eta_\epsilon\Big\|\,\|\nabla(\eta_\epsilon-\eta_0)\| + \epsilon\Big\|\nabla\Big(\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big)\Big\|^2 + 2\Big\|\frac{J(\epsilon)-1}{\epsilon}\Big\|\,\Big\|\frac{\eta_\epsilon+\eta_0}{2}-\eta_d\circ T_\epsilon\Big\|\,\|\eta_\epsilon-\eta_0\| $$
$$ + 2\|\nabla\tilde\eta_\epsilon-\nabla\tilde\eta\|\,\Big\|\nabla\Big(\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big)\Big\| + 2\Big\|\frac{\eta_d\circ T_\epsilon-\eta_d}{\epsilon}\Big\|\,\|\eta_\epsilon-\eta_0\| + \epsilon\Big\|\frac{\eta_\epsilon-\eta_0}{\epsilon}\Big\|^2 $$
$$ + \Big\|\frac{B(\epsilon)-I}{\epsilon}\Big\|\,\|\nabla p_0\|\,\|\nabla(\eta_\epsilon-\eta_0)\| + \lambda_0\Big\|\frac{J(\epsilon)-1}{\epsilon}\Big\|\,\|p_0\|\,\|\eta_\epsilon-\eta_0\| + \|J(\epsilon)\|\,\Big|\frac{\lambda_\epsilon-\lambda_0}{\epsilon}\Big|\,\|p_0\|\,\|\eta_\epsilon-\eta_0\|. $$

Now we need to show that the limit of $R(\epsilon)$ exists. First, we know that the terms $(B(\epsilon)-I)/\epsilon$ and $(J(\epsilon)-1)/\epsilon$ are uniformly bounded. It would therefore suffice to show that $\eta_\epsilon\to\eta_0$ strongly in $H_0^1(\Omega)$ and that the $H_0^1(\Omega)$-norm of $(\eta_\epsilon-\eta_0)/\epsilon$ is bounded. The difficulty lies in the fact that the eigenvalue $\lambda_\epsilon$ depends on $\epsilon$ and we have no information on whether $(\lambda_\epsilon-\lambda_0)/\epsilon$ is bounded. So we simply assume that $R(\epsilon)$ admits a finite limit, denoted $R(\eta_0,p_0)$, and in this case the shape derivative of the functional $J(\Omega)$ is given by

$$ DJ(\Omega,V) = R(\eta_0,p_0) + \int_\Omega\big\{B'(0)(\nabla\eta_0-\nabla\tilde\eta)\big\}\cdot(\nabla\eta_0-\nabla\tilde\eta)\,dx - \int_\Omega 2(\nabla\eta_0-\nabla\tilde\eta)\cdot\nabla\big[\nabla\tilde\eta\cdot V(0)\big]\,dx $$
$$ + \int_\Omega|\eta_0-\eta_d|^2\operatorname{div}V(0) - 2(\eta_0-\eta_d)\,\nabla\eta_d\cdot V(0)\,dx + \int_\Omega B'(0)\nabla\eta_0\cdot\nabla p_0 - \lambda_0\,\eta_0 p_0\operatorname{div}V(0) - (\lambda_\epsilon)'(0)\,\eta_0 p_0\,dx, $$

where $\eta_0$ is the solution of (5.1) and $p_0$ is the adjoint state, solution of

$$ \int_\Omega 2(\nabla\eta_0-\nabla\tilde\eta)\cdot\nabla\varphi' + 2(\eta_0-\eta_d)\varphi' + \nabla p_0\cdot\nabla\varphi' - \lambda_0\,p_0\varphi'\,dx = 0 \qquad \forall\varphi'\in H_0^1(\Omega). \qquad\blacksquare $$
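Formally, integrating by parts in this weak formulation (and assuming enough regularity; this reading is given here only as a remark), the adjoint state $p_0$ solves the boundary value problem

$$ -\Delta p_0 - \lambda_0\,p_0 = 2\,\Delta(\eta_0-\tilde\eta) - 2\,(\eta_0-\eta_d) \quad\text{in } \Omega, \qquad p_0 = 0 \quad\text{on } \partial\Omega. $$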

6. Conclusion

This work provides a simple method for calculating the shape derivative in the context of Dirichlet eigenvalue problems. We have used alternative techniques to calculate the shape derivative of the first eigenvalue of the Laplace-Dirichlet problem, as well as that of an objective function constrained by a spectral problem. A fundamental tool in shape optimization, the shape derivative enables us to analyze how a quantity of interest varies under the effect of small deformations of the domain. This approach, inspired in particular by the work of Delfour, opens the way to a better understanding and control of geometry-dependent spectral problems. Some numerical tests were also carried out in two dimensions. The next step is to calculate the topological derivative using the same approach for an eigenvalue problem.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Delfour, M. and Zolésio, J.P. (2001) Shapes and Geometries: Metrics, Analysis, Differential Calculus, and Optimization (Advances in Design and Control). SIAM.
[2] Henrot, A. (2006) Extremum Problems for Eigenvalues of Elliptic Operators. Birkhäuser Verlag.
https://doi.org/10.1007/3-7643-7706-2
[3] Henrot, A. (2017) Shape Optimization and Spectral Theory. De Gruyter, 464.
[4] Henrot, A. and Pierre, M. (2006) Variation et optimisation de formes: Une analyse géométrique, volume 48. Springer Science & Business Media.
https://doi.org/10.1007/3-540-37689-5
[5] Zhu, S. (2017) Effective Shape Optimization of Laplace Eigenvalue Problems Using Domain Expressions of Eulerian Derivatives. Journal of Optimization Theory and Applications, 176, 17-34.
https://doi.org/10.1007/s10957-017-1198-9
[6] Ngom, M.G., Faye, I. and Seck, D. (2024) On Shape and Topological Optimization Problems with Constraints Helmholtz Equation and Spectral Problems. Results in Applied Mathematics, 23, Article ID: 100474.
https://doi.org/10.1016/j.rinam.2024.100474
[7] Delfour, M.C. (2018) Control, Shape, and Topological Derivatives via Minimax Differentiability of Lagrangians. In: Falcone, M., Ferretti, R., Grüne, L. and McEneaney, W., Eds., Numerical Methods for Optimal Control Problems, Springer, 137-164.
https://doi.org/10.1007/978-3-030-01959-4_7
[8] Delfour, M.C. (2021) Topological Derivatives via One-Sided Derivative of Parametrized Minima and Minimax. Engineering Computations, 39, 34-59.
https://doi.org/10.1108/ec-06-2021-0318
[9] Ngom, M.G., Faye, I. and Seck, D. (2023) A Minimax Method in Shape and Topological Derivatives and Homogenization: The Case of Helmholtz Equation. Nonlinear Studies, 30, page.
[10] Ngom, M.G., Faye, I. and Seck, D. (2025) Topological Derivative for Shallow Water Equations. Nonlinear Studies, 32, page.
[11] Evans, L.C. (1998) Partial Differential Equations, Graduate Studies in Mathematics. American Mathematical Society, 19.
[12] Schulz, V.H., Siebenborn, M. and Welker, K. (2016) Efficient PDE Constrained Shape Optimization Based on Steklov-Poincaré-Type Metrics. SIAM Journal on Optimization, 26, 2800-2819.
https://doi.org/10.1137/15m1029369
[13] Langtangen, H.P. and Logg, A. (2017) Solving PDEs in Python: The FEniCS Tutorial I. Springer.
