
The resolvent helps solve a PDE defined on all of wave-number space. Almost all electromagnetic scattering problems are solved on the spatial side, using the spatial Green's function approach. This work is motivated by solving an EM problem on the Fourier side in order to relate the resolvent and the Green's function. The methods used include matrix theory, Fourier transforms, and Green's functions. A closed form of the resolvent is derived for the electromagnetic Helmholtz reduced vector wave equation with Dirichlet boundary conditions. The resolvent is then used to derive expressions for the solution of the EM wave equation and to provide Sobolev estimates for the solution.

This paper solves Maxwell's equations on the wave-number side. I apply tools from classical functional analysis to the electromagnetic Helmholtz reduced vector wave equation. These tools have been used successfully to understand solutions of second-order, linear, ordinary differential equations with Dirichlet or Neumann boundary conditions, but to the author's knowledge have not been applied in more general settings.

I follow techniques described by Chew [

To the author’s knowledge, there has not been a significant amount of research on solving electromagnetic problems on the wave-number side. This could be because students in electromagnetics do not take courses in Functional Analysis or Applied Analysis.

Electromagnetics (EM) and Quantum Mechanics (QM) usually yield very singular solutions. All EM problems are scattering problems solved using singular Green's functions, and the operators in both EM ($\frac{\partial}{\partial x}$) and QM ($-i\hbar\psi'$) problems are singular. As an example, the solution of the homogeneous vector wave equation is unbounded. If, however, we consider $\hat{E}(k)$ in the Sobolev space $\mathbb{H}^2$, with a norm with respect to which the space is complete, the solution can be analyzed using weak derivatives.

We derive the vector Helmholtz wave equation, assuming harmonic time dependence $E_o e^{-i\omega t + i k\cdot x}$, from Maxwell's curl equations and the divergence equation for $E$ in a source-free region. To introduce later a Fourier transform for $E(x)$, consider the domain for $k$ to be $\mathbb{T}^3$ ($\mathbb{R}^3$ modulo a lattice). Then our problem is given by

$$B(E) \doteq \left\{ \begin{array}{l} \mathbb{R}^3 \to \mathbb{C}^3, \quad (x,y,z)\in\mathbb{R}^3 \\ \nabla\times H = i\omega\epsilon_o E \\ -\nabla\times E = i\omega\mu_o H \\ -\nabla\times(\nabla\times E) = -\omega^2\epsilon_o\mu_o E = -k_o^2 E \\ \nabla\cdot E = 0 \\ -\nabla\times(\nabla\times E) = -\nabla(\underbrace{\nabla\cdot E}_{=0}) + \nabla^2 E \\ \nabla^2 E + k_o^2 E = 0 \end{array} \right. \quad (1)$$

$$x = \begin{pmatrix} x\\ y\\ z\end{pmatrix},\quad k = \begin{pmatrix} k_x\\ k_y\\ k_z\end{pmatrix},\quad k_o = \frac{\omega}{c},\quad k\cdot x = k_x x + k_y y + k_z z$$
$$\nabla^2 E + k_o^2 E = \left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2} + k_o^2\right) E = \left(-k_x^2 - k_y^2 - k_z^2 + k_o^2\right) E \quad (2)$$

Thus $k$ is a root of a homogeneous polynomial of degree 2 with constant coefficients; i.e., the last equation in (2) yields the dispersion equation,

$$k_o^2 = k_x^2 + k_y^2 + k_z^2 = |k|^2 \quad (3)$$
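The dispersion relation can be sanity-checked numerically: a plane wave whose wave vector lies on the shell $|k| = k_o$ should satisfy $\nabla^2 E + k_o^2 E = 0$. A minimal stdlib-only sketch; the wave vector and evaluation point are arbitrary illustrative values:

```python
import cmath

ko = 2.0
k = (ko * 0.6, ko * 0.8, 0.0)            # on the dispersion shell: |k| = ko

def E(x, y, z):
    """One Cartesian component of the plane wave e^{i k.x}."""
    return cmath.exp(1j * (k[0] * x + k[1] * y + k[2] * z))

def laplacian(f, p, h=1e-3):
    """Central second differences in each coordinate."""
    total = 0.0
    for i in range(3):
        plus, minus = list(p), list(p)
        plus[i] += h
        minus[i] -= h
        total += (f(*plus) - 2.0 * f(*p) + f(*minus)) / h**2
    return total

p = (0.3, -0.7, 1.1)                      # arbitrary evaluation point
residual = laplacian(E, p) + ko**2 * E(*p)
assert abs(residual) < 1e-4               # Helmholtz equation satisfied
```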

From the last of Equation (2), we have

$$\left(|k|^2 - k_o^2\right)\begin{pmatrix} E_x\\ E_y\\ E_z\end{pmatrix} = \begin{pmatrix}0\\0\\0\end{pmatrix} \quad (4)$$

From Equation (3), if $k_o^2 = k_x^2 + k_y^2 + k_z^2 = |k|^2$, then $E \neq 0$ is possible: the homogeneous system in Equation (4) has a non-zero solution in the form of a wave traveling in $\mathbb{R}^3$ and satisfying

$$\nabla\times E = i\omega\mu_o H, \qquad k\cdot E = 0 \;\; (k \perp E) \quad (5)$$

To construct a solution for E ( r , t ) , go to the Fourier side where

$$\hat{E}_{\parallel,\perp}(k) = (2\pi)^{-\frac{3}{2}} \iiint_{\mathbb{R}^3} E_{\parallel,\perp}(r)\, e^{i(k_x x + k_y y) \pm i k_z z}\, dr, \qquad \hat{E}\in\mathbb{C}^3,\; r\in\mathbb{R}^3 \quad (6)$$

where $\hat{E}$ is the symbol of the operator; the plus sign in the exponential corresponds to waves traveling in the $-z$ direction and the minus sign to waves traveling in the $+z$ direction. The symbol of the operator will be constructed to satisfy

$$k \cdot \hat{E} = 0 \quad (7)$$

Parseval’s theorem on the Fourier side gives

$$\|\hat{E}(k)\|_{L^2(\mathbb{R}^3)}^2 = \int_{\mathbb{R}^3} |E(r)|^2\, dr \quad (8)$$

The symbol of the operator satisfies

$$k\times(k\times\hat{E}) = k\underbrace{(k\cdot\hat{E})}_{=0} - (k\cdot k)\hat{E} \quad (9)$$
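The BAC–CAB identity behind Equation (9) is easy to verify numerically; a quick pure-Python check with arbitrary illustrative vectors:

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

k  = (0.5, -1.25, 2.0)     # arbitrary wave vector
Eh = (2.0, 0.4, -0.2)      # arbitrary symbol E-hat (not yet transverse)

# k x (k x E) = k (k.E) - (k.k) E
lhs = cross(k, cross(k, Eh))
rhs = tuple(dot(k, Eh) * k[i] - dot(k, k) * Eh[i] for i in range(3))
assert all(abs(l - r) < 1e-12 for l, r in zip(lhs, rhs))
```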

If $k^2 = k_o^2$, the solution for $\hat{E}$ when $\hat{H}$ is perpendicular to the plane of propagation containing $\hat{E}$ and $k$ (the so-called TM case) is

$$\hat{E}_\parallel(k) = \hat{e}_\parallel(k)\, \frac{e^{i k\cdot r}}{r}, \qquad r = \begin{pmatrix} x\\ y\\ z\end{pmatrix} \quad (10)$$

Solve

$$\nabla\times\nabla\times E - k_o^2 E = i\omega\mu_o J \quad (11)$$

with Dirichlet boundary conditions on the green-black plane in the figure,

$$n\times E = 0, \qquad n = e_z \quad (12)$$

and a radiation condition at $z = \infty$, with $J = n\times H$.

Two independent vectors span the red-yellow plane in the figure:

$$\hat{e}_\parallel = e_x \frac{k_x}{\sqrt{k_x^2+k_y^2}} \mp e_z \frac{\sqrt{k_x^2+k_y^2}}{\sqrt{k^2-k_x^2-k_y^2}} \quad (13)$$

and a basis vector perpendicular to the incident plane as

$$\hat{E}_\perp = \hat{e}_z \times e_x \frac{k_x}{\sqrt{k_x^2+k_y^2}} = -e_y \frac{k_x}{\sqrt{k_x^2+k_y^2}} \quad (14)$$

Tangential $\hat{E}_\parallel$ satisfies the boundary condition on the surface at $z = 0$, i.e.,

$$e_z\times\hat{E}_\parallel = 0 \quad (15)$$

For the perfectly conducting plane in the figure,

$$E_\perp(r) = (2\pi)^{-\frac{3}{2}} \iiint_{\mathbb{R}^3} \hat{E}_\perp(k)\, e^{-i(k_x x + k_y y)} \underbrace{e^{\pm i k_z z}}_{\text{to satisfy radiation condition}}\, dk \quad (16)$$

Equation (11) can be written in matrix form as

$$\underbrace{\begin{pmatrix} k_y^2+k_z^2-k_o^2 & -k_x k_y & -k_x k_z\\ -k_x k_y & k_x^2+k_z^2-k_o^2 & -k_y k_z\\ -k_x k_z & -k_y k_z & k_x^2+k_y^2-k_o^2 \end{pmatrix}}_{A} \begin{pmatrix}\hat{E}_x\\ \hat{E}_y\\ \hat{E}_z\end{pmatrix} = i\omega\mu_o \begin{pmatrix}\hat{J}_x\\ \hat{J}_y\\ \hat{J}_z\end{pmatrix} \quad (17)$$
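As a check on Equation (17), the matrix $A$ should act on $\hat{E}$ exactly as the Fourier-side operator $-k\times(k\times\cdot) - k_o^2$; a small pure-Python sketch with arbitrary sample numbers:

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

kx, ky, kz, ko = 0.3, -0.8, 1.1, 1.7    # arbitrary sample values
A = [[ky**2 + kz**2 - ko**2, -kx*ky,                -kx*kz],
     [-kx*ky,                kx**2 + kz**2 - ko**2, -ky*kz],
     [-kx*kz,                -ky*kz,                kx**2 + ky**2 - ko**2]]

k  = (kx, ky, kz)
Eh = (0.2, 0.5, -0.4)                   # arbitrary E-hat

AE  = [sum(A[i][j] * Eh[j] for j in range(3)) for i in range(3)]
kkE = cross(k, cross(k, Eh))
ref = [-kkE[i] - ko**2 * Eh[i] for i in range(3)]   # -k x (k x E) - ko^2 E
assert all(abs(a - b) < 1e-12 for a, b in zip(AE, ref))
```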

Equation (17) consists of 3 equations in 3 unknowns, yielding a consistent system. On the Fourier side, the EM problem for a region containing an object with induced currents reduces to a linear-algebra problem. The inverse operator $A^{-1}$ exists if and only if

$$A\begin{pmatrix}\hat{E}_x\\ \hat{E}_y\\ \hat{E}_z\end{pmatrix} = 0 \;\Rightarrow\; \begin{pmatrix}\hat{E}_x\\ \hat{E}_y\\ \hat{E}_z\end{pmatrix} = 0 \quad (18)$$

This fails when we are “sitting on an eigenvalue,” i.e., when $0 \in \sigma_p(A(k))$. The properties of the resolvent in the next section will demonstrate the different descriptions of the spectrum.

The inverse operator is given by

$$A^{-1} = \frac{1}{k_o^2\left(k_o^2 - k_x^2 - k_y^2 - k_z^2\right)} \begin{pmatrix} k_x^2 - k_o^2 & k_x k_y & k_x k_z\\ k_x k_y & k_y^2 - k_o^2 & k_y k_z\\ k_x k_z & k_y k_z & k_z^2 - k_o^2 \end{pmatrix} \quad (19)$$
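The inverse can be checked by direct multiplication: writing $A$ from Equation (17) as $(k^2-k_o^2)I - kk^{\mathsf T}$ and $A^{-1}$ as $(kk^{\mathsf T} - k_o^2 I)/(k_o^2(k_o^2-k^2))$, the product should be the identity. A stdlib-only sketch with arbitrary off-shell sample values ($k^2 \neq k_o^2$):

```python
kx, ky, kz, ko = 0.3, -0.8, 1.1, 1.7     # arbitrary values with k^2 != ko^2
k2 = kx**2 + ky**2 + kz**2
kv = (kx, ky, kz)

# A = (k^2 - ko^2) I - k k^T, as in Equation (17)
A = [[(k2 - ko**2) * (i == j) - kv[i] * kv[j] for j in range(3)]
     for i in range(3)]

# A^{-1} = (k k^T - ko^2 I) / (ko^2 (ko^2 - k^2)), entrywise as in (19)
d = ko**2 * (ko**2 - k2)
Ainv = [[(kv[i] * kv[j] - ko**2 * (i == j)) / d for j in range(3)]
        for i in range(3)]

prod = [[sum(A[i][m] * Ainv[m][j] for m in range(3)) for j in range(3)]
        for i in range(3)]
assert all(abs(prod[i][j] - (i == j)) < 1e-12
           for i in range(3) for j in range(3))
```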

The inverse is singular when $k^2 = k_o^2$ or $k_o = 0$. When $k^2 = k_o^2$, the eigenvalues are

$$\lambda_1 = -k_o^2, \qquad \lambda_2 = \lambda_3 = 0 \quad (20)$$

where the zero eigenvalue in Equation (20) has algebraic multiplicity 2, while the geometric multiplicity is

$$\dim N(A - \lambda I) = 1 \quad (21)$$

An example of the variation of $\det(A - \lambda I)$ with $\lambda$ is shown in the figure.

The operator in this study is $A$, where

$$A : D(A(k)) \to X \quad (22)$$

With $A$ we associate the operator

$$A_\lambda = A - \lambda I \quad (23)$$

where $\lambda$ is a complex variable. Its inverse $R_\lambda = A_\lambda^{-1}$ is called the resolvent operator (Kreyszig [

| $k_i$ | Value |
| --- | --- |
| $k_x$ | $3/4$ |
| $k_y$ | $0$ |
| $k_z$ | $3/4$ |
| $\lvert k\rvert$ | $3/(2\sqrt{2})$ |
| $k_o$ | $1$ |
| $\lambda_1$ | $-1$ |
| $\lambda_2 = \lambda_3$ | $1/8$ |
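For the table's parameter values, $\det(A-\lambda I)$ should vanish at the listed eigenvalues; a stdlib-only check:

```python
kx, ky, kz, ko = 0.75, 0.0, 0.75, 1.0    # parameter values from the table

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (M[0][0] * (M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1] * (M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2] * (M[1][0]*M[2][1] - M[1][1]*M[2][0]))

def A_minus(lam):
    """A - lambda*I, with A from Equation (17)."""
    return [[ky**2 + kz**2 - ko**2 - lam, -kx*ky, -kx*kz],
            [-kx*ky, kx**2 + kz**2 - ko**2 - lam, -ky*kz],
            [-kx*kz, -ky*kz, kx**2 + ky**2 - ko**2 - lam]]

for lam in (-1.0, 0.125):                # lambda_1 = -1, lambda_{2,3} = 1/8
    assert abs(det3(A_minus(lam))) < 1e-12
```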

$$A_\lambda x = y, \qquad R_\lambda = (A - \lambda I)^{-1} \quad (24)$$

The determinant of $(A - \lambda I)$ is a polynomial of degree 4, with coefficients in $\mathbb{R}$, consisting of 78 monomials. After several attempts, I was able to factor the polynomial as

$$\begin{aligned} \det(A-\lambda I) ={}& (k_y^2+k_z^2-k_o^2-\lambda)\left[(k_z^2+k_x^2-k_o^2-\lambda)(k_x^2+k_y^2-k_o^2-\lambda) - k_y^2 k_z^2\right]\\ &+ k_x k_y\left[-k_x k_y(k_x^2+k_y^2-k_o^2-\lambda) - k_x k_y k_z^2\right]\\ &- k_x k_z\left[k_y^2 k_x k_z + k_x k_z(k_z^2+k_x^2-k_o^2-\lambda)\right]\\ ={}& (k^2-k_o^2-\lambda)\underbrace{\left[\lambda^2 - \lambda(k^2-2k_o^2) + k_o^2(k_o^2-k^2)\right]}_{\text{denominator}} \quad (25) \end{aligned}$$

The denominator is a quadratic in $\lambda$, whose roots are

$$\lambda_{(1),(2,3)} = \frac{k^2 - 2k_o^2}{2} \mp \sqrt{\frac{k^4 - 4k_o^2 k^2 + 4k_o^4}{4} - k_o^2(k_o^2 - k^2)} = \frac{k^2 - 2k_o^2}{2} \mp \frac{k^2}{2} \quad (26)$$
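With the sample wave numbers from the table above ($k_x = k_z = 3/4$, $k_y = 0$, $k_o = 1$), the two roots $\lambda_1 = -k_o^2$ and $\lambda_{2,3} = k^2 - k_o^2$ can be confirmed numerically:

```python
import math

kx, ky, kz, ko = 0.75, 0.0, 0.75, 1.0    # values from the table above
k2 = kx**2 + ky**2 + kz**2

def denominator(lam):
    """The quadratic factor of det(A - lambda I)."""
    return lam**2 - lam * (k2 - 2*ko**2) + ko**2 * (ko**2 - k2)

lam1, lam23 = -ko**2, k2 - ko**2          # the two roots: -1 and 1/8
assert abs(denominator(lam1)) < 1e-12
assert abs(denominator(lam23)) < 1e-12

# the quantity under the square root reduces to (k^2/2)^2
disc = (k2 - 2*ko**2)**2 / 4 - ko**2 * (ko**2 - k2)
assert abs(math.sqrt(disc) - k2 / 2) < 1e-12
```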

We now give the 9 elements of the matrix for $R_\lambda$:

$$R_\lambda = \frac{1}{\text{denominator}} \begin{pmatrix} k_x^2 - k_o^2 - \lambda & k_x k_y & k_x k_z\\ k_x k_y & k_y^2 - k_o^2 - \lambda & k_y k_z\\ k_x k_z & k_y k_z & k_z^2 - k_o^2 - \lambda \end{pmatrix} \quad (27)$$


In general, the spectrum [

$$\begin{aligned} (1)\;& \lambda \text{ is an eigenvalue: } \lambda \in \sigma_p(A(k))\\ (2)\;& R_\lambda(A(k)) \text{ exists and is unbounded: } \lambda \in \sigma_c(A(k))\\ (3)\;& R_\lambda(A(k)) \text{ exists, bounded or unbounded, but its domain is not dense in } X \end{aligned} \quad (28)$$

In our problem we have 3 eigenvalues and an unbounded spectrum.

The Fourier transform of the resolvent is

$$R_\lambda(x) = (2\pi)^{-\frac{3}{2}} \int_{\mathbb{R}^3} (A - \lambda I)^{-1}\, e^{-i k\cdot x}\, dk \quad (29)$$

The current $J(x)$ in terms of its Fourier transform $\hat{J}$ is

$$J(x) = (2\pi)^{-\frac{3}{2}} \int_{\mathbb{R}^3} \hat{J}(k)\, e^{-i k\cdot x}\, dk \quad (30)$$

Using the convolution theorem, the general solution to our problem on the wave-number side, for arbitrary currents in the inhomogeneous equation, is:

$$\underbrace{E(x)}_{\in\,\mathbb{C}^3} = (2\pi)^{-\frac{3}{2}} \int_{\mathbb{R}^3} \underbrace{R_\lambda(k)}_{\in\,\mathbb{C}^{3\times 3}} \underbrace{\hat{J}(k)}_{\in\,\mathbb{C}^3}\, e^{-i k\cdot x} \underbrace{dk}_{dk_x dk_y dk_z} \quad (31)$$

which is integrated one row at a time. The integration in the $k_y$ direction is required for the geometry in the figure.
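A one-dimensional analogue makes the convolution-theorem recipe concrete: to solve $u'' + k_o^2 u = f$ on a periodic interval, divide $\hat{f}$ by the symbol $k_o^2 - n^2$ mode by mode, choosing $k_o$ off the integer lattice so that we are never "sitting on an eigenvalue." This is only a sketch of the mechanism, not the paper's 3-D computation:

```python
import cmath, math

N = 32
ko = 1.7                                   # off the integers: ko^2 - n^2 != 0
xs = [2 * math.pi * j / N for j in range(N)]
f = [math.cos(3 * x) + 0.5 * math.sin(x) for x in xs]   # sample source term

# forward DFT, O(N^2) for clarity
fhat = [sum(f[j] * cmath.exp(-1j * n * xs[j]) for j in range(N)) / N
        for n in range(N)]

def freq(n):                               # signed frequency of DFT bin n
    return n if n <= N // 2 else n - N

# divide by the symbol of d^2/dx^2 + ko^2 acting on e^{i n x}
uhat = [fhat[n] / (ko**2 - freq(n)**2) for n in range(N)]

# inverse DFT
u = [sum(uhat[n] * cmath.exp(1j * n * x) for n in range(N)).real for x in xs]

u_exact = [math.cos(3 * x) / (ko**2 - 9) + 0.5 * math.sin(x) / (ko**2 - 1)
           for x in xs]
assert max(abs(a - b) for a, b in zip(u, u_exact)) < 1e-10
```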

The integrand in Equation (31) without $\hat{J}$ is the Green's function,

$$G_\lambda(x-y) = (2\pi)^{-\frac{3}{2}} \int_{\mathbb{R}^3} R_\lambda(k)\, e^{-i k\cdot(x-y)}\, dk \quad (32)$$

and the solution is

$$E(x) = \int_{\mathbb{R}^3} G_\lambda(x-y)\, J(y)\, dy \quad (33)$$

For the geometry in the figure, with $k_y = 0$, the resolvent reduces to

$$R_\lambda(k) = \frac{1}{\text{denominator}} \begin{pmatrix} k_x^2 - k_o^2 - \lambda & 0 & k_x k_z\\ 0 & -(k_o^2+\lambda) & 0\\ k_x k_z & 0 & k_z^2 - k_o^2 - \lambda \end{pmatrix} \quad (34)$$

and Equation (29) becomes

$$R_\lambda(x) = (2\pi)^{-\frac{3}{2}} \int_{\mathbb{R}^3} \frac{\begin{pmatrix} k_o^2-k_x^2+\lambda & 0 & k_x k_z\\ 0 & k_o^2+\lambda & 0\\ k_x k_z & 0 & k_o^2-k_z^2+\lambda \end{pmatrix}}{(k_o^2-k^2+\lambda)\left[k_o^2(k_o^2-k^2) - \lambda^2 - \lambda(k^2-2k_o^2)\right]}\, e^{-i k\cdot x}\, dk \quad (35)$$

The $k_z$ integration can be evaluated in closed form, and the first row of Equation (35) becomes

$$R_\lambda = -i\, e_x\, \frac{e^{-i k_x x - k_z z}}{k_o^2+\lambda} \left\{ \int \sqrt{k_o^2-k_x^2+\lambda}\, \arctan\frac{k_z}{\sqrt{-k_o^2+k_x^2-\lambda}}\, dk_x,\;\; 0,\;\; -\frac{1}{4}\left[(-k_o^2+k_x^2+k_z^2-\lambda)\ln(-k_o^2+k_x^2+k_z^2-\lambda) - 1\right] \right\} \quad (36)$$

with the $k_x$ integration of the form

$$\int \arctan(a u)\, du \quad (37)$$

The matrix $P$ is the projector onto $U$ along $V$. If $x \in \mathbb{C}^n$, then $Px \in U$ and $x - Px \in V$ for arbitrary $x$. Then

$$x = u \oplus v, \quad u \in U,\; v \in V, \qquad \|Px\|_{L^2} = |u^{\mathsf T} x|$$

Let $\Gamma$ be a simple closed contour in $\mathbb{C}\setminus\{\lambda_1,\cdots,\lambda_r\}$ (the spectrum) which encloses the eigenvalues $\{\lambda_1,\cdots,\lambda_r\}$, and form the integral (shown for a diagonal $A$ with eigenvalues $\lambda_1, \lambda_2, \lambda_3$), as

$$-\frac{1}{2\pi i}\oint_\Gamma \frac{1}{A-\lambda I}\, d\lambda = -\frac{1}{2\pi i}\oint_\Gamma \begin{pmatrix} \frac{1}{\lambda_1-\lambda} & 0 & 0\\ 0 & \frac{1}{\lambda_2-\lambda} & 0\\ 0 & 0 & \frac{1}{\lambda_3-\lambda} \end{pmatrix} d\lambda = P$$
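The contour-integral formula can be exercised numerically for a diagonal example: discretizing the circle and summing picks out exactly the eigenvalues enclosed by $\Gamma$. The eigenvalues below are the sample values from the earlier table:

```python
import cmath

eigs = (-1.0, 0.125, 0.125)   # sample eigenvalues: lambda_1, lambda_2 = lambda_3

def resolvent_diag(lam):
    """Diagonal of (A - lam I)^(-1) for A = diag(eigs)."""
    return [1.0 / (e - lam) for e in eigs]

# P = -(1/(2 pi i)) * contour integral of (A - lam I)^(-1),
# over a circle enclosing only lambda_1 = -1
center, radius, N = -1.0, 0.5, 400
P = [0.0 + 0.0j] * 3
for n in range(N):
    lam = center + radius * cmath.exp(2j * cmath.pi * n / N)
    dlam = (lam - center) * 1j * (2 * cmath.pi / N)   # d(lambda) along the circle
    r = resolvent_diag(lam)
    for i in range(3):
        P[i] -= r[i] * dlam / (2j * cmath.pi)

# P projects onto the lambda_1 eigenspace: diag(1, 0, 0)
assert abs(P[0] - 1) < 1e-8 and abs(P[1]) < 1e-8 and abs(P[2]) < 1e-8
```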

From Equation (20), $\lambda_1 = -k_o^2$, $\lambda_{2,3} = 0$. The residue at $\lambda_1$ yields

$$P = -\frac{1}{2\pi i}\oint_\Gamma \begin{pmatrix} -k_x^2 & 0 & k_x k_z\\ 0 & 0 & 0\\ k_x k_z & 0 & k_z^2 \end{pmatrix} \left(\frac{1}{2k_o^4} - \frac{1}{2k_o^4}\right) d\lambda = 0$$

and the residue at $\lambda_{2,3}$ yields

$$\text{denom} = k_o^4 - k_o^2 k_x^2 - k_o^2 k_z^2 + 2k_o^2\lambda - \lambda k_x^2 - \lambda k_z^2 - \lambda^2, \qquad r_1 = 2k_o^2 - k_x^2 - k_z^2 + 2\lambda$$

$$\lim_{\lambda\to 0} \frac{d}{d\lambda}\left(\lambda^2 \cdot \frac{1}{A-\lambda I}\right) = \begin{pmatrix} \frac{-2\lambda(k_o^2-k_x^2+\lambda)}{\text{denom}} - \frac{\lambda^2}{\text{denom}} + \frac{\lambda^2(k_o^2-k_x^2+\lambda)\, r_1}{\text{denom}^2} & 0 & \frac{2\lambda k_x k_z}{\text{denom}} - \frac{\lambda^2 k_x k_z\, r_1}{\text{denom}^2}\\ 0 & \frac{-2\lambda}{\text{denom}} + \frac{\lambda^2}{\text{denom}} & 0\\ \frac{2\lambda k_x k_z}{\text{denom}} - \frac{\lambda^2 k_x k_z\, r_1}{\text{denom}^2} & 0 & \frac{2\lambda(k_o^2-k_z^2+\lambda)}{\text{denom}} - \frac{\lambda^2}{\text{denom}} + \frac{\lambda^2(k_o^2-k_z^2+\lambda)\, r_1}{\text{denom}^2} \end{pmatrix} = \begin{pmatrix} 0&0&0\\ 0&0&0\\ 0&0&0 \end{pmatrix}$$

and $P$ projects perpendicular to $U$.

Weak derivatives [

$$\hat{E}(k) \in \underbrace{\mathbb{H}^2(\mathbb{T}^3 \to \mathbb{R}^3)}_{\text{domain}} \quad (38)$$

where ℍ 2 is a Sobolev space [

$$G \doteq \left\{ \begin{array}{l} \mathbb{C}^3 \to V = \mathbb{C}^4\\[2pt] \hat{E} \to \left( \begin{array}{l} -k\times(k\times\hat{E}) - k_o^2\hat{E} = 0\\ i k\cdot\hat{E} = 0 \end{array} \right) \end{array} \right. \quad (39)$$

Notation:

$$D u = e_x\frac{\partial u}{\partial k_x} + e_y\frac{\partial u}{\partial k_y} + e_z\frac{\partial u}{\partial k_z}, \qquad \text{Hessian} = D^2 u = \begin{pmatrix} \frac{\partial^2 u}{\partial k_x^2} & \frac{\partial^2 u}{\partial k_x \partial k_y} & \frac{\partial^2 u}{\partial k_x \partial k_z}\\[4pt] \frac{\partial^2 u}{\partial k_y \partial k_x} & \frac{\partial^2 u}{\partial k_y^2} & \frac{\partial^2 u}{\partial k_y \partial k_z}\\[4pt] \frac{\partial^2 u}{\partial k_z \partial k_x} & \frac{\partial^2 u}{\partial k_z \partial k_y} & \frac{\partial^2 u}{\partial k_z^2} \end{pmatrix} \quad (40)$$

Definition: weak derivatives of order $n$ in $\mathbb{H}^1(T)$ are defined using integration by parts.

The derivatives in Equation (40) are evaluated as:

$$\int_T f' g\, dx = -\int_T f g'\, dx.$$

If $f \in \mathbb{H}^1(T)$, the weak derivative $f'$ is defined through the bounded linear functional $F : C^1(T) \subset L^2(T) \to \mathbb{C}$. Here $T$ is the interval $[0, 2\pi]$, and $\phi$ is a test function of compact support, such that

$$\int_T f'\phi\, dx = -\int_T f \phi'\, dx, \qquad \forall \phi \in C^1(T).$$
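The integration-by-parts identity is easy to verify numerically on $T = [0, 2\pi]$ with a concrete $f$ and a smooth, compactly supported test function (a bump centered at $\pi$); both sides are computed as Riemann sums:

```python
import math

def phi(x):
    """Smooth bump supported inside (0, 2*pi), centered at pi."""
    u = x - math.pi
    return math.exp(1.0 / (u*u - 1.0)) if abs(u) < 1.0 else 0.0

def dphi(x):
    """Derivative of the bump: phi(x) * (-2u) / (u^2 - 1)^2."""
    u = x - math.pi
    return phi(x) * (-2.0 * u) / (u*u - 1.0)**2 if abs(u) < 1.0 else 0.0

f, fprime = math.sin, math.cos           # a smooth representative of H^1(T)
N = 4000
h = 2 * math.pi / N
lhs = sum(fprime(j * h) * phi(j * h) for j in range(N)) * h    # int f' phi
rhs = -sum(f(j * h) * dphi(j * h) for j in range(N)) * h       # -int f phi'
assert abs(lhs - rhs) < 1e-8
```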

We want to show that $\hat{E}(k)$ gains 2 derivatives in the Sobolev space $L_2$.

Theorem 1 (Weak Derivatives in $L_2$): $\hat{E}(k)$ gains 2 derivatives in the Sobolev space $L_2$, where

$$\hat{E}(k) = R_\lambda(k)\,\hat{J}(k). \quad (41)$$

Proof: From the definition of a finite Sobolev space $\mathbb{H}^2$,

$$\|\hat{E}(k)\|_{H^2(\mathbb{R}^3)}^2 = \|\hat{E}(k)\|_{L^2(\mathbb{R}^3)}^2 + \|D\hat{E}(k)\|_{L^2(\mathbb{R}^3)}^2 + \|D^2\hat{E}(k)\|_{L^2(\mathbb{R}^3)}^2 \quad (42)$$

where $C_c^\infty(\mathbb{R}^3)$ denotes the space of infinitely differentiable functions $\phi : \mathbb{R}^3 \to \mathbb{R}$ with compact support in $\mathbb{R}^3$. These functions $\phi$ are called test functions, and the derivatives $D(\hat{E})$ are called weak derivatives. The proof is independent of the choice of $\hat{J}(k)$. Use the following:

$$\hat{J}(k) = \begin{pmatrix} e_x\\ e_y\\ e_z \end{pmatrix} \quad (43)$$

Consider the first row of the matrix, since the other 2 rows are similar and, depending on $\hat{J}$, may not come into play. Differentiating under the integral sign is permissible since the integral is uniformly convergent:

$$r_1 = -k_o^2 + k_x^2 + k_z^2 - \lambda$$

$$D_{k_x}\hat{E}(k) = -x\, \frac{e^{-i k_x x - k_z z}}{k_o^2+\lambda} \begin{pmatrix} \int \sqrt{k_o^2-k_x^2+\lambda}\, \arctan\frac{k_z}{\sqrt{-k_o^2+k_x^2-\lambda}}\, dk_x & 0 & -\frac{1}{4}(k_o^2+\lambda)\, r_1(\ln r_1 - 1)\\[4pt] 0 & \int \frac{\arctan\frac{k_z}{\sqrt{-k_o^2+k_x^2-\lambda}}}{\sqrt{-k_o^2+k_x^2-\lambda}}\, dk_x & 0\\[4pt] \frac{1}{4}(k_o^2+\lambda)\, r_1(\ln r_1 - 1) & 0 & \int \frac{k_z}{k_o^2+\lambda-k_x^2}\, \frac{\arctan\frac{k_z}{\sqrt{-k_o^2+k_x^2-\lambda}}}{\sqrt{-k_o^2+k_x^2-\lambda}}\, dk_x \end{pmatrix}$$

$$D_{k_x}^2\hat{E}(k) = i x\, \frac{e^{-i k_x x - k_z z}}{k_o^2+\lambda} \left\{ \int \sqrt{k_o^2-k_x^2+\lambda}\, \arctan\frac{k_z}{\sqrt{-k_o^2+k_x^2-\lambda}}\, dk_x - \frac{1}{4} r_1(\ln r_1 - 1) - x\,\frac{e^{-i k_x x - k_z z}}{k_o^2+\lambda}\left[\sqrt{k_o^2-k_x^2+\lambda}\, \arctan\frac{k_z}{\sqrt{-k_o^2+k_x^2-\lambda}}\right],\;\; 0,\;\; -\frac{1}{4}(k_o^2+\lambda)(k_x \ln r_1) \right\} \quad (44)$$

Equation (43) is substituted into Equation (41), and the $L^2$ norms are then computed. The norms are locally integrable; the calculation of the $L^2$ norms requires additional work.

The resolvent provides insight into the spectrum of the operator [

I have shown that solving the vector Helmholtz equation on the wave-number side yields a relatively simple approach to the spectra $\sigma_p(A(k))$ and $\sigma_c(A(k))$. We demonstrated this approach by example and by simple mathematical expressions for the resolvent. The Sobolev approach, using a finite series expansion for $\hat{E}(k)$, is mathematically complex and requires as much work as the Green's function approach for $x \in \mathbb{R}^3$. Also, the same singularities are present in the Sobolev approach as in the Green's function approach. EM and QM problems yield highly singular solutions; this is simply the nature of these disciplines.

Future research should examine numerical evaluations of the $L^2(\mathbb{R}^3)$ norms using any $\phi \in C_c^\infty(\mathbb{R}^3)$ used in the Sobolev estimates.

An example testing function is

$$\phi(x) = \begin{cases} e^{\frac{1}{x^2-1}}, & |x| < 1\\ 0, & |x| \ge 1 \end{cases}$$
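A minimal check of this bump function's defining properties (peak value $e^{-1}$, vanishing for $|x| \ge 1$, rapid decay at the edge of the support):

```python
import math

def phi(x):
    """The testing function: exp(1/(x^2 - 1)) for |x| < 1, else 0."""
    return math.exp(1.0 / (x*x - 1.0)) if abs(x) < 1.0 else 0.0

assert phi(0.0) == math.exp(-1.0)        # peak value e^{-1}
assert phi(1.0) == 0.0 and phi(-2.0) == 0.0
assert phi(0.999) < 1e-100               # decays faster than any polynomial
```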

Also, future work could involve the study of the resolvent as a function of the given wave number $k_o$.

The author declares no conflicts of interest regarding the publication of this paper.

Ott, R. (2019) Reduced Vector Helmholtz Wave Equation Analysis on the Wave-Number Side. Journal of Electromagnetic Analysis and Applications, 11, 161-172. https://doi.org/10.4236/jemaa.2019.119011