New Formulas and Results for 3-Dimensional Vector Fields

New formulas are derived for once-differentiable 3-dimensional fields, using the operator $x\frac{\partial}{\partial x}+y\frac{\partial}{\partial y}+z\frac{\partial}{\partial z}$. This new operator has a property similar to that of the Laplacian operator; however, unlike the Laplacian operator, the new operator requires only once-differentiability. A simpler formula is derived for the classical Helmholtz decomposition. Orthogonality of the solenoidal and irrotational parts of a vector field, the uniqueness of the familiar inverse-square laws, and the existence of a solution of a system of first-order PDEs in 3 dimensions are proved. New proofs are given for the Helmholtz Decomposition Theorem and the Divergence Theorem. The proofs use the relations between the rectangular-Cartesian and spherical-polar coordinate systems. Finally, an application is made to the study of Maxwell's equations.


Introduction
In this article, the following new formula is derived, where $f:\mathbb{R}^3\to\mathbb{R}$ is a continuously differentiable function which vanishes at infinity:
$$f(0,0,0)=-\frac{1}{4\pi}\iiint_{\mathbb{R}^3}\frac{x\frac{\partial f}{\partial x}+y\frac{\partial f}{\partial y}+z\frac{\partial f}{\partial z}}{r^3}\,\mathrm{d}V\qquad(1)$$
where the derivatives are evaluated at $(x,y,z)$ and $r=\sqrt{x^2+y^2+z^2}$. Using these results, the following formula is proved (Theorem 9 below) for a 3-dimensional continuously differentiable vector field, i.e., a continuously differentiable function $F:\mathbb{R}^3\to\mathbb{R}^3$, which vanishes at infinity:
$$F(0,0,0)=-\frac{1}{4\pi}\iiint_{\mathbb{R}^3}\frac{(\nabla\cdot F)\,\vec r+(\nabla\times F)\times\vec r}{r^3}\,\mathrm{d}V\qquad(2)$$
where $\vec r=(x,y,z)$. This formula requires only one integration.
A formula similar to (2) appears in Stokes [1] and Blumenthal [2]; Blumenthal has "$\operatorname{grad}\frac{1}{r}$" in place of our "$\frac{\vec r}{r^3}$". Stokes (using modern notation) is looking for a vector field F whose divergence is a specified function which vanishes outside a "finite portion of space". He next seeks a vector field G, say, which has zero divergence and has, as its curl, a given function whose divergence is zero. The sum $F+G$ then has the specified divergence and curl.
Blumenthal assumes that the first partial derivatives of the vector field also vanish at infinity; he then proves uniqueness up to an additive constant function.
We also prove an extension of (2) for the "bounded" case (Theorem 12).
Another related formula [7] [8] [9] [10] [11], often called the "Grad-Curl Theorem" [9], is:
$$F=-\nabla\left[\frac{1}{4\pi}\iiint\frac{\nabla'\cdot F(\vec r\,')}{|\vec r-\vec r\,'|}\,\mathrm{d}V'\right]+\nabla\times\left[\frac{1}{4\pi}\iiint\frac{\nabla'\times F(\vec r\,')}{|\vec r-\vec r\,'|}\,\mathrm{d}V'\right]\qquad(4)$$
Its proof requires the vector field to be twice-differentiable in [7] [8] [9] but only once-differentiable in [10] [11], and the stronger "at infinity" condition to hold in all these references. All three formulas above immediately prove (a part of) Helmholtz's Theorem, which states that a 3-dimensional vector field is uniquely determined by its divergence and curl. Denote the two "parts" of F in our Formula (2) above by $F_{\nabla\cdot}$ and $F_{\nabla\times}$ (we will often use dV to denote volume integration), namely:
$$F_{\nabla\cdot}=-\frac{1}{4\pi}\iiint\frac{(\nabla\cdot F)\,\vec r}{r^3}\,\mathrm{d}V,\qquad F_{\nabla\times}=-\frac{1}{4\pi}\iiint\frac{(\nabla\times F)\times\vec r}{r^3}\,\mathrm{d}V.$$
$F_{\nabla\cdot}$ is usually called the irrotational or lamellar part of F, and $F_{\nabla\times}$ the solenoidal part of F. We also show (Theorem 17) that these two parts are orthogonal, i.e.,
$$\iiint F_{\nabla\cdot}\cdot F_{\nabla\times}\,\mathrm{d}V=0,$$
so that the decomposition implied by our Formula (2) is a "complementary-orthogonal" decomposition. In fact, we show a stronger result, namely, the irrotational part of one field is orthogonal to the solenoidal part of any field. This result may be related to Tellegen's Theorem of Electrical Network Theory. Further, we show that the "corresponding" parts in the three Formulas (2), (3), (4) are all equal. This follows from three new formulas (Theorems 13, 14, 15 below) regarding integrands with $\nabla$, $\nabla\cdot$, and $\nabla\times$ operators. Formula (4) is usually written more compactly:
$$F=-\nabla\varphi+\nabla\times A,$$
where $\varphi$, called the scalar potential associated with F, and A, called the vector potential associated with F, are given by:
$$\varphi(\vec r)=\frac{1}{4\pi}\iiint\frac{\nabla'\cdot F(\vec r\,')}{|\vec r-\vec r\,'|}\,\mathrm{d}V',\qquad A(\vec r)=\frac{1}{4\pi}\iiint\frac{\nabla'\times F(\vec r\,')}{|\vec r-\vec r\,'|}\,\mathrm{d}V'.$$
We derive interesting alternative expressions for the potentials which do not involve any $\nabla$, that is to say, differentiation, operation, namely:
$$\varphi(\vec r)=\frac{1}{4\pi}\iiint\frac{F(\vec r\,')\cdot(\vec r\,'-\vec r)}{|\vec r\,'-\vec r|^3}\,\mathrm{d}V',\qquad A(\vec r)=\frac{1}{4\pi}\iiint\frac{F(\vec r\,')\times(\vec r-\vec r\,')}{|\vec r-\vec r\,'|^3}\,\mathrm{d}V'.$$
These expressions seem to be new.
Of course, in the classical formulas above, the recovery of F is from its divergence and curl as sources, but not directly; it involves potentials as intermediaries, whereas in our formula, the recovery is more direct.
The proofs of Formula (4) in [7] [8] [9] use the Laplacian operator $\nabla^2=\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}$ and its property:
$$\nabla^2\iiint\frac{f(\vec r\,')}{|\vec r-\vec r\,'|}\,\mathrm{d}V'=-4\pi f,$$
where f is a scalar function and the derivatives are evaluated at $(x,y,z)$. This property requires twice-differentiability. The proofs of (4) in [10] [11] use a Green's function solution of the Poisson equation. In contrast, our proofs use the differential operator:
$$x\frac{\partial}{\partial x}+y\frac{\partial}{\partial y}+z\frac{\partial}{\partial z}$$
and its property (1) above, which does not seem to have been noticed before.
Our approach exploits some nice properties of the spherical-polar coordinate system in its relation with the rectangular coordinate system, resulting in some simple integrations (incidentally, Gauss [12] exploited these in his Memoir on the "inverse square force law" to prove that the potential function is twice-differentiable). Our derivations do not use the Dirac $\delta$-function, "singularity functions" like $\nabla\frac{1}{r}$ and $\nabla^2\frac{1}{r}$, or "$\delta$-function identities" (derivations that use the $\delta$-function are not necessarily shorter; as an example, see [13]). We also do not use the theory of distributions.
Interestingly, results similar to (1) hold for an analogous operator $x\frac{\partial}{\partial x}+y\frac{\partial}{\partial y}$ in two variables, and even for one in four variables. We prove an extension of Theorem 1 (Theorem 6) for bounded regions, involving volume and surface integrals, with a new, more natural definition of a region bounded by a surface, appropriate for spherical-polar coordinates. Application to 3-dimensional vector fields begins with Theorem 9, which gives our Formula (2). The technique used leads immediately to an extension (Theorem 12) of Theorem 9. It appears to be a better alternative to the usual formula for vector fields over bounded regions. We prove some new results (Theorems 13, 14, 15) on removing a derivative occurring inside an integral. Using them, we prove a property of irrotational and solenoidal parts (Theorem 16) and then their orthogonality (Theorem 17). An existence result (Theorem 18) is then easily proved regarding a simple system of first-order partial differential equations in 3 independent variables. This result is believed to be new. Helmholtz's Theorem is then proved (Theorems 19 and 20) in a new form and with weaker assumptions. We give a proof of the Divergence Theorem (Theorem 21) with our new definition of a closed surface. Finally, we make an application to Maxwell's equations.

Preliminaries
We start with the usual defining relations between the rectangular coordinates $(x,y,z)$ and the spherical-polar coordinates $(r,\theta,\varphi)$:
$$x=r\sin\theta\cos\varphi,\qquad y=r\sin\theta\sin\varphi,\qquad z=r\cos\theta.$$
Denote this transformation by T. Given a function $f(x,y,z)$, the notation $\tilde f(r,\theta,\varphi)$ is used to indicate the two different meanings (we could also write $f\circ T$ for $\tilde f$, where "$\circ$" denotes the composition of two functions). We will say that the function $\tilde f$ is associated with the function f.
We note the following relations, between the derivatives corresponding to the coordinate variables of the two systems, involving the Jacobian matrix $J_T$ corresponding to the transformation T:
$$\begin{bmatrix}\partial\tilde f/\partial r\\ \partial\tilde f/\partial\theta\\ \partial\tilde f/\partial\varphi\end{bmatrix}=J_T^{\mathrm T}\begin{bmatrix}\partial f/\partial x\\ \partial f/\partial y\\ \partial f/\partial z\end{bmatrix},\qquad(2.2)$$
where the rows of $J_T^{\mathrm T}$ involve the familiar unit vectors
$$\hat r=(\sin\theta\cos\varphi,\ \sin\theta\sin\varphi,\ \cos\theta),\quad\hat\theta=(\cos\theta\cos\varphi,\ \cos\theta\sin\varphi,\ -\sin\theta),\quad\hat\varphi=(-\sin\varphi,\ \cos\varphi,\ 0).$$
These latter relations will be used in our proofs below. They could be obtained "directly", but the matrix-inversion route is easier. We have, in particular:
$$\frac{\partial\tilde f}{\partial r}=\sin\theta\cos\varphi\,\frac{\partial f}{\partial x}+\sin\theta\sin\varphi\,\frac{\partial f}{\partial y}+\cos\theta\,\frac{\partial f}{\partial z}=\frac{1}{r}\left(x\frac{\partial f}{\partial x}+y\frac{\partial f}{\partial y}+z\frac{\partial f}{\partial z}\right)$$
and
$$\frac{\partial\tilde f}{\partial\varphi}=x\frac{\partial f}{\partial y}-y\frac{\partial f}{\partial x}.$$
There is no such nice relation for $\partial\tilde f/\partial\theta$.
Remark 2: If the action of a time-dependent source field $f(x,y,z,t)$ is delayed or advanced in time, the associated delayed/advanced function $\tilde f(r,\theta,\varphi,t)$ is defined as follows:
$$\tilde f(r,\theta,\varphi,t)=f\!\left(r\sin\theta\cos\varphi,\ r\sin\theta\sin\varphi,\ r\cos\theta,\ t-\frac{r}{v}\right),$$
where v is a "speed" parameter; $v>0$ for delayed action, and $v<0$ for advanced action. We then have:
$$\frac{\partial\tilde f}{\partial r}=\frac{1}{r}\left(x\frac{\partial f}{\partial x}+y\frac{\partial f}{\partial y}+z\frac{\partial f}{\partial z}\right)-\frac{1}{v}\frac{\partial f}{\partial t},$$
and so, in Equation (2.2) above, we have, in the column on the right-hand side, an additional term $-\frac{1}{v}\frac{\partial f}{\partial t}$ in the first entry. All the derivatives are evaluated at the retarded time argument $t-r/v$. This simple modification leads to simple changes in the results below.
Note that $\frac{\partial\tilde f}{\partial r}$ is then no longer simply $\frac{1}{r}$ times the image of f under $x\frac{\partial}{\partial x}+y\frac{\partial}{\partial y}+z\frac{\partial}{\partial z}$. We could, of course, introduce a modified operator:
$$x\frac{\partial}{\partial x}+y\frac{\partial}{\partial y}+z\frac{\partial}{\partial z}-\frac{r}{v}\frac{\partial}{\partial t}.$$

The Basic Result
We are now ready to prove a special case of the basic result (1) mentioned in the Introduction.
Theorem 1: If $f:\mathbb{R}^3\to\mathbb{R}$ is continuously differentiable and vanishes at infinity, then:
$$f(0,0,0)=-\frac{1}{4\pi}\iiint_{\mathbb{R}^3}\frac{x\frac{\partial f}{\partial x}+y\frac{\partial f}{\partial y}+z\frac{\partial f}{\partial z}}{r^3}\,\mathrm{d}V\qquad(16)$$
where the numerator of the integrand is to be evaluated at $(x,y,z)$. The integrand is undefined at $(0,0,0)$.
Remark 3: The integral is, of course, an "improper" integral, and is to be understood as the limit of a "definite" integral extended over the compact set $\{(x,y,z):\varepsilon\le r\le R\}$ as $\varepsilon\to 0$ and $R\to\infty$. We need the $\varepsilon$ since the integrand is not well-defined at $(0,0,0)$.
Proof: The "change of variables formula for multidimensional integrals" [14] tells us that, for a function f:
$$\iiint f\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z=\iiint\tilde f\,r^2\sin\theta\,\mathrm{d}r\,\mathrm{d}\theta\,\mathrm{d}\varphi.$$
Now $\dfrac{x f_x+y f_y+z f_z}{r^3}=\dfrac{1}{r^2}\,\dfrac{\partial\tilde f}{\partial r}$, so that we have:
$$\iiint_{\mathbb{R}^3}\frac{x f_x+y f_y+z f_z}{r^3}\,\mathrm{d}V=\int_0^{2\pi}\!\!\int_0^{\pi}\!\!\int_0^{\infty}\frac{\partial\tilde f}{\partial r}\,\sin\theta\,\mathrm{d}r\,\mathrm{d}\theta\,\mathrm{d}\varphi=\int_0^{2\pi}\!\!\int_0^{\pi}\left[\lim_{R\to\infty}\tilde f(R,\theta,\varphi)-\tilde f(0,\theta,\varphi)\right]\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\varphi$$
$$=-f(0,0,0)\int_0^{2\pi}\!\!\int_0^{\pi}\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\varphi=-4\pi f(0,0,0).$$
In the above sequence of calculations, we have changed a 3-dimensional integral into a succession of integrals and changed the order of integration, which is permissible because all the intervals of integration are finite "intervals". We will skip many intermediate steps in later derivations. ∎
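Theorem 1 lends itself to a direct numerical check. The following sketch (the test function $f=e^{-(x^2+2y^2+3z^2)}$ and all quadrature sizes are arbitrary, illustrative choices) evaluates $-\frac{1}{4\pi}\iiint\left(x f_x+y f_y+z f_z\right)r^{-3}\,\mathrm{d}V$ by midpoint quadrature in spherical-polar coordinates:

```python
import math

# Arbitrary test function: smooth, vanishes at infinity, f(0,0,0) = 1.
def f(x, y, z):
    return math.exp(-(x*x + 2*y*y + 3*z*z))

def grad_f(x, y, z):
    g = f(x, y, z)
    return (-2*x*g, -4*y*g, -6*z*g)

def recover_f_at_origin(nr=200, nth=40, nph=40, R=6.0):
    """Midpoint quadrature of -(1/4pi) * (x f_x + y f_y + z f_z)/r^3 dV.
    The Jacobian r^2 sin(theta) cancels the 1/r^2, so the integrand stays
    bounded near r = 0, exactly as in the proof."""
    dr, dth, dph = R / nr, math.pi / nth, 2 * math.pi / nph
    total = 0.0
    for i in range(nr):
        r = (i + 0.5) * dr
        for j in range(nth):
            th = (j + 0.5) * dth
            st, ct = math.sin(th), math.cos(th)
            for k in range(nph):
                ph = (k + 0.5) * dph
                x, y, z = r * st * math.cos(ph), r * st * math.sin(ph), r * ct
                fx, fy, fz = grad_f(x, y, z)
                # (x f_x + y f_y + z f_z)/r^3 times the Jacobian r^2 sin(theta)
                total += (x * fx + y * fy + z * fz) / r * st
    return -total * dr * dth * dph / (4 * math.pi)

# recover_f_at_origin() is close to f(0,0,0) = 1
```

As in the proof, only the r-variable needed any care; the angular integrations are over smooth, bounded integrands.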
Remark 4: In the computations above we see the advantages of the spherical-polar coordinates over the rectangular. The multiple integral is reduced to iterated integrals. The improperness of the integral can be handled with limits on only one variable, namely, r. One can see also how the number $\pi$ makes its appearance in the formula. Of course, unlike x, y, z, there is no symmetry between r, $\theta$, $\varphi$.
Remark 5: Obviously, the Laplacian result assumes that f is twice-differentiable, whereas our Theorem 1 requires f to be only once-differentiable.
Remark 6: Our basic result can be interpreted as saying that the differential operator $\nabla$, or gradient, has an inverse which is an integral operator, a result similar to the "Fundamental Theorem of the Calculus of One Variable". It is also a little surprising that the integral operator involves integration over all space, whereas one can obtain the difference between the values of the function f at two points as a line integral of the gradient:
$$f(\vec b)-f(\vec a)=\int_{\vec a}^{\vec b}\nabla f\cdot\mathrm{d}\vec l.$$
It would be interesting to relate the volume and line integrals.
We have a Green-like identity, which we state as a Corollary, involving two functions f and h.
Corollary 1: With similar assumptions about f and h,
$$f(0,0,0)\,h(0,0,0)=-\frac{1}{4\pi}\iiint_{\mathbb{R}^3}\frac{f\left(x h_x+y h_y+z h_z\right)+h\left(x f_x+y f_y+z f_z\right)}{r^3}\,\mathrm{d}V.$$
We simply use the distributive (product-rule) property of our operator, namely:
$$x\,\frac{\partial(fh)}{\partial x}+y\,\frac{\partial(fh)}{\partial y}+z\,\frac{\partial(fh)}{\partial z}=f\left(x h_x+y h_y+z h_z\right)+h\left(x f_x+y f_y+z f_z\right).$$
Remark 7: We could put a multiplier $\psi(r)$, say, with the integrand to enable us to treat modifications of the inverse square law of force, such as are involved in the Yukawa potential. We then have the following theorem, which can be proved along the lines of the proof of Theorem 1.
Theorem 2: If, in addition, $\psi$ is continuously differentiable, then:
$$-\frac{1}{4\pi}\iiint_{\mathbb{R}^3}\psi(r)\,\frac{x f_x+y f_y+z f_z}{r^3}\,\mathrm{d}V=\psi(0)\,f(0,0,0)+\frac{1}{4\pi}\iiint_{\mathbb{R}^3}\frac{\psi'(r)\,f}{r^2}\,\mathrm{d}V.$$
Interestingly, we have results similar to Theorem 1 in one, two, and even four independent variables. The result for one variable is easy to see, namely:
Theorem 1 (R): If $f:\mathbb{R}\to\mathbb{R}$ is continuously differentiable and vanishes at infinity, then:
$$f(0)=-\frac{1}{2}\int_{-\infty}^{\infty}\frac{x\,f'(x)}{|x|}\,\mathrm{d}x.$$
We next have a similar result in two variables, namely:
Theorem 1 (R²): If $f:\mathbb{R}^2\to\mathbb{R}$ is continuously differentiable and vanishes at infinity, then:
$$f(0,0)=-\frac{1}{2\pi}\iint_{\mathbb{R}^2}\frac{x f_x+y f_y}{r^2}\,\mathrm{d}A.$$
The usual rectangular-to-polar transformation has, for the Jacobian determinant $J_T$, the value r, and so we need only $r^2$ in the denominator of the integrand. The rest of the calculations proceed as in the proof of Theorem 1. Note that we have a multiplier $-2\pi$ for $f(0,0)$, in place of $-4\pi$. Perhaps this can be used to derive new results for the plane. To prove a similar result for four variables, we need an unusual coordinate transformation which, however, has the desired features of the usual rectangular-to-spherical transformation in two and three variables. We note the following simple fact:
$$(\cos\theta\cos\varphi)^2+(\cos\theta\sin\varphi)^2+(\sin\theta\cos\psi)^2+(\sin\theta\sin\psi)^2=1,$$
which suggests the transformation:
$$x_1=r\cos\theta\cos\varphi,\quad x_2=r\cos\theta\sin\varphi,\quad x_3=r\sin\theta\cos\psi,\quad x_4=r\sin\theta\sin\psi,$$
with $0\le\theta\le\pi/2$ and $0\le\varphi,\psi\le 2\pi$, whose Jacobian determinant is $r^3\sin\theta\cos\theta$. We then have:
Theorem 1 (R⁴): If $f:\mathbb{R}^4\to\mathbb{R}$ is continuously differentiable and vanishes at infinity, then:
$$f(0,0,0,0)=-\frac{1}{2\pi^2}\iiiint_{\mathbb{R}^4}\frac{x_1 f_{x_1}+x_2 f_{x_2}+x_3 f_{x_3}+x_4 f_{x_4}}{r^4}\,\mathrm{d}V.$$
Proof: Note that we have $r^4$ in the denominator and the multiplier $-2\pi^2$ of $f(0,0,0,0)$. We use the fact that:
$$\int_0^{\pi/2}\sin\theta\cos\theta\,\mathrm{d}\theta=\frac{1}{2},$$
so that the full angular integration contributes $\frac{1}{2}\cdot 2\pi\cdot 2\pi=2\pi^2$. Perhaps this can be used to derive new results for space-time.
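The one- and two-variable analogues, $f(0)=-\frac{1}{2}\int x f'(x)|x|^{-1}\,\mathrm dx$ and $f(0,0)=-\frac{1}{2\pi}\iint(x f_x+y f_y)r^{-2}\,\mathrm dA$, can be checked in the same illustrative way (the test functions below are arbitrary choices):

```python
import math

# 1D analogue: f(0) = -(1/2) * integral of x f'(x)/|x| dx, i.e. of sgn(x) f'(x).
def f1(x):
    return (1.0 + x) * math.exp(-x * x)          # arbitrary; f1(0) = 1

def f1p(x):
    return (1.0 - 2.0 * x * (1.0 + x)) * math.exp(-x * x)

def recover_1d(n=4000, L=8.0):
    h = 2.0 * L / n
    s = sum(math.copysign(1.0, x) * f1p(x)
            for x in ((i + 0.5) * h - L for i in range(n)))
    return -0.5 * s * h

# 2D analogue: f(0,0) = -(1/2pi) * double integral of (x f_x + y f_y)/r^2 dA.
def recover_2d(nr=400, nph=80, R=6.0):
    dr, dph = R / nr, 2.0 * math.pi / nph
    total = 0.0
    for i in range(nr):
        r = (i + 0.5) * dr
        for k in range(nph):
            ph = (k + 0.5) * dph
            x, y = r * math.cos(ph), r * math.sin(ph)
            g = math.exp(-(x * x + 2.0 * y * y))  # arbitrary; value 1 at origin
            fx, fy = -2.0 * x * g, -4.0 * y * g
            # (x f_x + y f_y)/r^2 times the polar Jacobian r
            total += (x * fx + y * fy) / r
    return -total * dr * dph / (2.0 * math.pi)

# recover_1d() and recover_2d() are both close to 1
```

In the two-variable case one sees the multiplier $2\pi$ emerge from the single angular integration, just as $4\pi$ emerged in three variables.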

Extensions of the Basic Result
Next, using the slightly modified expression for $\partial\tilde f/\partial r$ noted above for delayed/advanced action, we immediately have:
Theorem 3 (Theorem 1 with delayed/advanced action):
$$f(0,0,0,t)=-\frac{1}{4\pi}\iiint_{\mathbb{R}^3}\left[\frac{x f_x+y f_y+z f_z}{r^3}-\frac{1}{v\,r^2}\,\frac{\partial f}{\partial t}\right]\mathrm{d}V,$$
where all the derivatives are evaluated at the retarded time argument $t-r/v$. Thus, we can have a recovery of a function through partial derivatives evaluated at a retarded time argument, if we wish. We can easily prove a generalization of Theorem 1 to obtain two slightly different formulas for the value of f at points other than the origin.
Theorem 4: If f is continuously differentiable, vanishes at infinity, and $(a,b,c)\in\mathbb{R}^3$, then:
$$f(a,b,c)=-\frac{1}{4\pi}\iiint_{\mathbb{R}^3}\frac{x f_x+y f_y+z f_z}{r^3}\,\mathrm{d}V\qquad(17)$$
where the partial derivatives are evaluated at $(x+a,y+b,z+c)$, and also:
$$f(a,b,c)=-\frac{1}{4\pi}\iiint_{\mathbb{R}^3}\frac{(x-a)f_x+(y-b)f_y+(z-c)f_z}{\left[(x-a)^2+(y-b)^2+(z-c)^2\right]^{3/2}}\,\mathrm{d}V\qquad(18)$$
where the partial derivatives are evaluated at $(x,y,z)$.
Note that in (18) above, a slightly different operator, dependent on $(a,b,c)$, appears. The form of the integral in (18) is convenient for interpretation and computation, whereas the form in (17) is useful for derivations where the integral needs to be differentiated with respect to a, b, c, which appear as parameters.
Proof: Define a related function $\bar f$ by:
$$\bar f(x,y,z)=f(x+a,\ y+b,\ z+c).$$
Applying Theorem 1 to $\bar f$, we get:
$$\bar f(0,0,0)=-\frac{1}{4\pi}\iiint_{\mathbb{R}^3}\frac{x\,\bar f_x+y\,\bar f_y+z\,\bar f_z}{r^3}\,\mathrm{d}V,$$
and the partial derivatives are evaluated at $(x,y,z)$; keeping in view the definition of $\bar f$, we have $\bar f(0,0,0)=f(a,b,c)$ and $\bar f_x(x,y,z)=f_x(x+a,y+b,z+c)$. Similar relations hold for the other two partial derivatives. This gives (17), and a change of variables then gives (18). ∎
Remark 8: The function $\bar f$ is a translation of the function f. In his calculation of the derivative of a potential function, Gauss used this idea to show that the potential of a "mass distribution" at any point has the same value as the potential, at the origin or any chosen reference point, of a suitably translated distribution. We could prove the result above by using a translation of the rectangular coordinates, i.e., by changing the origin. Another approach would be to use a non-standard spherical-polar to rectangular coordinate transformation of the kind that we will use later on.

A Basic Result for Related Operators
We state a result for the operator $x\frac{\partial}{\partial y}-y\frac{\partial}{\partial x}$ that follows from the fact:
$$\frac{\partial\tilde f}{\partial\varphi}=x\frac{\partial f}{\partial y}-y\frac{\partial f}{\partial x}.$$
Result: Under the conditions of Theorem 1,
$$\iiint_{\mathbb{R}^3}\frac{x f_y-y f_x}{r^3}\,\mathrm{d}V=0.$$
Proof: The integral is equal to:
$$\int_0^{\pi}\!\!\int_0^{\infty}\!\!\int_0^{2\pi}\frac{\partial\tilde f}{\partial\varphi}\,\frac{\sin\theta}{r}\,\mathrm{d}\varphi\,\mathrm{d}r\,\mathrm{d}\theta=0,$$
since the inner integral, over the full period of $\varphi$, vanishes. ∎
One can guess two more results like the one above, namely:
$$\iiint_{\mathbb{R}^3}\frac{y f_z-z f_y}{r^3}\,\mathrm{d}V=0,\qquad\iiint_{\mathbb{R}^3}\frac{z f_x-x f_z}{r^3}\,\mathrm{d}V=0.$$
To prove these, we could, once again, talk about a change of variables, this time a permutation of the variables, from x, y, z to, say, z, y, x: define a related function $f'$, use the result proved above for $f'$, and finally appeal to the "dummy variables" idea. A better approach would be to use yet another spherical-polar to rectangular coordinate transformation, namely:
$$x=r\cos\theta,\qquad y=r\sin\theta\sin\varphi,\qquad z=r\sin\theta\cos\varphi,$$
and then proceed as in the proof above to derive:
$$\frac{\partial\tilde f}{\partial\varphi}=z\frac{\partial f}{\partial y}-y\frac{\partial f}{\partial z}.$$
Indeed, the usual definitions of $\theta$ and $\varphi$ come from spherical astronomy, where one talks about "declination" and "azimuth" angles, the zenith being in the z-direction.
We can prove the same result using the standard spherical-polar coordinates as follows.
We collect these 3 results together as a Theorem.
Theorem 5 (basic result for related operators): Under the assumption that f has continuous partial derivatives,
$$\iiint_{\mathbb{R}^3}\frac{y f_z-z f_y}{r^3}\,\mathrm{d}V=\iiint_{\mathbb{R}^3}\frac{z f_x-x f_z}{r^3}\,\mathrm{d}V=\iiint_{\mathbb{R}^3}\frac{x f_y-y f_x}{r^3}\,\mathrm{d}V=0.$$
Remark 10: These three results show that the three first-order partial derivatives of f are not totally "independent" of one another, even though second-order partial derivatives may not exist. Of course, we do assume that the first-order derivatives are continuous. Also, like Theorem 1, each of these results has two versions. They are crucial in our derivation of the new alternative to Poisson's Formula. They do not seem to have been noted before.
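Theorem 5 can likewise be checked numerically. The sketch below (with the arbitrary test function $f=(1+x)e^{-r^2}$) evaluates all three integrals, and also the $L^1$ size of one integrand, to confirm that the integrals vanish without the integrands vanishing pointwise:

```python
import math

# Arbitrary, asymmetric test function vanishing at infinity.
def grad_f(x, y, z):
    g = math.exp(-(x * x + y * y + z * z))       # f = (1+x) g
    return (g * (1.0 - 2.0 * x * (1.0 + x)),
            -2.0 * y * (1.0 + x) * g,
            -2.0 * z * (1.0 + x) * g)

def rotational_integrals(nr=150, nth=40, nph=40, R=6.0):
    """Midpoint quadrature, in spherical-polar coordinates, of the integrals of
    (y f_z - z f_y)/r^3, (z f_x - x f_z)/r^3, (x f_y - y f_x)/r^3, plus the
    L1 norm of the last integrand as a sanity check."""
    dr, dth, dph = R / nr, math.pi / nth, 2 * math.pi / nph
    s1 = s2 = s3 = l1 = 0.0
    for i in range(nr):
        r = (i + 0.5) * dr
        for j in range(nth):
            th = (j + 0.5) * dth
            st, ct = math.sin(th), math.cos(th)
            for k in range(nph):
                ph = (k + 0.5) * dph
                x, y, z = r * st * math.cos(ph), r * st * math.sin(ph), r * ct
                fx, fy, fz = grad_f(x, y, z)
                w = st / r           # (1/r^3) times the Jacobian r^2 sin(theta)
                s1 += (y * fz - z * fy) * w
                s2 += (z * fx - x * fz) * w
                s3 += (x * fy - y * fx) * w
                l1 += abs((x * fy - y * fx) * w)
    c = dr * dth * dph
    return s1 * c, s2 * c, s3 * c, l1 * c

# the three integrals are near 0 while the L1 norm is of order 1
```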
We can combine the 3 equations into a single vector equation:
$$\iiint_{\mathbb{R}^3}\frac{\vec r\times\nabla f}{r^3}\,\mathrm{d}V=\vec 0.$$
In contrast, we could write our basic result as:
$$\iiint_{\mathbb{R}^3}\frac{\vec r\cdot\nabla f}{r^3}\,\mathrm{d}V=-4\pi f(0,0,0).$$

Basic Results for Bounded and Unbounded Regions
Theorem 1 above involved a volume integral extended over the whole space. We now define the bounded regions we shall work with. Let S be a positive real-valued function, bounded away from 0, of two variables $\theta$ and $\varphi$, with $0\le\theta\le\pi$ and $0\le\varphi\le 2\pi$, i.e., $S(\theta,\varphi)>\varepsilon$ for some $\varepsilon>0$; we mean by the surface S the set:
$$\{(r,\theta,\varphi):\ r=S(\theta,\varphi),\ 0\le\theta\le\pi,\ 0\le\varphi\le 2\pi\}$$
in spherical coordinates, and the set:
$$\{(S(\theta,\varphi)\sin\theta\cos\varphi,\ S(\theta,\varphi)\sin\theta\sin\varphi,\ S(\theta,\varphi)\cos\theta):\ 0\le\theta\le\pi,\ 0\le\varphi\le 2\pi\}$$
in rectangular coordinates.
By the region V bounded by the surface S we mean the set:
$$\{(r,\theta,\varphi):\ 0\le r\le S(\theta,\varphi),\ 0\le\theta\le\pi,\ 0\le\varphi\le 2\pi\}$$
in spherical coordinates, and the set:
$$\{(\rho\sin\theta\cos\varphi,\ \rho\sin\theta\sin\varphi,\ \rho\cos\theta):\ 0\le\rho\le S(\theta,\varphi),\ 0\le\theta\le\pi,\ 0\le\varphi\le 2\pi\}$$
in rectangular coordinates. Note that the origin is an interior point of V and that each ray from the origin meets the surface in only one point.
Theorem 6: If f is continuously differentiable on V, then:
$$f(0,0,0)=\frac{1}{4\pi}\int_0^{2\pi}\!\!\int_0^{\pi}g(S(\theta,\varphi),\theta,\varphi)\,\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\varphi-\frac{1}{4\pi}\iiint_V\frac{x f_x+y f_y+z f_z}{r^3}\,\mathrm{d}V,$$
where we denote by g the function associated with f.
Proof: We have:
$$\iiint_V\frac{x f_x+y f_y+z f_z}{r^3}\,\mathrm{d}V=\int_0^{2\pi}\!\!\int_0^{\pi}\!\!\int_0^{S(\theta,\varphi)}\frac{\partial g}{\partial r}\,\sin\theta\,\mathrm{d}r\,\mathrm{d}\theta\,\mathrm{d}\varphi=\int_0^{2\pi}\!\!\int_0^{\pi}\left[g(S(\theta,\varphi),\theta,\varphi)-f(0,0,0)\right]\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\varphi,$$
and the result follows since $\iint\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\varphi=4\pi$. ∎
The theorem can be interpreted as follows. Given a region bounded by a surface, the value of a once-differentiable scalar function (field) at an interior point is uniquely determined by the values of the function on the surface and the values of its partial derivatives in the region. It thus gives a "formula" for determining the value at an interior point. Note that the 2-dimensional integral above is not the usual surface integral. Further, instead of the whole region V bounded by the surface S, we could consider a cone with vertex at the origin and terminating on the surface and get a partial surface integral. We could then obtain an expression for the value of the function at a point outside the surface.
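The bounded-region recovery, $f(0,0,0)=\frac{1}{4\pi}\iint\tilde f(S(\theta,\varphi),\theta,\varphi)\sin\theta\,\mathrm d\theta\,\mathrm d\varphi-\frac{1}{4\pi}\iiint_V(x f_x+y f_y+z f_z)\,r^{-3}\,\mathrm dV$, can also be checked numerically; the star-shaped surface S and the test function below are arbitrary illustrative choices:

```python
import math

def f(x, y, z):
    return (1.0 + x) * math.exp(-(x * x + y * y + z * z))   # f(0,0,0) = 1

def grad_f(x, y, z):
    g = math.exp(-(x * x + y * y + z * z))
    return (g * (1.0 - 2.0 * x * (1.0 + x)),
            -2.0 * y * (1.0 + x) * g,
            -2.0 * z * (1.0 + x) * g)

def S(th, ph):
    # an arbitrary surface r = S(theta, phi), bounded away from 0
    return 1.5 + 0.4 * math.sin(th) * math.cos(ph)

def bounded_recovery(nr=80, nth=60, nph=60):
    dth, dph = math.pi / nth, 2 * math.pi / nph
    surf = vol = 0.0
    for j in range(nth):
        th = (j + 0.5) * dth
        st, ct = math.sin(th), math.cos(th)
        for k in range(nph):
            ph = (k + 0.5) * dph
            s = S(th, ph)
            # surface term: f on the surface, weighted by sin(theta)
            surf += f(s * st * math.cos(ph), s * st * math.sin(ph), s * ct) * st
            # volume term: (df/dr) sin(theta) integrated along the ray 0..S
            dr = s / nr
            for i in range(nr):
                r = (i + 0.5) * dr
                x, y, z = r * st * math.cos(ph), r * st * math.sin(ph), r * ct
                fx, fy, fz = grad_f(x, y, z)
                vol += (x * fx + y * fy + z * fz) / r * st * dr
    return (surf - vol) * dth * dph / (4 * math.pi)

# bounded_recovery() is close to f(0,0,0) = 1
```

Note how naturally the definition of the region fits the quadrature: each $(\theta,\varphi)$ ray is integrated from the origin out to $S(\theta,\varphi)$, with no projections onto coordinate planes.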
Remark 12: With an appropriate definition of the "vector element of surface area" $\mathrm{d}\vec S$, we can write the 2-dimensional integral on the right-hand side above as a "surface integral":
$$\frac{1}{4\pi}\iint_S f\,\frac{\vec r\cdot\mathrm{d}\vec S}{r^3}.$$
The vector element $\mathrm{d}\vec S$ is also written as $\mathrm{d}S\,\hat n$, where dS is the "magnitude" of the surface element and $\hat n$ is the unit normal vector. However, the "surface integral" is harder to visualize and calculate.
Proof: S is not a spherical surface in general. So we have to define what we could mean by the magnitude dS of the surface element and the unit normal n to it. It turns out to be easier to define the vector surface element dS  (it will be used later in our proof of the Divergence Theorem with our definition of surface).
Consider a "quadrilateral" ABCD on the surface, with the 4 corners determined by the 4 pairs of angles $(\theta,\varphi)$, $(\theta+\mathrm{d}\theta,\varphi)$, $(\theta+\mathrm{d}\theta,\varphi+\mathrm{d}\varphi)$, $(\theta,\varphi+\mathrm{d}\varphi)$. Writing the position vector of a surface point as $\vec R=S(\theta,\varphi)\,\hat r$, the sides AD and AB are, to first order, the vectors $\frac{\partial\vec R}{\partial\theta}\,\mathrm{d}\theta$ and $\frac{\partial\vec R}{\partial\varphi}\,\mathrm{d}\varphi$.
We then define the vector surface element $\mathrm{d}\vec S$ as: $\mathrm{d}\vec S=\vec{AD}\times\vec{AB}$, which turns out to be (writing S in place of $S(\theta,\varphi)$ for easy readability, and $S_\theta$, $S_\varphi$ for its partial derivatives):
$$\mathrm{d}\vec S=\left[S^2\sin\theta\,\hat r-S\,S_\theta\sin\theta\,\hat\theta-S\,S_\varphi\,\hat\varphi\right]\mathrm{d}\theta\,\mathrm{d}\varphi.$$
Note that $\vec r\cdot\mathrm{d}\vec S=S^3\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\varphi$, which, divided by $r^3=S^3$ on the surface, recovers the element $\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\varphi$ of Remark 12. Finally, considering the possibility that S is a surface of discontinuity for f, we split the region of integration for r into two parts, $r<S$ and $r>S$, and with this understanding for the volume integral, the formulas above continue to hold, the surface value $g(S(\theta,\varphi),\theta,\varphi)$ being understood as the limit from the inside.

A New Formula for the Unbounded Case
We now turn to 3-dimensional vector fields, i.e., functions $F:\mathbb{R}^3\to\mathbb{R}^3$. We can immediately extend the results for scalar fields to vector fields by considering a 3-dimensional vector-valued function F as a set of 3 scalar-valued functions $F_x$, $F_y$, $F_z$ and using three recovery formulas, from Theorem 4:
$$F_x(0,0,0)=-\frac{1}{4\pi}\iiint_{\mathbb{R}^3}\frac{x\,\partial_x F_x+y\,\partial_y F_x+z\,\partial_z F_x}{r^3}\,\mathrm{d}V,$$
and similarly for $F_y$ and $F_z$. This recovery involves nine partial derivatives, but only three combinations of these appear in each recovery formula. However, using Theorem 5, putting a multiplier $-\frac{1}{4\pi}$, we have:
$$-\frac{1}{4\pi}\iiint_{\mathbb{R}^3}\frac{x\,\partial_y F_y-y\,\partial_x F_y}{r^3}\,\mathrm{d}V=0,\qquad-\frac{1}{4\pi}\iiint_{\mathbb{R}^3}\frac{x\,\partial_z F_z-z\,\partial_x F_z}{r^3}\,\mathrm{d}V=0.$$
"Adding" the left-hand sides of these two equations to the right-hand side of the formula for $F_x(0,0,0)$, we get:
$$F_x(0,0,0)=-\frac{1}{4\pi}\iiint_{\mathbb{R}^3}\frac{x\left(\partial_x F_x+\partial_y F_y+\partial_z F_z\right)+z\left(\partial_z F_x-\partial_x F_z\right)-y\left(\partial_x F_y-\partial_y F_x\right)}{r^3}\,\mathrm{d}V.$$
We can obtain similar expressions for $F_y(0,0,0)$ and $F_z(0,0,0)$. We recognize that $\partial_x F_x+\partial_y F_y+\partial_z F_z$ is the divergence of F, i.e., $\nabla\cdot F$. We also see that the remaining combinations are the components of $(\nabla\times F)\times\vec r$. Consideration of the other components of F, and using the notation of Theorem 4, we obtain, if the weaker regularity condition that F vanishes at infinity holds:
Theorem 9:
$$F(0,0,0)=-\frac{1}{4\pi}\iiint_{\mathbb{R}^3}\frac{(\nabla\cdot F)\,\vec r+(\nabla\times F)\times\vec r}{r^3}\,\mathrm{d}V.\qquad(2)$$
This theorem is our new alternative to Jefimenko's formula stated in the Introduction. Note, however, that with retarded action, not only the divergence and the curl but also the time-derivative of the field appear as "sources".
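The recovery formula of this section, $F(0,0,0)=-\frac{1}{4\pi}\iiint\left[(\nabla\cdot F)\,\vec r+(\nabla\times F)\times\vec r\right]r^{-3}\,\mathrm dV$, can be checked numerically. The test field $F=(g,\,xg,\,xyg)$ with $g=e^{-r^2}$ below is an arbitrary choice with nonzero divergence and curl and with $F(0,0,0)=(1,0,0)$; its divergence and curl are computed by hand:

```python
import math

def div_curl(x, y, z):
    g = math.exp(-(x * x + y * y + z * z))
    # For F = (g, x g, x y g), differentiating by hand:
    div = -2.0 * g * (x + x * y + x * y * z)
    curl = (g * (x - 2.0 * x * y * y + 2.0 * x * z),
            g * (-2.0 * z - y + 2.0 * x * x * y),
            g * (1.0 - 2.0 * x * x + 2.0 * y))
    return div, curl

def recover_F_at_origin(nr=150, nth=36, nph=36, R=5.0):
    """-(1/4pi) * integral of [(div F) r_vec + (curl F) x r_vec]/r^3 dV,
    evaluated by midpoint quadrature in spherical-polar coordinates."""
    dr, dth, dph = R / nr, math.pi / nth, 2 * math.pi / nph
    sx = sy = sz = 0.0
    for i in range(nr):
        r = (i + 0.5) * dr
        for j in range(nth):
            th = (j + 0.5) * dth
            st, ct = math.sin(th), math.cos(th)
            for k in range(nph):
                ph = (k + 0.5) * dph
                x, y, z = r * st * math.cos(ph), r * st * math.sin(ph), r * ct
                d, (cx, cy, cz) = div_curl(x, y, z)
                w = st / r      # (1/r^3) times the Jacobian r^2 sin(theta)
                sx += (d * x + cy * z - cz * y) * w
                sy += (d * y + cz * x - cx * z) * w
                sz += (d * z + cx * y - cy * x) * w
    c = -dr * dth * dph / (4 * math.pi)
    return sx * c, sy * c, sz * c

# recover_F_at_origin() is close to F(0,0,0) = (1, 0, 0)
```

Only one integration is performed, directly from the divergence and curl as sources, with no intermediate potentials.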

New Formula for a Bounded Region
To prove our new formula above, we applied Theorem 4 and Theorem 5 to the components of F and then combined the results appropriately. To obtain a new formula for a bounded region, we will need an extension of Theorem 5 above, which we now state and prove.
We prove only the first of the three equations above. Indeed, in spherical-polar coordinates the integral becomes an iterated integral in which the upper limit on r is $S(\theta,\varphi)$. But here we cannot integrate with respect to $\varphi$ first, because the upper limit on r depends on $\varphi$ (and $\theta$). So we use a "trick": we will use Leibniz's rule for the derivative of an integral with respect to a parameter, with a twist. Leibniz's Rule says:
$$\frac{\mathrm{d}}{\mathrm{d}q}\int_{a(q)}^{b(q)}g(r,q)\,\mathrm{d}r=g(b(q),q)\,b'(q)-g(a(q),q)\,a'(q)+\int_{a(q)}^{b(q)}\frac{\partial g}{\partial q}\,\mathrm{d}r.$$
Here, q is a parameter. This is moving differentiation into the inside of an integral. We rearrange terms to get:
$$\int_{a(q)}^{b(q)}\frac{\partial g}{\partial q}\,\mathrm{d}r=\frac{\mathrm{d}}{\mathrm{d}q}\int_{a(q)}^{b(q)}g(r,q)\,\mathrm{d}r-g(b(q),q)\,b'(q)+g(a(q),q)\,a'(q).$$
This is moving differentiation to the outside. Using it, with $q=\varphi$ and upper limit $S(\theta,\varphi)$, the inner $\varphi$-derivative can be taken outside the r-integral at the cost of boundary terms evaluated on the surface; the $\varphi$-integral of the exact-derivative term vanishes over the full period, and, substituting, we get the stated equation. Equipped with the result above, we now get the following result in a way similar to that for Theorem 8.
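Leibniz's rule with a variable limit, on which the "trick" above rests, can itself be checked numerically; the functions g and b below are arbitrary smooth choices for illustration:

```python
import math

# d/dq of integral_0^{b(q)} g(r,q) dr
#   = g(b(q), q) * b'(q) + integral_0^{b(q)} dg/dq dr
def g(r, q):
    return math.sin(q * r) + r * r * q

def dg_dq(r, q):
    return r * math.cos(q * r) + r * r

def b(q):
    return 2.0 + math.sin(q)

def bp(q):
    return math.cos(q)

def integral(q, n=20000):
    h = b(q) / n
    return sum(g((i + 0.5) * h, q) for i in range(n)) * h

def lhs(q, eps=1e-5):
    # left-hand side, by central finite differences in the parameter q
    return (integral(q + eps) - integral(q - eps)) / (2 * eps)

def rhs(q, n=20000):
    # right-hand side: boundary term plus the moved-inside derivative
    h = b(q) / n
    inner = sum(dg_dq((i + 0.5) * h, q) for i in range(n)) * h
    return g(b(q), q) * bp(q) + inner

# lhs(0.7) and rhs(0.7) agree to high accuracy
```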
Compare the "surface integral" terms above with the formula given by Zhou [10]: his n is our $\hat n$, and our $\mathrm{d}\vec S$ is $\mathrm{d}S\,\hat n$.

Uniqueness of the Inverse Square Laws
We can interpret our result as also saying that if $\rho=\nabla\cdot F$ and $J=\nabla\times F$ are regarded as sources for the field F, then, no matter how the sources "actually" work, they can be regarded as working through an inverse-square law, or influence function, or Green's function. The inverse-square law is, in this sense, ubiquitous. There are, actually, two inverse-square laws: one for the scalar source $\rho$, which acts radially, and one for the vector source J, which acts transversely. The uniqueness is with respect to the question whether the inverse-square "$\vec r$-dependent" radial influence function for a scalar source (Coulomb's Law of Electrostatics) is the only "$\vec r$-dependent" one with the property that the divergence of the generated vector field equals the generating scalar field and the curl of the generated field is zero. Similarly for the vector source and transverse action function (Biot-Savart Law of the Magnetic Effect of Stationary Currents). Thus,
$$V(\vec r)=\frac{\vec r}{4\pi r^3}$$
is a vector influence function for an arbitrary scalar field $\rho$ to produce a vector field F such that $\nabla\cdot F=\rho$ and $\nabla\times F=0$. We can immediately see that this is the case.
Indeed, we have, by our result,
$$F(\vec r_0)=\iiint\rho(\vec r)\,V(\vec r_0-\vec r)\,\mathrm{d}V;$$
also $V(-\vec r)=-V(\vec r)$.
Remark 13: It is curious that the defining relation above for the vector field generated by a scalar field through an $\vec r$-dependent influence function has not been noticed to be a (space) convolution integral by workers in Electromagnetism, a fact which would probably be seen more readily by workers in Linear System Theory. The influence function there is called the impulse response, and the convolution is in time.
The uniqueness result for the vector source with transverse action can be easily seen to be true.
Remark 14: It is indeed surprising that, whereas the expressions for $F_x$, $F_y$, $F_z$ separately involved 3 partial derivatives multiplied by x, y, z, the vector F, i.e., the 3 scalars put together, has an expression that involves 4 different "disjoint" combinations of the partial derivatives multiplied by x, y, z, and these happen to be the combinations occurring in $\nabla\cdot F$ and $\nabla\times F$. This led us, in turn, to our Formula (2).

Removing a Derivative Occurring inside an Integral
Our main result in Section 3 can be regarded as enabling us to "integrate out" completely differential expressions occurring inside an integral. We now state a number of results which only remove a derivative occurring inside an integral, without succeeding in integrating out completely. We first state three lemmas.
Here, g denotes the function associated with f.
For the first term, we use integration by parts with respect to r; for the second term, a direct computation in spherical-polar coordinates suffices.
Proof: To prove the first result, we do not start by moving the differentiation required by $\nabla\cdot$ into the inside of the integral, because that would require twice-differentiability of F. Instead, we "undo" the differentiation inside the integral by using the above three results. Thus, we start with the i-component of the expression within the parentheses; on applying the three results above, it becomes an expression free of the derivatives of F, and a routine, if tedious, computation in spherical-polar coordinates then reduces it to the i-component of the left-hand side. ∎
Similar-tedious-calculations show that the other results are true.
We next prove the orthogonality of the irrotational part of any field with the solenoidal part of any field. To do this, we first prove the following two results, for a scalar field f and a vector field A which vanish suitably at infinity:
$$\iiint_{\mathbb{R}^3}(\nabla f)\cdot(\nabla\times A)\,\mathrm{d}V=0,$$
together with the corresponding statement for the irrotational and solenoidal parts of two different fields. We sketch proofs of these results, and one more, under the assumption that f and A are differentiable; the volume integrals reduce to vanishing surface integrals.
Corollary 2. 1) If a vector field is orthogonal to all solenoidal fields, it must be irrotational; 2) If a vector field is orthogonal to all irrotational fields, it must be solenoidal.
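The orthogonality of a gradient to a curl can be checked numerically. The fields below are arbitrary Gaussian-type choices, with the scalar field shifted off-center so that the integrand is not zero pointwise:

```python
import math

def grad_f(x, y, z):
    # gradient of f = exp(-((x-0.3)^2 + y^2 + z^2))
    g = math.exp(-((x - 0.3) ** 2 + y * y + z * z))
    return (-2.0 * (x - 0.3) * g, -2.0 * y * g, -2.0 * z * g)

def curl_A(x, y, z):
    # For A = (y h, z h, x h) with h = exp(-(x^2+y^2+z^2)), computed by hand:
    h = math.exp(-(x * x + y * y + z * z))
    return (h * (-2.0 * x * y - 1.0 + 2.0 * z * z),
            h * (-2.0 * y * z - 1.0 + 2.0 * x * x),
            h * (-2.0 * x * z - 1.0 + 2.0 * y * y))

def inner_products(n=48, L=4.5):
    """Midpoint quadrature over [-L, L]^3 of (grad f).(curl A), plus the L1
    norm of the integrand, to show it is not zero pointwise."""
    h = 2.0 * L / n
    dot = l1 = 0.0
    for i in range(n):
        x = -L + (i + 0.5) * h
        for j in range(n):
            y = -L + (j + 0.5) * h
            for k in range(n):
                z = -L + (k + 0.5) * h
                gx, gy, gz = grad_f(x, y, z)
                cx, cy, cz = curl_A(x, y, z)
                d = gx * cx + gy * cy + gz * cz
                dot += d
                l1 += abs(d)
    return dot * h ** 3, l1 * h ** 3

# the inner product is near 0; the L1 norm is of order 1
```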
Gui and Dou [11] mention the orthogonality only in passing (Proposition 4, p. 288) without citing any references.

Moving Differentiation into the Inside of an Integral: A New Proof of Helmholtz's Theorem
We now illustrate how the operation of differentiation with respect to a rectangular coordinate, applied to an integral, is equivalent to an integral in spherical-polar coordinates involving an integrand which has a derivative. Commonly, this is known as interchanging the order of integration and differentiation, or moving a differentiation into the inside of an integral. In textbooks on Electromagnetism this procedure is used to move the "del" operators into the inside of an integral for later manipulations. We will use this procedure to prove Helmholtz's Theorem. We will illustrate this first with an existence theorem for a first-order PDE system. The existence theorem appears to be new.

An Existence Theorem for a PDE System
This is a "converse" of Theorem 4, which amounts to a solution of the simplest 3-variable partial differential equation problem: find a function with prescribed first-order partial derivatives. The classical Poisson-equation result says that a function is determined by (a combination of) its second-order derivatives, whereas our result says that it is determined by its first-order derivatives.
In both cases, the solution can be obtained by a "simple" volume integration.

Helmholtz Theorem
We first state and prove a theorem, which gives one aspect of Helmholtz's Theorem, requiring weaker assumptions. This aspect of Helmholtz's Theorem is an existence theorem which shows the existence of a vector field having a prescribed divergence and curl, subject to the condition that the prescribed curl has zero divergence. The other aspect is a decomposition theorem which states that any continuously differentiable vector field can be decomposed into two "components", one of which is the gradient of a scalar field and the other is the curl of a vector field, and that these "generating" fields can be obtained from the original vector field. Zhou [10] and Gui and Dou [11] have a good discussion of various proofs of the Helmholtz Theorem given in many references.
Theorem 19 (existence of field with specified divergence and curl). If f is a given continuously differentiable scalar function and A is a given continuously differentiable vector function such that $\nabla\cdot A=0$, then the function W defined by:
$$W(\vec r_0)=\frac{1}{4\pi}\iiint\frac{f(\vec r)\,(\vec r_0-\vec r)+A(\vec r)\times(\vec r_0-\vec r)}{|\vec r_0-\vec r|^3}\,\mathrm{d}V$$
satisfies $\nabla\cdot W=f$ and $\nabla\times W=A$. Thus, there exists a vector field with a specified divergence value and a specified curl value, provided the curl value has zero divergence.
Proof: For the components of W, we move the differentiations required by $\nabla\cdot$ and $\nabla\times$ into the inside of the integral, as illustrated above; the computation then parallels that of Theorem 9. ∎
We now state:
Theorem 20 (Helmholtz theorem, existence of decomposition). If F is a given continuously differentiable vector function, there exists a scalar function V and a vector function A such that:
$$F=-\nabla V+\nabla\times A,$$
where V and A are given by:
$$V(\vec r)=\frac{1}{4\pi}\iiint\frac{\nabla'\cdot F(\vec r\,')}{|\vec r-\vec r\,'|}\,\mathrm{d}V',\qquad A(\vec r)=\frac{1}{4\pi}\iiint\frac{\nabla'\times F(\vec r\,')}{|\vec r-\vec r\,'|}\,\mathrm{d}V'.$$
Here $-\nabla V$ has zero curl, and $\nabla\times A$ has zero divergence. Thus, any arbitrary vector field can be decomposed into a zero-curl part and a zero-divergence part.
Proof: This follows from the Results 1 and 2 of the previous section and our Formula (2). ∎
The function V is usually called the scalar potential function and A the vector potential function generating F. The above two expressions are the ones commonly given. But using our Theorems 14 and 15, the same functions are also given by the following expressions, which do not involve any $\nabla$, that is to say, differentiation operation:
$$V(\vec r)=\frac{1}{4\pi}\iiint\frac{F(\vec r\,')\cdot(\vec r\,'-\vec r)}{|\vec r\,'-\vec r|^3}\,\mathrm{d}V',\qquad A(\vec r)=\frac{1}{4\pi}\iiint\frac{F(\vec r\,')\times(\vec r-\vec r\,')}{|\vec r-\vec r\,'|^3}\,\mathrm{d}V'.$$
Interestingly, there is a close similarity between these expressions and the following one:
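The agreement between the usual expression for the scalar potential and a derivative-free one can be checked numerically at the origin, where they read $V(0)=\frac{1}{4\pi}\iiint(\nabla\cdot F)\,r^{-1}\,\mathrm dV$ and $V(0)=\frac{1}{4\pi}\iiint F\cdot\vec r\,r^{-3}\,\mathrm dV$. The test field $F=(x e^{-r^2},0,0)$ is an arbitrary choice; for it, a short hand computation gives the common value $1/6$:

```python
import math

def F(x, y, z):
    g = math.exp(-(x * x + y * y + z * z))
    return (x * g, 0.0, 0.0)

def div_F(x, y, z):
    g = math.exp(-(x * x + y * y + z * z))
    return g * (1.0 - 2.0 * x * x)

def potentials_at_origin(nr=150, nth=36, nph=36, R=5.0):
    """V(0) two ways: (1/4pi) int (div F)/r dV and (1/4pi) int F.r_vec/r^3 dV."""
    dr, dth, dph = R / nr, math.pi / nth, 2 * math.pi / nph
    v1 = v2 = 0.0
    for i in range(nr):
        r = (i + 0.5) * dr
        for j in range(nth):
            th = (j + 0.5) * dth
            st, ct = math.sin(th), math.cos(th)
            for k in range(nph):
                ph = (k + 0.5) * dph
                x, y, z = r * st * math.cos(ph), r * st * math.sin(ph), r * ct
                v1 += div_F(x, y, z) * r * st           # (1/r) * r^2 sin(theta)
                fx, fy, fz = F(x, y, z)
                v2 += (fx * x + fy * y + fz * z) / r * st  # (1/r^3) * r^2 sin
    c = dr * dth * dph / (4 * math.pi)
    return v1 * c, v2 * c

# both values are close to 1/6
```

The derivative-free form requires no differentiation of F at all, in keeping with the once-differentiability theme of this paper.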

$$\vec a=\frac{(\vec a\cdot\vec b)\,\vec b-(\vec a\times\vec b)\times\vec b}{|\vec b|^2}.$$
Such a "decomposition" or "representation" of any arbitrary vector in terms of another arbitrary vector follows immediately from the "vector algebra" identity:
$$(\vec a\times\vec b)\times\vec b=(\vec a\cdot\vec b)\,\vec b-|\vec b|^2\,\vec a,$$
and is used in Clifford geometric algebra. Perhaps there is some deeper connection here!
Corollary 3 (Poisson's theorem of electrostatics). $\nabla^2 V=-\rho$, where V denotes the potential function corresponding to the source density function $\rho$. The proofs given in most textbooks use the Divergence Theorem and "suffer" from the defect that they assume that the potential function is twice-differentiable. Gauss's proof was probably the first to show that the potential function is twice-differentiable, though under the assumption that the density function $\rho$ is once-differentiable. Our proof also makes this assumption. Interestingly, Kellogg [15] proves the result without making this assumption, but assumes what is known as a Hölder condition.
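The identity $|\vec b|^2\,\vec a=(\vec a\cdot\vec b)\,\vec b-(\vec a\times\vec b)\times\vec b$ can be checked directly for arbitrary numerical vectors:

```python
# Check of |b|^2 a = (a.b) b - (a x b) x b for arbitrary 3-vectors.
def dot(u, v):
    return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def decompose(a, b):
    """Return ((a.b) b - (a x b) x b) / |b|^2, which reproduces a."""
    ab = dot(a, b)
    axb_xb = cross(cross(a, b), b)
    b2 = dot(b, b)
    return tuple((ab * bi - ci) / b2 for bi, ci in zip(b, axb_xb))

a, b = (1.2, -0.7, 3.1), (0.4, 2.0, -1.5)
# decompose(a, b) reproduces a to machine precision
```

The two terms are exactly the "parallel to b" and "perpendicular to b" parts of a, mirroring the irrotational and solenoidal parts of the decomposition.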

A New Proof of the Divergence Theorem
In this section, we prove the Divergence Theorem, employing spherical-polar coordinates, which will further illustrate the "power" of these coordinates used along with rectangular coordinates. We need to prove it because we have a new definition of a surface.
Theorem 21 (divergence theorem). If a region V is enclosed by a surface S (in the sense defined above) and $F:\mathbb{R}^3\to\mathbb{R}^3$ is continuously differentiable, then:
$$\iiint_V\nabla\cdot F\,\mathrm{d}V=\iint_S F\cdot\mathrm{d}\vec S.$$
Proof: As in the classical proofs of the Theorem, we prove the equality of the corresponding three terms on the two sides of the desired equation. In the rectangular-coordinates proof, it is usually required that the surface is such that it is "raised" on its projections on each of the three coordinate planes. We do not require this, because we have used a different definition of a surface.
We first consider the integral of the part $\partial F_x/\partial x$ of the divergence and the corresponding part on the right-hand side. We use the expression for the surface element $\mathrm{d}\vec S$ as calculated earlier. We denote the function associated with $F_x$ by $G_x$. We will show that:
$$\iiint_V\frac{\partial F_x}{\partial x}\,\mathrm{d}V=\iint_S F_x\,(\mathrm{d}\vec S)_x,$$
and similarly for the y- and z-parts. We can then write the solution of the first two Maxwell equations using our recovery formula and Jefimenko's B, the fields being related to the sources through the wave equation.

Some Remarks
First, a remark on the mutuality of forces (Newton's Third Law). It is usually pointed out that the Biot-Savart (actually, Grassmann) Law for the force exerted by one "current element" on another does not satisfy Newton's Third Law of Motion.
Jefimenko's formula for E shows that moving charges and changing currents also act as "sources" of the field. Finally, some closing remarks on "Fields versus Action-at-a-distance", the nature of "sources", and "causality" are in order. The "causes" of the fields E and B are ρ and J, subject to the Continuity Equation, in the sense that they determine the fields uniquely, which really means that we can calculate them. But they cannot be chosen at will, because they are not only subject to practical limitations, but also limited by the "fact" that they change under the action of the fields that they themselves produce collectively. Thus, we assume that J and E are related depending on the medium, whether a conductor or a dielectric. The fields act locally, as evidenced by the Lorentz formula, but they are not caused or produced locally. Thus, even in the electrostatic case, although the equation

Conclusion
In the present contribution, a number of new formulas for 3-dimensional vector fields have been derived, and new proofs have been given for some classical results, such as the Helmholtz Theorem and the Divergence Theorem. A new definition of a surface is given and used to derive a new result. Orthogonality of the irrotational and solenoidal parts of the decomposition is proved. As an application, a new approach to Maxwell's equations is suggested.

Conflicts of Interest
The author declares no conflicts of interest regarding the publication of this paper.