Gradient density estimation in arbitrary finite dimensions using the method of stationary phase

We prove that the density function of the gradient of a sufficiently smooth function $S : \Omega \subset \mathbb{R}^d \rightarrow \mathbb{R}$, obtained via a random variable transformation of a uniformly distributed random variable, is increasingly closely approximated by the normalized power spectrum of $\phi=\exp\left(\frac{iS}{\tau}\right)$ as the free parameter $\tau \rightarrow 0$. The result is shown using the stationary phase approximation and standard integration techniques and requires proper ordering of limits. We highlight a relationship with the well-known characteristic function approach to density estimation, and detail why our result is distinct from this approach.

1. Introduction. Density estimation methods provide a faithful estimate of a non-observable probability density function based on a given collection of observed data [14,15,18,3]. The observed data are treated as random samples obtained from a large population which is assumed to be distributed according to the underlying density function. The aim of our current work is to show that the joint density function of the gradient of a sufficiently smooth function S (the density function of ∇S) can be obtained from the normalized power spectrum of φ = exp(iS/τ) as the free parameter τ tends to zero. The proof of this relationship relies on the higher-order stationary phase approximation [11,12,13,22,23]. The joint density function of the gradient vector field is usually obtained via a random variable transformation of a uniformly distributed random variable X over the compact domain Ω ⊂ R^d, using ∇S as the transformation function. In other words, if we define a random variable Y = ∇S(X), where X has a uniform distribution on Ω (X ∼ UNI(Ω)), then the density function of Y represents the joint density function of the gradient of S.
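This transformation is easy to exercise numerically. The following minimal sketch, with an assumed test domain Ω = [−1, 1]² and an assumed test function S(x) = sin(x₁) + x₂²/2, draws uniform samples, pushes them through ∇S, and bins the results; the resulting 2-D histogram is precisely a HOG-style estimate of the density of Y = ∇S(X):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical smooth test function on Omega = [-1, 1]^2 and its gradient.
def S(x):
    return np.sin(x[..., 0]) + 0.5 * x[..., 1] ** 2

def grad_S(x):
    return np.stack([np.cos(x[..., 0]), x[..., 1]], axis=-1)

# X ~ UNI(Omega): draw uniform samples and push them through grad S.
X = rng.uniform(-1.0, 1.0, size=(200_000, 2))
Y = grad_S(X)  # samples of Y = grad S(X)

# A 2-D histogram of Y approximates the joint gradient density (the HOG).
H, xedges, yedges = np.histogram2d(Y[:, 0], Y[:, 1], bins=32, density=True)
```

With density=True the histogram integrates to one, and it converges to the density of Y as the sample count grows.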
In computer vision parlance (a popular application area for density estimation), these gradient density functions are popularly known as the histogram of oriented gradients (HOG) and are primarily employed for human and object detection [6,24]. The approaches developed in [19,1] demonstrate an application of HOG, in combination with support vector machines [20], for detecting pedestrians from infrared images. In a recent article [10], an adaptation of the HOG descriptor called the Gradient Field HOG (GF-HOG) is used for sketch-based image retrieval. In these systems, the image intensity is treated as a function S(X) over a 2D domain, and the distribution of intensity gradients or edge directions is used as the feature descriptor to characterize the object appearance or shape within an image.
In our earlier effort [8], we primarily focused on exploiting the stationary phase approximation to obtain gradient densities of Euclidean distance functions (R) in two dimensions. As the gradient norm of R is identically equal to 1 everywhere, the density of the gradient is one-dimensional and defined over the space of orientations.
The key point to be noted here is that the dimensionality of the gradient density (one) is one less than the dimensionality of the space (two) and the constancy of the gradient magnitude of R causes its Hessian to vanish almost everywhere. In Lemma 2.3 below, we see that the Hessian is deeply connected to the density function of the gradient. The degeneracy of the Hessian precluded us from directly employing the stationary phase method and hence techniques like symmetry-breaking had to be used to circumvent the vanishing Hessian problem. The reader may refer to [8] for a more detailed explanation. In contrast to our previous work, we regard our current effort as a generalization of the gradient density estimation result, now established for arbitrary smooth functions in arbitrary finite dimensions.

1.1. Main Contribution.
We introduce a new approach for computing the density of Y, where we express the given function S as the phase of a wave function φ, specifically φ(x) = exp(iS(x)/τ) for small values of τ, and then consider the normalized power spectrum (the magnitude squared of the Fourier transform) of φ [4]. We show that the computation of the joint density function of Y = ∇S may be approximated by the power spectrum of φ, with the approximation becoming increasingly tight point-wise as τ → 0. Using the stationary phase approximation, a well-known technique in asymptotic analysis [22], we show that in the limiting case as τ → 0, the power spectrum of φ converges to the density of Y and hence can serve as its density estimator at small, non-zero values of τ. In other words, if P(u) denotes the density of Y and P_τ(u) corresponds to the power spectrum of φ at a given value of τ, Theorem 3.2 establishes the relation

lim_{τ→0} ∫_{N_η(u_0)} P_τ(u) du = ∫_{N_η(u_0)} P(u) du,

where N_η(u_0) is a small neighborhood around u_0. We call our approach the wave function method for computing the probability density function and henceforth will refer to it as such. We would like to emphasize that our work is fundamentally different from estimating the gradient of a density function [7] and should not be semantically confused with it.
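The method is straightforward to try numerically. The sketch below is a 1-D illustration under an assumed test function S(x) = x²/2 on Ω = [0, 1], so that Y = S′(X) = X is uniform on [0, 1]; the FFT of the wave function exp(iS/τ) is computed on a grid and its normalized power spectrum is read off as a function of the gradient value u = 2πτf:

```python
import numpy as np

# 1-D sketch of the wave function method (assumed test case):
# S(x) = x^2 / 2 on Omega = [0, 1], so Y = S'(X) = X is uniform on [0, 1].
N, tau = 2**14, 1e-3
x = (np.arange(N) + 0.5) / N          # grid on Omega = [0, 1]
phi = np.exp(1j * (x**2 / 2) / tau)   # wave function exp(iS/tau)

# A DFT frequency of f cycles per unit corresponds to the gradient value
# u = 2*pi*tau*f, since exp(-i*u*x/tau) = exp(-2*pi*i*(u/(2*pi*tau))*x).
Phi = np.fft.fft(phi) / N
f = np.fft.fftfreq(N, d=1.0 / N)      # integer frequencies (cycles per unit)
u = 2 * np.pi * tau * f
power = np.abs(Phi) ** 2
power /= power.sum()                  # normalized power spectrum P_tau

# As tau -> 0, the mass should concentrate on u in [0, 1], the range of S'.
mass_inside = power[(u >= -0.05) & (u <= 1.05)].sum()
```

Note that the gradient is never computed: the stationary points of the phase route the mass into the correct frequency bins automatically.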
1.2. Significance. As mentioned before, the main objective of our current work is to generalize our effort in [8] and demonstrate that the wave function method for obtaining densities can be extended to arbitrary functions in arbitrary finite dimensions. However, one might broach a legitimate question, namely: "What is the primary advantage of this approach over other simpler, effective and traditional techniques like histograms, which can compute the HOG by, say, mildly smoothing the image, computing the gradient via (for example) finite differences, and then binning the resulting gradients?". The benefits are threefold:
• One of the foremost advantages of our wave function approach is that it recovers the joint gradient density function of S without explicitly computing its gradient. Since the stationary points capture gradient information and map it into the corresponding frequency bins, we can directly work with S without the need to compute its derivatives.
• The significance of our work is highlighted when we deal with the more practical finite sample-set setting, wherein the gradient density is estimated from a finite, discrete set of samples of S rather than assuming the availability of the complete description of S on Ω. Given N samples of S on Ω, it is customary to ask for the rate of convergence of a proposed density estimation method as N → ∞. In [9] we prove that in one dimension, our wave function method converges point-wise at the rate of O(1/N) to the true density when τ ∝ 1/N. For histograms and kernel density estimators [14,15], the convergence rates are established for the integrated mean squared error (IMSE), defined as the expected value (with respect to samples of size N) of the square of the ℓ2 error between the true and the computed probability densities, and are shown to be O(N^(−2/3)) [17,5] and O(N^(−4/5)) [21] respectively. Having laid the foundation in this work, we plan to invest our future efforts in pursuit of similar convergence estimates in arbitrary finite dimensions.
• Furthermore, obtaining the gradient density using our framework in the finite N sample setting is simple, efficient, and computable in O(N log N ) time as elucidated in the last paragraph of Section 4.

2. Existence of Joint Densities of Smooth Function Gradients.
We begin with a compact measurable subset Ω of R^d on which we consider a smooth function S : Ω → R. We assume that the boundary of Ω is smooth and that the function S is well-behaved on the boundary, as elucidated in Appendix B. Let H_x denote the Hessian of S at a location x ∈ Ω and let det(H_x) denote its determinant. The signature of the Hessian of S at x, defined as the difference between the number of positive and negative eigenvalues of H_x, is represented by σ_x. In order to exactly determine the set of locations where the joint density of the gradient of S exists, consider the following three sets:

A_u ≡ {x ∈ Ω : ∇S(x) = u},    (2.1)
B ≡ {x ∈ Ω : det(H_x) = 0},    (2.2)
C ≡ ∇S(B ∪ ∂Ω).    (2.3)

We employ a number of useful lemmas, stated here and proved in Appendix A.

Lemma 2.1. For every u ∉ C, the set A_u is finite, and we write N(u) ≡ |A_u|.
As we see from Lemma 2.1 above, for a given u ∉ C, there is only a finite collection of x ∈ Ω that maps to u under the function ∇S. The inverse map ∇S^(−1)(u), which identifies the set of x ∈ Ω that maps to u under ∇S, is ill-defined as a function, as it is a one-to-many mapping. The objective of the following lemma (Lemma 2.2) is to define appropriate neighborhoods such that the inverse function ∇S^(−1), required in the proof of our main Theorem 3.2, is well-defined when restricted to those neighborhoods.
Lemma 2.2. [Neighborhood Lemma] For every u_0 ∉ C, there exists a closed neighborhood N_η(u_0) around u_0 such that N_η(u_0) ∩ C is empty. Furthermore, if |A_{u_0}| > 0, N_η(u_0) can be chosen such that we can find a closed neighborhood N_η(x) around each x ∈ A_{u_0} satisfying the following conditions:
1. ∇S restricted to N_η(x) is injective;
2. ∇S(N_η(x)) ⊇ N_η(u_0);
3. det(H_y) ≠ 0 for every y ∈ N_η(x);
4. the Hessian signature σ_y is constant on N_η(x).

The inverse function ∇S_x^(−1) : N_η(u_0) → N_η(x), which writes x as a function of u, is then well-defined on each such neighborhood.
Lemma 2.3. [Density Lemma] Given X ∼ UNI(Ω), the probability density of Y = ∇S(X) on R^d − C is given by

P(u) = (1/µ(Ω)) Σ_{k=1}^{N(u)} 1/|det(H_{x_k})|,    (2.4)

where x_k ∈ A_u, ∀k ∈ {1, 2, . . . , N(u)}, and µ is the Lebesgue measure.
From Lemma 2.3, it is clear that the existence of the density function P at a location u ∈ R^d necessitates a non-vanishing Hessian determinant (det(H_x) ≠ 0) ∀x ∈ A_u. Since we are interested in the case where the density exists almost everywhere on R^d, we impose the constraint that the set B in (2.2), comprising all points where the Hessian determinant vanishes, has Lebesgue measure zero. It follows that µ(C) = 0. Furthermore, the requirement on the smoothness of S (S ∈ C^∞(Ω)) can be relaxed to functions S in C^(⌈d/2⌉+1)(Ω), where d is the dimensionality of Ω, as we will see in Section 3.2.2.
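The closed-form density in Lemma 2.3 can be checked by Monte Carlo on an assumed quadratic test case, where the Hessian is constant and N(u) = 1 on the image of ∇S:

```python
import numpy as np

rng = np.random.default_rng(1)

# Quadratic test case (assumed): S(x) = x^T A x / 2 on Omega = [0, 1]^2,
# so grad S(x) = A x, the Hessian is A everywhere, and N(u) = 1 on A*Omega.
A = np.diag([2.0, 3.0])
X = rng.uniform(0.0, 1.0, size=(500_000, 2))
Y = X @ A.T                            # samples of Y = grad S(X) = A X

# Lemma 2.3 predicts P(u) = 1 / (mu(Omega) |det A|) = 1/6 on the image.
predicted = 1.0 / (1.0 * abs(np.linalg.det(A)))

# Empirical density near an interior point u0 = (1.0, 1.5) of the image.
u0 = np.array([1.0, 1.5])
r = 0.1
inside = np.all(np.abs(Y - u0) < r, axis=1)
empirical = inside.mean() / (2 * r) ** 2
```

The empirical box estimate agrees with the 1/(µ(Ω)|det H|) prediction up to Monte Carlo error.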

3. Equivalence of the Densities of Gradients and the Power Spectrum.
Define the function F_τ : R^d → C by

F_τ(u) ≡ (1/√((2πτ)^d µ(Ω))) ∫_Ω exp(i(S(x) − u·x)/τ) dx    (3.1)

for τ > 0. F_τ is very similar to the Fourier transform of the function exp(iS(x)/τ). The normalizing factor in F_τ comes from the following lemma (Lemma 3.1), whose proof is given in Appendix A.
Lemma 3.1. [Integral Lemma] F_τ ∈ L²(R^d) and ‖F_τ‖₂ = 1.

The power spectrum is defined as [4]

P_τ(u) ≡ F_τ(u) F_τ*(u) = |F_τ(u)|².    (3.2)

Note that P_τ ≥ 0. From Lemma 3.1, we see that ∫ P_τ(u) du = 1. Our fundamental contribution lies in interpreting P_τ(u) as a density function and showing its equivalence to the density function P(u) defined in (2.4). Formally stated:

Theorem 3.2. For u_0 ∉ C,

lim_{α→0} lim_{τ→0} (1/µ(N_α(u_0))) ∫_{N_α(u_0)} P_τ(u) du = P(u_0),

where N_α(u_0) is a ball around u_0 of radius α. Before embarking on the proof, we would like to emphasize that the ordering of the limits and the integral as given in the theorem statement is crucial and cannot be arbitrarily interchanged. To press this point home, we show below that after solving for P_τ, lim_{τ→0} P_τ does not exist; hence the order of the integral followed by the limit τ → 0 cannot be interchanged. Furthermore, when we swap the limits of α and τ, the resulting expression also fails to have a limit. Hence, the theorem statement is valid only for the specified sequence of limits and the integral.
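The failure of the pointwise limit is easy to reproduce numerically. In an assumed 1-D test case, S(x) = sin(x) on Ω = [0, 2π], the value u₀ = 0 has two stationary points (x = π/2 and x = 3π/2), so P_τ(u₀) carries an oscillating cross term and keeps fluctuating as τ shrinks:

```python
import numpy as np

# Pointwise non-convergence sketch (assumed 1-D test case): S(x) = sin(x)
# on Omega = [0, 2*pi]; for u0 = 0 there are two stationary points, so
# P_tau(u0) ~ (1/pi)(1 + cos(2/tau + const)) oscillates as tau -> 0.
L = 2 * np.pi
M = 400_000
dx = L / M
x = (np.arange(M) + 0.5) * dx

def P_tau(u, tau):
    # Normalized power spectrum |F_tau(u)|^2 with mu(Omega) = 2*pi, d = 1.
    F = np.sum(np.exp(1j * (np.sin(x) - u * x) / tau)) * dx
    return np.abs(F) ** 2 / (2 * np.pi * tau * L)

taus = 1.0 / np.arange(200, 220)
vals = np.array([P_tau(0.0, t) for t in taus])
spread = vals.max() - vals.min()   # stays bounded away from zero: no limit
```

The large spread across nearby values of τ shows that lim_{τ→0} P_τ(u₀) does not exist, while the neighborhood integral in the theorem statement remains stable.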

3.1. Brief exposition of the result.
To understand the result in simpler terms, let us reconsider the definition of the scaled Fourier transform given in (3.1). The first exponential, exp(iS(x)/τ), is a varying complex "sinusoid", whereas the second exponential, exp(−iu·x/τ), is a fixed complex sinusoid at frequency u/τ. When we multiply these two complex exponentials, at low values of τ, the two sinusoids are usually not "in sync" and tend to cancel each other out. However, around the locations where ∇S(x) = u, the two sinusoids are in perfect sync (as the combined exponent is stationary), with the approximate duration of this resonance depending on det(H_x). The value of the integral in (3.1) can be increasingly closely approximated via the stationary phase approximation [22] as

F_τ(u) ≈ (1/√(µ(Ω))) Σ_{k=1}^{N(u)} exp(i(S(x_k) − u·x_k)/τ) exp(iπσ_{x_k}/4) |det(H_{x_k})|^{−1/2}.

The approximation is increasingly tight as τ → 0. The power spectrum P_τ then gives us the required result, up to cross phase factors obtained as a byproduct of two or more remote locations x_k and x_l indexing into the same frequency bin u. The cross phase factors, when evaluated, behave like cos(1/τ) terms whose limit does not exist as τ → 0. However, integrating the power spectrum over a small neighborhood N_α(u) around u removes these cross phase factors as τ tends to zero, and we obtain the desired result.
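The stationary phase estimate underlying this picture can be sanity-checked on the simplest assumed example, a single quadratic phase in one dimension:

```python
import numpy as np

# Stationary phase check in 1-D (assumed example): the integral of
# exp(i x^2 / (2 tau)) over [-1, 2] has a single stationary point at x = 0
# with Psi'' = 1, so it approaches sqrt(2*pi*tau) * exp(i*pi/4).
tau = 1e-4
n = 2_000_000
dx = 3.0 / n
x = -1.0 + (np.arange(n) + 0.5) * dx

numeric = np.sum(np.exp(1j * x**2 / (2 * tau))) * dx
predicted = np.sqrt(2 * np.pi * tau) * np.exp(1j * np.pi / 4)
rel_err = abs(numeric - predicted) / abs(predicted)
```

The residual relative error is dominated by the O(τ) boundary terms at x = −1 and x = 2, which are negligible next to the O(√τ) stationary contribution.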

3.2. Formal Proof of Theorem 3.2. We wish to compute the integral

F_τ(u) = (1/√((2πτ)^d µ(Ω))) ∫_Ω exp(i(S(x) − u·x)/τ) dx

at small values of τ and exhibit the connection between the power spectrum P_τ(u) and the density function P(u). To this end, define Ψ(x; u) ≡ S(x) − u·x. The proof follows by considering two cases: the first, in which there are no stationary points and therefore the density should go to zero; and the second, in which stationary points exist and the contribution from the oscillatory integral is shown to increasingly closely approximate the density function of the gradient as τ → 0.

case (i):
We first consider the case where N(u) = 0, i.e., u ∉ ∇S(Ω). In other words, there are no stationary points [22] for this value of u. The proof that this case yields the anticipated contribution of zero follows from a straightforward technique commonly used in stationary phase expansions. We assume that the function S is sufficiently well-behaved on the boundary that the total contribution due to the stationary points of the second kind [22] approaches zero as τ → 0. (Concentrating here on the crux of our work, we reserve the discussion concerning the behavior of S on the boundary and the relationship to stationary points of the second kind for Appendix B.) Under mild conditions (outlined in Appendix B), the contributions from the stationary points of the third kind can also be ignored, as they approach zero as τ tends to zero [22]. Higher order terms follow suit.
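The O(τ) behavior in the absence of stationary points is visible even for the simplest assumed linear phase, where the integral can be evaluated in closed form and equals 2τ|sin(1/(2τ))|, so its ratio to τ stays bounded by 2:

```python
import numpy as np

# Case (i) sketch (assumed example): Psi(x) = x on [0, 1] has no interior
# stationary points, so |integral of exp(i Psi / tau) dx| = O(tau); the
# exact value is 2*tau*|sin(1/(2*tau))|.
def osc_integral(tau, n=200_000):
    dx = 1.0 / n
    x = (np.arange(n) + 0.5) * dx
    return abs(np.sum(np.exp(1j * x / tau)) * dx)

ratios = [osc_integral(t) / t for t in (1e-2, 1e-3, 1e-4)]
```

The ratios remain bounded as τ shrinks, confirming the linear decay of the oscillatory integral when the phase has no critical point.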
Proof. To improve readability, we prove Lemma 3.3 first in the one-dimensional setting and separately offer the proof for multiple dimensions. In one dimension, the inverse function of Ψ is guaranteed to exist due to the monotonicity of Ψ (there being no stationary points), and integrating by parts shows that the integral is well-defined and acquires a factor of τ. For the multidimensional case, choose m > d/2 (with this choice explained below) and, for j ∈ {1, 2, . . . , m}, recursively define the function g_j(x) and the vector field v_{j+1}(x). Writing the integrand as a divergence (where ∇· is the divergence operator) and applying the divergence theorem m times, the Fourier transform in (3.6) can be rewritten in a form similar to (3.8).
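In one dimension, the mechanism behind the proof is the standard integration by parts identity, stated here for a phase Ψ with Ψ′ ≠ 0 on [a, b]; each application trades one integration for one factor of τ, so m applications yield O(τ^m):

```latex
\int_a^b e^{i\Psi(x)/\tau}\,dx
  = \left[\frac{\tau}{i}\,\frac{e^{i\Psi(x)/\tau}}{\Psi'(x)}\right]_a^b
  - \frac{\tau}{i}\int_a^b e^{i\Psi(x)/\tau}\,
      \frac{d}{dx}\!\left(\frac{1}{\Psi'(x)}\right) dx .
```

The divergence theorem plays the role of this identity in higher dimensions, which is why m > d/2 applications suffice to beat the τ^(d/2) normalization.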
We would like to add a note on the differentiability of S, which we briefly mentioned after Lemma 2.3. The divergence theorem is applied m > d/2 times to obtain sufficiently high order powers of τ in the numerator so as to exceed the τ^(d/2) term in the denominator of the first line of (3.15). This necessitates that S be at least ⌈d/2⌉+1 times continuously differentiable. For a detailed exposition of the proof, we encourage the reader to refer to Chapter 9 in [22].
We then get P_τ(u_0) = O(τ). Since ∇S(Ω) is a compact set in R^d and u_0 ∉ ∇S(Ω), we can choose a neighborhood N_η(u_0) around u_0 such that for u ∈ N_η(u_0), no stationary points exist and hence P_τ(u) = O(τ) uniformly on N_η(u_0). Since the cardinality N(u) of the set A_u defined in (2.1) is zero for u ∈ N_η(u_0), the true density P(u) of the random variable transformation Y = ∇S(X) given in (2.4) also vanishes for u ∈ N_η(u_0), and the two agree in the limit.

case (ii):
For u_0 ∉ C, let N(u_0) > 0. In this case, the number of stationary points in the interior of Ω is non-zero and finite as a consequence of Lemma 2.1. We can then decompose Ω into the closed neighborhoods N_η(x_k) around each stationary point x_k, obtained from Lemma 2.2, together with the remainder set K ≡ Ω − ∪_k N_η(x_k). Firstly, note that the set K contains no stationary points by construction. Secondly, the boundaries of K can be classified into two categories: those that overlap with the sets N_η(x_k) and those that coincide with Γ = ∂Ω. Propitiously, the orientations of the overlapping boundaries between the set K and each N_η(x_k) are in opposite directions, as these sets lie on different sides when viewed from the boundary. Hence, we can exclude the contributions from the overlapping boundaries between K and N_η(x_k) while evaluating F_τ(u_0) in (3.17), as they cancel each other out.
To compute G, the contribution over K, we leverage case (i), which also includes the contribution from the boundary Γ, and find that it vanishes as τ → 0. To evaluate the remaining integrals over N_η(x_k), we take into account the contribution from the stationary point at x_k and obtain a stationary phase expansion with a remainder τ γ_1(u) for a continuous bounded function γ_1(u) [22]. The variable a in (3.20) takes the value 1 if x_k lies in the interior of Ω and equals 1/2 if x_k ∈ ∂Ω. Since u ∉ C, stationary points do not occur on the boundary and hence a = 1 in our case. Recall that σ_{x_k} is the signature of the Hessian at x_k. Collecting the main terms and the error terms from (3.19) and (3.21) respectively, and using the definition of the power spectrum P_τ in (3.2), we get (3.22), where ǫ_4(u_0, τ) includes both the squared magnitude of ǫ_3(u_0, τ) and the cross terms involving the first term in (3.22) and ǫ_3(u_0, τ). Notice that the main term in (3.22) can be bounded independently of τ. Furthermore, as ǫ_4(u_0, τ) can also be uniformly bounded by a function of u for small values of τ, we obtain (3.24). Observe that the term on the right side of the first line in (3.24) matches the anticipated expression for the density function P(u_0) given in (2.4). The cross phase factors in the second line of (3.24) arise due to multiple remote locations x_k and x_l indexing into u. The cross phase factors, when evaluated, can be shown to be proportional to cos(1/τ). Since lim_{τ→0} cos(1/τ) is not defined, lim_{τ→0} P_τ(u_0) does not exist. We briefly alluded to this problem immediately following the statement of Theorem 3.2 in Section 3. However, the following lemma, which invokes the inverse function ∇S_x^(−1) from Lemma 2.2 (where x is written as a function of u), provides a simple way to nullify the cross phase factors. Note that since each ∇S_x^(−1) is a bijection, N(u) does not vary over N_η(u_0). Pursuant to Lemma 2.2, the Hessian signatures σ_{x_k}(u) and σ_{x_l}(u) also remain constant over N_η(u_0).

Lemma 3.4. [Cross Factor Nullifier Lemma] The integral of the cross term in the second line of (3.24) over the closed region N_η(u_0) approaches zero as τ → 0, i.e., ∀k ≠ l the corresponding cross phase factor integrates to zero in the limit. The proof is given in Appendix A.

Combining (3.26) and (3.27) yields (3.28), which demonstrates the equivalence of the cumulative distributions corresponding to the densities P_τ(u) and P(u) when integrated over any sufficiently small neighborhood N_η(u_0) of radius η. To recover the density P(u), we let α < η and take the limit with respect to α to complete the proof.

Taking a mild digression from the main theme of this paper, in the next section (Section 4) we build an informal bridge between the commonly used characteristic function formulation for computing densities and our wave function method. The motivation behind this section is merely to provide an intuitive reason for our main theorem (Theorem 3.2), where we directly manipulate the power spectrum of φ(x) = exp(iS(x)/τ) into the characteristic function formulation stated in (4.2), circumventing the need for the closed-form expression of the density function P(u) given in (2.4). We request the reader to bear in mind the following cautionary note. What we show below cannot be treated as a formal proof of Theorem 3.2. Our attempt here merely provides a mathematically intuitive justification for establishing the equivalence between the power spectrum and the characteristic function formulations, and thereby with the density function P(u). On the basis of the reasons described therein, we strongly believe that the mechanism of stationary phase is essential to formally prove our main theorem (Theorem 3.2). It is best to treat the wave function and the characteristic function methods as two different approaches for estimating probability density functions, and not as reformulations of each other. To press this point home, we also comment on the computational complexity of the wave function and the characteristic function methods at the end of the next section.
4. Relation between the characteristic function and the power spectrum formulations of the gradient density. The characteristic function ψ_Y(ω) for the random variable Y = ∇S(X) is defined as the expected value of exp(iω·Y), namely

ψ_Y(ω) = (1/µ(Ω)) ∫_Ω exp(iω·∇S(x)) dx.    (4.1)

Here 1/µ(Ω) denotes the density of the uniformly distributed random variable X on Ω. The inverse Fourier transform of a characteristic function also serves as the density function of the random variable under consideration [2]. In other words, the density function P(u) of the random variable Y can be obtained via

P(u) = (1/(2π)^d) ∫_{R^d} ψ_Y(ω) exp(−iω·u) dω.    (4.2)

Having set the stage, we can now proceed to highlight the close relationship between the characteristic function formulation of the density and our formulation arising from the power spectrum. For simplicity, we choose to consider a region Ω that is the product of closed intervals. Based on the expression for the scaled Fourier transform F_τ(u) in (3.1), the power spectrum P_τ(u) is given by a double integral over Ω × Ω. Define a change of variables in which one integration variable is retained as ν and the other is written as ν + τω; the integral limits for ω and ν follow accordingly, where ω_i is the i-th component of ω. Note that the Jacobian of this transformation is τ^d. Now we may write the integral P_τ(u) in terms of these new variables. The mean value theorem applied to S(ν + τω) gives S(ν + τω) − S(ν) = τω·∇S(z(ν, ω)) for some point z(ν, ω) on the line segment joining ν and ν + τω. When ω is fixed and τ → 0, z(ν, ω) → ν, and so for small values of τ we obtain the approximation (4.12) for ξ(ω, u). Again, we would like to drive the following point home. We do not claim that we have formally proved the above approximation. On the contrary, we believe that it might be an onerous task to do so, as the mean value theorem point z in (4.10) is unknown and the integration limits for ν directly depend on τ. The approximation is stated with the sole purpose of providing an intuitive reason for our theorem (Theorem 3.2) and of providing a clear link between the characteristic function and wave function methods for density estimation.
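For intuition, the characteristic function route itself can be exercised numerically on an assumed 1-D test case: the empirical characteristic function of the gradient samples is inverted on a truncated frequency grid to recover the density at a point:

```python
import numpy as np

rng = np.random.default_rng(2)

# Characteristic function route in 1-D (assumed test case): S(x) = x^2 / 2
# on Omega = [0, 1], so Y = S'(X) = X ~ UNI(0, 1) and the true density at
# u = 0.5 is 1.
Y = rng.uniform(0.0, 1.0, size=40_000)      # samples of Y = grad S(X)

# Empirical characteristic function psi(w) = E[exp(i w Y)] on a grid.
w = np.linspace(-200.0, 200.0, 1001)
psi = np.array([np.exp(1j * wi * Y).mean() for wi in w])

# Truncated inverse Fourier transform of psi recovers the density at u.
u, dw = 0.5, w[1] - w[0]
P_u = (np.sum(psi * np.exp(-1j * w * u)) * dw / (2 * np.pi)).real
```

Each evaluation of ψ costs O(N) work per frequency, which is the computational comparison revisited at the end of this section.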
Furthermore, note that the integral range for ω depends on τ, and so when ω = O(1/τ), the product ωτ does not approach 0 as τ → 0, and hence the above approximation for ξ(ω, u) in (4.12) might seem to break down. To evade this seemingly ominous problem, we manipulate the domain of integration for ω as follows. Fix an ǫ ∈ (0, 1) and split the domain into W_τ and its complement W_∞, where W_τ ≡ ∏_{i=1}^d [−M_i, M_i] with M_i defined in (4.14). By defining M_i as above, note that in W_τ, ω is deliberately made to be O(τ^(ǫ−1)) and hence ωτ → 0 as τ → 0. Hence the approximation for ξ(ω, u) in (4.12) might hold for this integral range of ω. For ω ∈ W_∞, recall that Theorem 3.2 requires the power spectrum P_τ(u) to be integrated over a small neighborhood N_α(u_0) around u_0. By using the true expression for ξ(ω, u) from (4.9) and performing the integral over u prior to those over ω and ν, we find that, since both M_i in (4.14) and the lower and upper limits for ω_i, namely ±(b_i − a_i)/τ, approach ∞ as τ → 0, the Riemann–Lebesgue lemma [4] guarantees that ∀ω ∈ W_∞ the inner integral approaches zero as τ → 0. Hence for small values of τ, we can expect the integral over W_τ to dominate the other. This leads to the following approximation as τ approaches zero. Combining the above approximation with the approximation for ξ(ω, u) given in (4.12), and noting that the integration domains for ω and ν approach R^d and Ω respectively as τ → 0, the integral of the power spectrum P_τ(u) over the neighborhood N_α(u_0) at small values of τ in (4.8) can be approximated by

∫_{N_α(u_0)} (1/(2π)^d) ∫_{R^d} ψ_Y(ω) exp(−iω·u) dω du.

This form exactly coincides with the expression given in (4.2) obtained through the characteristic function formulation.
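The Riemann–Lebesgue behavior invoked here is simple to observe: integrals of a fixed integrable function against increasingly oscillatory exponentials decay. Here is a sketch with an assumed test function f(x) = x(1 − x):

```python
import numpy as np

# Riemann-Lebesgue sketch (assumed example): the integral of a fixed L^1
# function against exp(i w x) tends to zero as |w| grows.
def osc(w, n=400_000):
    x = (np.arange(n) + 0.5) / n      # domain [0, 1]
    f = x * (1 - x)                   # fixed integrable test function
    return abs(np.sum(f * np.exp(1j * w * x)) / n)

vals = [osc(w) for w in (10.0, 100.0, 1000.0)]
```

Since f is smooth and vanishes at the endpoints, the decay here is in fact O(1/w²), faster than the bare Riemann–Lebesgue statement requires.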
The approximations given in (4.12) and (4.17) cannot be proven easily as they involve limits of integration which directly depend on τ . Furthermore, the mean value theorem point z(ν, ω) in (4.10) is arbitrary and cannot be determined beforehand for a given value of τ . The difficulties faced here emphasize the need for the method of stationary phase to formally prove Theorem 3.2.
As we remarked before, the characteristic function and our wave function methods should not be treated as mere reformulations of each other. This distinction is further emphasized when we find our method to be computationally efficient relative to the characteristic function approach in the finite sample-set scenario, where we estimate the gradient density from a finite set of N samples of the function S. Given these N sample values Ŝ and its gradient ∇Ŝ, the characteristic function defined in

5. Discussion. Observe that the integrals ∫_{N_α(u_0)} P_τ(u) du and ∫_{N_α(u_0)} P(u) du give the interval measures of the density functions P_τ and P respectively. Theorem 3.2 states that at small values of τ, both interval measures are approximately equal, with the difference between them being O(√τ). Recall that by definition, P_τ is the normalized power spectrum of the wave function φ(x) = exp(iS(x)/τ). Hence we conclude that the power spectrum of φ(x) can potentially serve as a joint density estimator for the gradient of S at small values of τ. We also built an informal bridge between our wave function method and the characteristic function approach for estimating probability densities by directly attempting to recast the former expression into the latter. The difficulties faced in relating the two approaches reinforce the stationary phase method as a powerful tool to formally prove Theorem 3.2. Our earlier result proved in [8], where we employed the stationary phase method to compute the gradient density of Euclidean distance functions in two dimensions, is now generalized in Theorem 3.2, which establishes a similar gradient density estimation result for arbitrary smooth functions in arbitrary finite dimensions.
Since ∇S(x_n) = u for all n, from continuity it follows that ∇S(x_0) = u and hence x_0 ∈ A_u. Let p_n ≡ x_n − x_0 and h_n ≡ p_n/‖p_n‖. Then, by a first-order expansion of ∇S around x_0, ∇S(x_n) − ∇S(x_0) = H_{x_0} p_n + o(‖p_n‖), where the linear operator H_{x_0} is the Hessian of S at x_0 (obtained from the set of derivatives of the vector field ∇S : R^d → R^d at the location x_0). As ∇S(x_n) = ∇S(x_0) = u and H_{x_0} is linear, we get H_{x_0} h_n → 0; passing to a convergent subsequence h_n → h, we obtain H_{x_0} h = 0. Since h_n is defined above to be a unit vector, so is h, and it follows that H_{x_0} is rank-deficient and det(H_{x_0}) = 0. Hence x_0 ∈ B and u ∈ C, resulting in a contradiction.

Proof of Neighborhood Lemma
Proof. Observe that the set B defined in (2.2) is closed because if x_0 is a limit point of B, from the continuity of the determinant function we have det(H_{x_0}) = 0 and hence x_0 ∈ B. Being a bounded subset of Ω, B is also compact. As ∂Ω is also compact and ∇S is continuous, C is compact and hence R^d − C is open. Then for u_0 ∉ C, there exists an open neighborhood N_r(u_0) for some r > 0 around u_0 such that N_r(u_0) ∩ C = ∅. By letting η = r/2, we get the required closed neighborhood N_η(u_0) ⊂ N_r(u_0) containing u_0.
Since det(H_x) ≠ 0 ∀x ∈ A_{u_0}, points 1, 2 and 3 of this lemma follow directly from the inverse function theorem. As |A_{u_0}| is finite by Lemma 2.1, the closed neighborhood N_η(u_0) can be chosen independently of x ∈ A_{u_0} so that points 1 and 3 are satisfied ∀x ∈ A_{u_0}. In order to prove point 4, note that the eigenvalues of H_y are all non-zero and vary continuously for y ∈ N_η(x). As the eigenvalues never cross zero, they retain their sign, and so the signature of the Hessian stays fixed.

Proof of Density Lemma
Proof. Since the random variable X is assumed to have a uniform distribution on Ω, its density at every location x ∈ Ω equals 1/µ(Ω). Recall that the random variable Y is obtained via a random variable transformation from X, using the function ∇S. The Jacobian of ∇S at a location x ∈ Ω equals the Hessian H_x of the function S at x. Barring the set C, corresponding to the union of the image (under ∇S) of the set of points B (where the Hessian determinant vanishes) and the boundary ∂Ω, the density of Y exists on u ∈ R^d − C and is given by (2.4). Please see well-known sources such as [2] for a detailed explanation.
For the sake of completeness we explicitly prove the well-known result stated in Integral Lemma 3.1.

Proof of Integral Lemma
Proof. Define H(x) to be the indicator function of Ω and set f(x) ≡ H(x) exp(iS(x)/τ). Then, letting v = u/τ and G(v) = F_τ(u), G is a scaled Fourier transform of f. As f is L¹-integrable, Parseval's theorem (see [4]) applies; by noting that |f(x)| = 1 on Ω and accounting for the normalizing factor in (3.1), we get the desired result, namely ∫ |F_τ(u)|² du = 1.

Proof of Cross Factor Nullifier Lemma (fragment). Here n is the unit outward normal to the positively oriented boundary ∂N_η(u_0), parameterized by v. On the right side of (A.12), notice that all terms inside the integral are bounded. The factor τ outside the integral then ensures that the cross term vanishes as τ → 0.

Appendix B. The phase Ψ, restricted to the boundary Γ, may itself be stationary at some locations, tantamount to a stationary point of the first kind [22] for the boundary integral. If so, we need to account for the contribution from the boundary, which could in effect invalidate our theorem and therefore our entire approach. However, the condition for the infinite occurrence of stationary points of the second kind is so restrictive that for all practical purposes they can be ignored. If the given function S is well-behaved on the boundary in the sense explained below, these thorny issues can be sidestepped. Furthermore, as we will be integrating over u to remove the cross phase factors, it suffices that the aforementioned finiteness condition be satisfied for almost all u instead of for all u.
Let the location x ∈ Γ be parameterized by the variable y, i.e., x(y), and let Q(x) denote the (d − 1) × d Jacobian matrix of x(y), whose (i, j)-th entry is given by ∂x_j/∂y_i. Stationary points of the second kind occur at locations x where the phase is stationary along the boundary, which translates to

Q(x)(∇S(x) − u) = 0.    (B.2)

This leads us to define the notion of a well-behaved function on the boundary.

Definition B.1. A function S is said to be well-behaved on the boundary provided (B.2) is satisfied only at a finite number of boundary locations for almost all u.

Definition B.1 immediately raises the following questions: (i) Why is the assumption of a well-behaved S weak? and (ii) Can the well-behaved condition imposed on S be easily satisfied in all practical scenarios? Recall that the finiteness premise of (B.2) depends entirely on the behavior of the function S on the boundary Γ. Scenarios can be manually handcrafted where the finiteness assumption is violated and (B.2) is forced to hold at all locations. Hence it is meaningful to ask: what stringent conditions are required to incur an infinite number of stationary points on the boundary? We would like to convince the reader that in all practical scenarios, S will sustain only a finite number of stationary points on the boundary, and hence it is befitting to assume that the function S is well-behaved on the boundary. The reader should bear in mind that our explanation here is not a formal proof but an intuitive account of why the well-behaved condition imposed on S is reasonable.
To streamline our discussion, we consider the special case where the boundary Γ is composed of a sequence of hyperplanes, as any smooth boundary can be approximated to a given degree of accuracy by a finite number of hyperplanes. On any given hyperplane, Q(x) remains fixed. Recall that from the outset, we omit the set C (i.e., u ∉ C), which includes the image under ∇S of the boundary Γ = ∂Ω. Hence ∇S(x) ≠ u for any point x ∈ Γ when u ∉ C. Since the rank of Q is d − 1 and ∇S − u is required to be orthogonal to all the d − 1 rows of Q for condition (B.2) to hold, ∇S − u is confined to a 1-D subspace. So if we enforce ∇S to vary smoothly on the hyperplane and not be constant, we can circumvent the occurrence of an infinite number of stationary points of the second kind for all u. Additionally, we can safely disregard the characteristics of the function S at the intersections of these hyperplanes, as they form a set of measure zero. To press this point home, we now formulate the worst possible scenario, in which ∇S is a constant vector t. Let D denote a portion of Γ where ∇S = t, and let both u = u_0 and u = u_1 result in an infinite number of stationary points of the second kind on D.
As ∇S − u is limited to a 1-D subspace, we must have t − u_1 = λ(t − u_0) for some λ ≠ 0, i.e., u_1 = (1 − λ)t + λu_0. So in any given region of Γ, there is at most a 1-D subspace (of measure zero) of u which results in an infinite number of stationary points of the second kind in that region. Our well-behaved condition is then equivalent to assuming that the number of planar regions on the boundary where ∇S is constant is finite.
The boundary condition is best exemplified with a 2D example. Consider a line segment on the boundary, x_2 = mx_1 + b. Without loss of generality, assume the parameterization y = x_1. Then Q = [1  m]. Equation (B.2) can be interpreted as S_1 + mS_2 = u_1 + mu_2, where S_i = ∂S/∂x_i. So if we plot the sum S_1 + mS_2 for points along the line, the requirement reduces to the function S_1 + mS_2 not oscillating an infinite number of times around an infinite number of ordinate locations u_1 + mu_2. It is easy to see that the imposed condition is indeed weak and is satisfied by almost all smooth functions, allowing us to affirmatively conclude that the enforced well-behaved constraint (Definition B.1) does not impede the usefulness and application of our wave function method for estimating the joint density of the gradient.
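The finiteness requirement can be checked directly for an assumed test function: along a segment x₂ = mx₁ + b, one simply counts how often S₁ + mS₂ crosses a generic ordinate level u₁ + mu₂:

```python
import numpy as np

# 2-D boundary-condition sketch (assumed example): along the segment
# x2 = m*x1 + b, condition (B.2) reads S_1 + m*S_2 = u_1 + m*u_2.
# For the assumed test function S(x) = sin(x1) + x2^2, we count how many
# times S_1 + m*S_2 crosses an assumed generic level on the segment.
m, b = 0.5, 0.1
t = np.linspace(0.0, 1.0, 10_001)      # parameter y = x1 along the segment
x1, x2 = t, m * t + b
g = np.cos(x1) + m * (2 * x2)          # S_1 + m*S_2 along the segment

level = 1.15                           # a generic value of u_1 + m*u_2
crossings = np.sum(np.diff(np.sign(g - level)) != 0)
```

Here g is unimodal on the segment, so it crosses the level only twice: a finite count, exactly as the well-behaved condition requires.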