Functionals and Functional Derivatives of Wave Functions and Densities

It is shown that the process of conventional functional differentiation does not apply to functionals whose domain (and possibly range) is subject to the condition of integral normalization, as is the case for a domain defined by wave functions or densities: about a given element of such a domain there exists no neighborhood generated by arbitrary variations that also lie in the domain. This is remedied through the generalization of the domain of a functional to include distributions of the form $\omega\delta(\mathbf{r}-\mathbf{r}')$, where $\delta(\mathbf{r}-\mathbf{r}')$ is the Dirac delta function and $\omega$ is a real number. This allows the determination of the rate of change of a functional with respect to changes of the independent variable at each point of the domain, with no reference needed to the values of the functional at different functions in its domain. One feature of the formalism is the determination of rates of change of general expectation values (that may not necessarily be functionals of the density) with respect to the wave functions or the densities determined by the wave functions forming the expectation value. It is also shown that ignoring the conditions of conventional functional differentiation can lead to false proofs, illustrated through a flaw in the proof that all densities defined on a lattice are $v$-representable. In a companion paper, the mathematical integrity of a number of long-standing concepts in density functional theory is studied in terms of the formalism developed here.


Introduction
Functionals and functional derivatives [1] [2] exhibit a ubiquitous presence in mathematical physics, from the calculus of variations to field theories. At the same time, this seems to generate the impression that the concept of functional differentiation applies only to functionals, as indeed that of ordinary differentiation applies to functions of coordinates. In the latter case, clearly one does not have a derivative unless a function is present to which the procedure of differentiation can be meaningfully applied. The central result of this paper is that functional differentiation can also be interpreted as the rate of change of any quantity (the dependent variable) that depends on a function of coordinates (the independent variable), regardless of whether or not the dependent variable satisfies the rigorous definition of a functional of the independent variable.
Functional or not, the rate of change defines the change induced in the dependent variable with respect to a change at a single point in the domain of its independent variable, and in this interpretation it can be applied to a single pair, $(f(\mathbf{r}), F[f])$, where $f(\mathbf{r})$, the independent variable, is a function of coordinates, and $F[f]$, the dependent variable, is determined through knowledge of $f(\mathbf{r})$. In this form, the formalism developed here is applicable to a wave function, or to a single expectation value of an operator with respect to a wave function, even if the wave function and expectation value are not parts of functionals.
Indeed, the concept of the derivative as a rate of change is well known from the calculus: the derivative of a function, say $y'(x_0)$, at a given point, $x_0$, in its domain of definition gives the rate of change of the function at $x_0$ with respect to changes in its domain in the neighborhood of $x_0$. In this case, the only way a change can be induced in the function is through a change in the variable (parameter), and the derivative takes the usual form of a limit: the ratio of the difference in the function corresponding to two different points in the domain, to the difference between the two corresponding points, as the latter difference is allowed to vanish. Here, the rate of change is synonymous with the derivative. This exhibits the requirement that every point in the domain of $y(x)$ be surrounded by a possibly small but finite neighborhood throughout which the function is continuously and uniquely differentiable. In the absence of such a neighborhood, the function may not possess a derivative ($y(x) = x^2$, where $x$ is an integer, positive, negative, or zero, is a case in point), and the concept of rate of change, as well as of the derivative, becomes moot.
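The limit described in words above, the definition the text labels (0.1), presumably takes the standard form (a reconstruction; the notation follows the surrounding discussion):

```latex
y'(x_0) \;=\; \lim_{\Delta x \to 0} \frac{y(x_0 + \Delta x) - y(x_0)}{\Delta x} .
```

The existence of this limit requires precisely the finite neighborhood of $x_0$ discussed above.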
A characteristic feature of the derivative is the connection, through the definition in (0.1), of the value of the function at $x_0 + \Delta x$ to its value at $x_0$ through the first-order term in a Taylor series expansion in the limit $\Delta x \to 0$, $y(x_0 + \Delta x) = y(x_0) + y'(x_0)\,\Delta x + \cdots$ (0.2). Analogously to a function, a functional is defined as a set of ordered pairs, $(f(\mathbf{r}), F[f])$, that maps the independent variable, $f(\mathbf{r})$, an element of a normed linear space of functions (a Banach space), to the field of real or complex numbers through a well-defined mathematical procedure, e.g., an integral, Equation (0.3). As for ordinary functions, the collection of first entries, $f(\mathbf{r})$, forms the domain of the functional, while the set of second entries, $F[f]$, forms its range. Examples of more general definitions of the range of a functional are given in the following. We consider only single-valued functionals, in which no two elements in the range correspond to the same element in the domain.
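As a concrete example of such a mapping, one plausible form of the linear functional the text refers to as Equation (0.3) is a weighted integral (the weight function $w(\mathbf{r})$ is our notation, not the paper's):

```latex
F[f] \;=\; \int w(\mathbf{r})\, f(\mathbf{r})\, \mathrm{d}^3 r ,
```

which assigns a single number to each function $f(\mathbf{r})$ in the domain.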
Again in analogy with ordinary differentiation, the functional derivative, $\delta F[f]/\delta f(\mathbf{r})$, is defined through the limit

$$\int \frac{\delta F[f]}{\delta f(\mathbf{r})}\,\phi(\mathbf{r})\,\mathrm{d}^3 r \;=\; \lim_{\epsilon \to 0} \frac{F[f + \epsilon\phi] - F[f]}{\epsilon}, \qquad (0.4)$$

where $\phi(\mathbf{r})$ is arbitrary and $\epsilon$ is a real number. This definition hinges upon the existence of a small, albeit finite, neighborhood of functions, $\phi(\mathbf{r})$, about each point (function), $f(\mathbf{r})$, in the domain of the functional, such that every function, $f(\mathbf{r}) + \epsilon\phi(\mathbf{r})$, belongs to the domain. Designating the domain of a functional by the symbol, $D_f$, we have that for any $f \in D_f$, $f + \epsilon\phi \in D_f$. (The concept of an arbitrary $\phi(\mathbf{r})$ is understood under the obvious requirement that all relevant integrals must be well-defined.) The need for arbitrariness of the test function, $\phi(\mathbf{r})$, and the consequences of its absence, are discussed in the Appendix.
This formalism is immediately applicable to functionals written as explicit functions of the coordinates, the independent variable, and its derivative,

$$F[f] = \int_\Omega F\big(f(\mathbf{r}), \nabla f(\mathbf{r}), \mathbf{r}\big)\,\mathrm{d}^3 r.$$

For reference purposes, and to distinguish such functionals from the more general structures that enter the discussion in this paper, we will refer to them as functionals of form. In the integral above, $\Omega = L^3$ is the volume of a cube of side, $L$, that, in general, can become arbitrarily large.
Beginning with the next section, we examine the analytic properties of derivatives of functionals as compared to those of a function.

Functional Derivatives
In view of (0.4), the change in the functional under a continuous change in the independent variable takes the form

$$\delta F = \int \frac{\delta F[f]}{\delta f(\mathbf{r})}\,\delta f(\mathbf{r})\,\mathrm{d}^3 r,$$

summed (integrated point by point) over the whole range of $\mathbf{r}$ values [2]. It follows that $\delta F[f]/\delta f(\mathbf{r})$ is the rate of change of the functional, $F[f]$, with respect to the change in the independent variable at each point in its domain. This rate of change, multiplied by a change at each point, $\mathbf{r}$, of the independent variable that lies in the domain of the functional, and integrated over all such points, yields the change in the functional. This is analogous to the change in a function, $y(x)$, induced by a small (infinitesimal) change in the independent variable, $x \to x + \Delta x$, which from (0.2) (the first term in the Taylor series of a differentiable function) takes the form $\Delta y = y'(x)\,\Delta x$: the change in the function at $x$ equals its rate of change at $x$ times a difference between two points in its domain at $x$.
The analogous feature in the case of functionals rests on the existence of a small neighborhood of functions about a value (a function) in the domain of the functional in which the functional is uniquely and continuously differentiable through (0.4). In that case, there exists a neighborhood defined by small but arbitrary changes in the independent variable that are elements of the domain of the functional, so that the procedure in (0.4) is well-defined. As in the case of a function, the change in the functional induced by a change in the independent variable takes the form (the first term in a Taylor-series-like expansion of the functional)

$$F[f + \delta f] = F[f] + \int \frac{\delta F[f]}{\delta f(\mathbf{r})}\,\delta f(\mathbf{r})\,\mathrm{d}^3 r + \cdots, \qquad (0.8)$$

where $\delta f(\mathbf{r})$ is an arbitrary (albeit small) change in the function $f(\mathbf{r})$, defined throughout space, such that $f(\mathbf{r}) + \delta f(\mathbf{r})$ lies in the domain of the functional.
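The first-order relation above lends itself to a direct numerical check. The following is a hedged illustration of our own construction (not taken from the paper): the functional $F[f]=\int f(x)^2\,\mathrm{d}x$ is discretized on a grid, its rate of change $\delta F/\delta f(x) = 2f(x)$ is applied point by point to a small variation, and the resulting first-order prediction is compared with the exact change.

```python
import numpy as np

# Hedged illustration (our construction, not from the paper): discretize
# F[f] = ∫ f(x)^2 dx on a uniform grid, for which the functional
# derivative (rate of change) is δF/δf(x) = 2 f(x).  The first-order,
# Taylor-series-like change of Equation (0.8) is δF ≈ ∫ [δF/δf(x)] δf(x) dx.
x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]

def F(f):
    # rectangle-rule approximation of ∫ f(x)^2 dx
    return np.sum(f**2) * dx

f = np.sin(np.pi * x)                       # a sample function in the domain
phi = np.exp(-((x - 0.5) ** 2) / 0.01)      # an arbitrary smooth variation
eps = 1e-6
delta_f = eps * phi

exact_change = F(f + delta_f) - F(f)
rate_of_change = 2.0 * f                    # δF/δf(x) = 2 f(x)
predicted_change = np.sum(rate_of_change * delta_f) * dx

print(exact_change, predicted_change)
```

The discrepancy between the two numbers is of second order in the variation, consistent with the truncated expansion in (0.8).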

Functionals over Domains Constrained by Normalization
In the study of interacting $N$-particle systems, a density, $n(\mathbf{r})$, describes the probability of finding a particle at position, $\mathbf{r}$, and consequently must be non-negative, $n(\mathbf{r}) \ge 0$, and exhibit integral normalization,

$$\int n(\mathbf{r})\,\mathrm{d}^3 r = N. \qquad (0.9)$$

The normalization is also a formal requirement of the definition of the density in terms of the wave function describing the configurations of the system. For the case of electrons, our interest in this work, the system of $N$ particles is described by a wave function, $\Psi(\mathbf{r}_1, \ldots, \mathbf{r}_N)$, of $N$ single-particle coordinates, that is antisymmetric with respect to interchange of any coordinate pair (and spins, for electrons). In terms of a wave function, the density takes the form

$$n(\mathbf{r}) = N \int \left|\Psi(\mathbf{r}, \mathbf{r}_2, \ldots, \mathbf{r}_N)\right|^2 \mathrm{d}^3 r_2 \cdots \mathrm{d}^3 r_N. \qquad (0.10)$$
Under the general condition of unit normalization for the wave function,

$$\int \left|\Psi(\mathbf{r}_1, \ldots, \mathbf{r}_N)\right|^2 \mathrm{d}^3 r_1 \cdots \mathrm{d}^3 r_N = 1, \qquad (0.11)$$

the normalization condition, Equation (0.9), follows. We now examine the determination of functional derivatives of functionals over the domain of densities, i.e., functionals whose elements are subject to the condition of integral normalization.

Derivatives with Respect to the Density
As an illustrative example, consider the linear functional of Equation (0.3), defined over the set of densities normalized to $N$. Assuming the existence of an arbitrary function, $\phi(\mathbf{r})$, such that $n(\mathbf{r}) + \epsilon\phi(\mathbf{r})$ is also a density normalized to $N$, the definition in (0.4) formally yields the derivative. Unfortunately, in the present case, the derivation of the last equality is invalid. When the domain of the functional is constrained to integral normalization, the test function, $\phi(\mathbf{r})$, cannot be arbitrary, but must satisfy the condition

$$\int \phi(\mathbf{r})\,\mathrm{d}^3 r = 0, \qquad (0.15)$$

or the changed function, $n(\mathbf{r}) + \epsilon\phi(\mathbf{r})$, can fail the condition of integral normalization and thus lie outside the domain of the functional. In this case, neither the derivative nor its associated rate of change can seemingly be defined. Crucially, it is no longer possible to obtain the value of a functional at a function, $n(\mathbf{r}) + \epsilon\phi(\mathbf{r})$, from its value at $n(\mathbf{r})$ through a functional Taylor series of the type indicated in (0.8). The difficulties associated with functional derivatives with respect to the density under the condition of integral normalization of the differentiating quantity have already been noted in the literature [4] (and references therein), with an attempt made at rectification. That formalism rests on attempts to obtain the change in the functional under a change in the independent variable through a modified functional Taylor-series-like expression. It overlooks the cases in which the derivative is sought for isolated instances of a dependent variable and its associated independent variable, in which the very concept of a Taylor series becomes inapplicable.
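The constraint just described is easy to exhibit numerically. The following hedged illustration (our construction, not from the paper) shows on a grid that a variation preserves the normalization integral only when the test function integrates to zero:

```python
import numpy as np

# Hedged illustration (our construction, not from the paper): on a grid
# over [0, 1], a function normalized to a fixed value keeps its
# normalization under a variation eps*phi(x) only when phi integrates to
# zero -- the condition the text denotes (0.15).  An arbitrary phi pushes
# the changed function out of the domain of densities.
x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
n = np.full_like(x, 2.0)        # a uniform stand-in for a density

def norm(f):
    # rectangle-rule approximation of the normalization integral
    return np.sum(f) * dx

base = norm(n)

phi_arbitrary = np.exp(-((x - 0.5) ** 2) / 0.02)   # integral clearly nonzero
phi_zero_mean = np.sin(2 * np.pi * x)              # integral (numerically) zero
eps = 1e-3

norm_arbitrary = norm(n + eps * phi_arbitrary)
norm_zero_mean = norm(n + eps * phi_zero_mean)

print(base, norm_arbitrary, norm_zero_mean)
```

Only the zero-mean variation leaves the normalization intact; the arbitrary one shifts it by $\epsilon\int\phi\,\mathrm{d}x$.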
Although the condition in (0.15) is characteristic of domains restricted to integral normalization, the rate of change of functionals defined over such domains can still be rigorously defined.

Inherent Property of Rate of Change
For reasons mentioned below, it is instructive to consider the identity functional,

$$I[f](\mathbf{r}) = \int \delta(\mathbf{r} - \mathbf{r}')\,f(\mathbf{r}')\,\mathrm{d}^3 r' = f(\mathbf{r}),$$

which returns the value of the function, $f(\mathbf{r})$, at the point, $\mathbf{r}$, given by the integrand in the last expression. In this sense, the Dirac delta function can be viewed as a response function, or susceptibility of free space, connecting the response of a medium (here coordinate space) to a perturbation distributed over the space. Now, using the definition of the functional derivative, Equation (0.4), for general domains unconstrained by normalization, we obtain the result

$$\frac{\delta f(\mathbf{r})}{\delta f(\mathbf{r}')} = \delta(\mathbf{r} - \mathbf{r}'). \qquad (0.18)$$

We note that the identical result is obtained through the procedure

$$\frac{\delta f(\mathbf{r})}{\delta f(\mathbf{r}')} = \lim_{\omega \to 0} \frac{\left[f(\mathbf{r}) + \omega\,\delta(\mathbf{r} - \mathbf{r}')\right] - f(\mathbf{r})}{\omega} = \delta(\mathbf{r} - \mathbf{r}'). \qquad (0.19)$$

Before turning to the justification for this definition of the functional derivative, and its associated rate of change, we note some important properties. The rate of change of the identity functional is independent of the particular function in the domain of the functional, and is thus an inherent property of the functional. Most importantly, the rate of change is determined by the change, $\delta f(\mathbf{r})$, at $\mathbf{r}$, induced through a change in $f(\mathbf{r})$ itself, and hence requires no knowledge of a particular neighborhood, or of a different function in the domain of the functional. The functional derivative (interpreted as the rate of change) is thus an inherent property of the function at each point of the domain of the identity functional, and provides no connection to the value of the functional at a different function through a Taylor-series-like expansion. Consequently, the rate-of-change concept emerges as a fundamental, inherent property of functionals that may be obtained through conventional functional differentiation, but exists even in cases where such differentiation is blocked. Specifically, it is applicable to each function in the domain of the functional, removing the need for the use of a test function.
We can now write the general expression,

$$\frac{\delta f(\mathbf{r})}{\delta f(\mathbf{r}')} = \delta(\mathbf{r} - \mathbf{r}'),$$

for all cases, irrespective of normalization requirements. The Appendix establishes this general result on intuitive grounds.
It remains to establish that the definition of Equation (0.19) indeed corresponds to the derivative of the identity functional.

Functionals Over Delta-Function Distributions
In formal mathematics, the Dirac delta function is an example of a distribution (or generalized function) that lies outside the realm of ordinary functions [5]. The Dirac delta function cannot be obtained as the difference between two functions, or generally as a linear superposition of ordinary functions. Equivalently, no difference between two functions can be confined to a single point (a set of zero measure). Yet functional derivatives, even in the conventional sense, encode the relative response of the functional due to a change at a single point of the function in the domain of the functional. This discussion motivates the following generalization of the domain of any functional, which leads to the direct determination of a rate of change, irrespective of whether or not this can be obtained through conventional functional differentiation. This is accomplished by extending the domain, $D_f$, to the union of the domain of the functional and the set of distributions proportional to the Dirac delta function. A schematic representation of this generalization is given in Figure 1.
Let one of the axes in a three-dimensional Cartesian coordinate system represent the functions in $D_f$, and let an axis perpendicular to it represent the values of a functional, $F[f]$. The third axis represents the distributions, $\omega\,\delta(\mathbf{r} - \mathbf{r}')$, where $\omega$ is a real number and the coordinates, $\mathbf{r}$ and $\mathbf{r}'$, are arbitrary. The value $\omega = 0$ designates the domain, $D_f$. The identity functional now takes the generalized form in which it acts on $f(\mathbf{r}) + \omega\,\delta(\mathbf{r} - \mathbf{r}')$, and can be expressed in the language of a function mapped onto itself,

$$\int \delta(\mathbf{r} - \mathbf{r}_1)\left[f(\mathbf{r}_1) + \omega\,\delta(\mathbf{r}_1 - \mathbf{r}')\right]\mathrm{d}^3 r_1 = f(\mathbf{r}) + \omega\,\delta(\mathbf{r} - \mathbf{r}'),$$

where the integral is evaluated through the recursion relation for delta functions [6],

$$\int \delta(\mathbf{r} - \mathbf{r}_1)\,\delta(\mathbf{r}_1 - \mathbf{r}')\,\mathrm{d}^3 r_1 = \delta(\mathbf{r} - \mathbf{r}'),$$

along with the property $\delta(\mathbf{r} - \mathbf{r}') = \delta(\mathbf{r}' - \mathbf{r})$, and $\omega$ is a real number associated with a single point, $\mathbf{r}'$, in space. This feature is often described by replacing $\omega$ with the infinitesimal $\delta n(\mathbf{r}')$. Note that now the constraint of integral normalization no longer applies to the identity functional. Thus, in the domain that includes distributions, we have

$$\frac{\delta f(\mathbf{r})}{\delta f(\mathbf{r}')} = \lim_{\omega \to 0} \frac{\left[f(\mathbf{r}) + \omega\,\delta(\mathbf{r} - \mathbf{r}')\right] - f(\mathbf{r})}{\omega} = \delta(\mathbf{r} - \mathbf{r}'),$$

where $f(\mathbf{r})$ remains fixed. The main property of this result is that it is independent of $\omega$, so that it can be applied on the axis $\omega = 0$ designating the domain of the functional, $D_f$. At any point on that axis (i.e., for any $f \in D_f$), the derivative defining the rate of change of any functional over a domain, irrespective of normalization restrictions, now takes the form

$$\frac{\delta F[f]}{\delta f(\mathbf{r}')} = \lim_{\omega \to 0} \frac{F[f + \omega\,\delta(\cdot - \mathbf{r}')] - F[f]}{\omega}, \qquad (0.27)$$

where the change $\Delta f(\mathbf{r}') = \omega$ is an infinitesimal (but generally a real number). The definition in (0.27) remains valid in all cases, even when the domain of a functional, $D_f$, is required to satisfy the condition of integral normalization.
The general validity of this definition follows because the derivative of any function with respect to itself is not a function, but a distribution. As such, it lies outside the domain of functions (which may be constrained by normalization). By writing $\delta f(\mathbf{r})$ not explicitly as a difference of two functions, but rather as the derivative of a function with respect to itself times a vanishing infinitesimal, we accomplish the variation of the functional without the need to consider differences of functions in its conventional domain.
Furthermore, the definition is applicable to all functionals of form because the derivative in these cases can be expressed as the ordinary derivative of a differentiable function times the functional derivative of the identity.In fact, it is applicable to isolated single pairs of the form (independent variable, dependent variable) that may not form part of a functional (collection of pairs).The significance of these results is highlighted below.

Derivatives of Differentiable Functionals
The following is a well-known property of functional differentiation. Let a functional of form, $F[f]$, correspond to a function of the parameter, $f$, that is differentiable in the ordinary sense with respect to $f$, so that the quantity, $\mathrm{d}F/\mathrm{d}f$, is well defined. Then the functional derivative takes the form

$$\frac{\delta F[f]}{\delta f(\mathbf{r}')} = \frac{\mathrm{d}F}{\mathrm{d}f}\bigg|_{f = f(\mathbf{r})}\, \delta(\mathbf{r} - \mathbf{r}'), \qquad (0.28)$$

which justifies the general result in (0.27).
Although the formalism just completed provides a justification of the derivative given by (0.27), this result is already freely used in the literature [7].
A simple example illustrates the point. Given a functional written explicitly in terms of the independent variable, say $F[f] = f(\mathbf{r})^2$, the rule established in (0.28) yields

$$\frac{\delta F[f]}{\delta f(\mathbf{r}')} = 2 f(\mathbf{r})\,\delta(\mathbf{r} - \mathbf{r}'). \qquad (0.29)$$

This is an example of the rule of parametric differentiation that leads to the rate of change of any functional of form (one written explicitly in terms of the independent variable): differentiate the function, $F[f]$, with $f$ treated as a parameter, evaluate the derivative at $f(\mathbf{r})$, and affix the Dirac delta function to the result. The rule is readily extended to compound functions of a function, as well as to cases in which such functions occur in expressions under integral signs.
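The parametric rule can be checked numerically for a functional under an integral sign. The following hedged sketch (our construction, not from the paper) uses $F[f]=\int \sin(f(x))\,\mathrm{d}x$, for which the rule gives $\delta F/\delta f(x') = \cos(f(x'))$ once the Dirac delta is integrated out; on a grid, a change $\omega\,\delta(x-x_k)$ is modelled by raising $f$ at the single node $x_k$ by $\omega/\mathrm{d}x$:

```python
import numpy as np

# Hedged numerical check (our construction, not the paper's example): for
# F[f] = ∫ sin(f(x)) dx, the parametric rule of (0.28) gives the rate of
# change δF/δf(x') = cos(f(x')) after integrating out the delta function.
x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]

def F(f):
    return np.sum(np.sin(f)) * dx

f = 0.5 + 0.3 * np.cos(2 * np.pi * x)    # a sample function
k = 700                                   # the grid point being perturbed

omega = 1e-8                              # strength of the delta-like change
f_pert = f.copy()
f_pert[k] += omega / dx                   # discrete stand-in for ω δ(x - x_k)

rate_numeric = (F(f_pert) - F(f)) / omega
rate_parametric = np.cos(f[k])            # the rule: d(sin f)/df at f(x_k)

print(rate_numeric, rate_parametric)
```

The single-point perturbation reproduces the parametric result without any reference to a second function in the domain, in line with the rate-of-change interpretation.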

Summary
The rate of change of a functional with respect to the change of the independent variable at one point in space is an inherent property of a single point in the functional (a single ordered pair, (independent variable, dependent variable)), and in every case it can be defined without reference to the value of the functional, or of the independent variable, at different ordered pairs. As shown below, this allows the definition of the rate of change of quantities dependent on functions even if the dependence extends only to a single pair, $(f(\mathbf{r}), F[f])$, that may or may not be part of a functional. However, a Taylor-series-like representation of the functional at points close to a given independent variable can only be defined through the integral in (0.8) if the domain of the functional admits all possible infinitesimal neighborhoods about each point in the domain that differ arbitrarily from that point, and throughout which the functional is continuously differentiable. In that case, the rate of change can be used to determine the value of the dependent variable at points (functions) near the function where the rate of change has been defined.
At the same time, the lack of a Taylor series representation is of no consequence in cases where the functional consists of a single pair of independent variable and its associated dependent variable, where the concept of a Taylor series becomes moot.As pointed out in the following discussion, far from being a limitation, this feature is consistent with the analytic properties of wave functions as well as with the manner in which the study of nature proceeds in terms of non-interacting systems (see following sections and comment in the Discussion section).

Functionals of Wave Functions and Densities
We consider the case in which the domain of a functional is required to satisfy a set of additional conditions, such as normalization to an integer, as in the case of the wave functions of a many-particle Hilbert space and the corresponding densities obtained from them, Equations (0.9), (0.10) and (0.11). Now, there exists no neighborhood about a particular density such that arbitrary variations of the form, $n(\mathbf{r}) + \epsilon\phi(\mathbf{r})$, remain in the domain. (Unless $\phi(\mathbf{r})$ integrates to zero, the quantity $n(\mathbf{r}) + \epsilon\phi(\mathbf{r})$ is normalized to a fractional number, failing the definition of a density.) In this case, however, the concept of the functional derivative as a rate of change remains valid and applicable, and it retains all the properties attending the rate of change, such as the assessment of the value of a quantity at a particular point with respect to its minimum based on the value of the rate of change at that point. The remainder of the paper is devoted to the exploitation of this feature and the derivation of formal results resting on it.
Two more advantages emerge.First, the rate of change of an expression such as an expectation value of an operator with respect to a wave function (defining the expectation value), or the density (defined by the wave function) can be obtained irrespective of whether or not the expectation value is a functional of the wave function or the density.
The second advantage concerns expressions, possibly functionals of wave functions or the density, that do not exhibit the independent variable explicitly but are nonetheless dependent on it. Such expressions are not defined as mere functions of the independent variable (functionals of form), but rather by means of a procedure based on the independent variable (see Section 7.2). We shall refer to such functionals as functionals of process, and in the following we develop the general formalism for their derivatives (rates of change).
We generalize the concept of independent and dependent variable to apply to any ordered pair of the form, $(s(\mathbf{x}), S[s])$, where $s(\mathbf{x})$ is a function of a multi-dimensional coordinate space, such as a wave function or a density, and $S[s]$ is a quantity determined through a procedure that depends exclusively on $s(\mathbf{x})$ but may not necessarily be written explicitly in terms of $s(\mathbf{x})$. In all that follows, we seek to determine the rate of change of functionals (or, generally, of expectation values that are not necessarily functionals over wave functions or densities) with respect to changes at one point of the independent variable. We identify cases in which $S[s]$, the dependent quantity, can be expressed as a mere function of its independent variable, $s(\mathbf{x})$, and differentiate based on the procedure of parametric differentiation derived from the identity in (0.29). In each case, we express the dependent quantity (functional or not) in terms of the independent variable and proceed to obtain its rate of change with respect to that variable at a given, fixed independent variable.

Functional Derivatives with Respect to Potential
The first demonstration of parametric differentiation (functional differentiation through (0.29)) is to determine the derivative, $\delta f_j(\mathbf{r})/\delta v(\mathbf{r}')$, where $f_j(\mathbf{r})$ is an eigensolution of the single-particle Schrödinger equation for a potential, $v(\mathbf{r})$, with $\epsilon_j$ the corresponding eigenvalue. The evaluation of the derivative is most conveniently carried out on the basis of the integral representation of the Schrödinger equation, the Lippmann-Schwinger equation at energy, $E$, where $\chi(\mathbf{r})$ is the solution in the absence of the potential and encodes the boundary conditions (behavior at infinity). We bypass the question of whether or not these solutions define functionals of the potential; we use only the fact that they are parametrically dependent on it, and we seek the rate of change of the solution with respect to the change in the potential at a given point. The parametric dependence of the solution on the potential is exhibited through the iterative solution of the Lippmann-Schwinger equation. Analogously, the evaluation of $\delta f_j(\mathbf{r})/\delta v(\mathbf{r}')$ is obtained as the iterative solution of the corresponding expression, (0.36), which can be summed in the limit to yield the result in (0.37).
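Schematically, the structure of the argument is as follows (a hedged reconstruction, since the paper's Equations (0.35)-(0.37) are not reproduced above; the symbols $G_0$ and $G$ for the free and full Green functions are our notation):

```latex
f_j(\mathbf{r}) = \chi(\mathbf{r})
   + \int G_0(\mathbf{r}, \mathbf{r}_1; E)\, v(\mathbf{r}_1)\, f_j(\mathbf{r}_1)\,\mathrm{d}^3 r_1,
\qquad
\frac{\delta f_j(\mathbf{r})}{\delta v(\mathbf{r}')}
   \;=\; G(\mathbf{r}, \mathbf{r}'; E)\, f_j(\mathbf{r}') .
```

Iterating the first relation generates the Born series, which exhibits the parametric dependence of the solution on the potential; differentiating the series term by term with the parametric rule and resumming produces a rate of change built entirely from quantities defined by the given potential.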
It is worth emphasizing that the last expression for the derivative (the rate of change of $f_j(\mathbf{r})$ at a point, $\mathbf{r}$, with respect to a change in the potential at $\mathbf{r}'$) is obtained strictly in terms of the solution for the given potential, with no requirement of considering the solutions at different potentials. Consistent with the result established above, the functional derivative in (0.37) is confined to a single Hilbert space, the Hilbert space defined by the potential, $v(\mathbf{r})$ (and the number of particles under $v(\mathbf{r})$).

Derivatives of Expectation Values
Consider a generally complex, many-particle wave function, $\Psi(\mathbf{x})$, in multidimensional coordinate space, subject to the normalization condition

$$\int \left|\Psi(\mathbf{x})\right|^2 \mathrm{d}\mathbf{x} = 1, \qquad (0.38)$$

and the expectation value,

$$O[\Psi] = \int \Psi^{*}(\mathbf{x})\,\hat{O}\,\Psi(\mathbf{x})\,\mathrm{d}\mathbf{x}, \qquad (0.39)$$

for an operator, $\hat{O}$, in the space.
For a fixed $\hat{O}$, the integral in (0.39) defines a functional of $\Psi^{*}(\mathbf{x})$ and $\Psi(\mathbf{x})$, or equivalently of $\Psi$, in which the independent variable, $\Psi$, ranges over all multidimensional functions that obey the normalization condition in (0.38) and for which the integral in (0.39) is well defined.
We seek to determine the functional derivative of this expectation value with respect to $\Psi$, through variations $\Psi + \epsilon\Phi$, where $\Phi$ is arbitrary. At this point, however, the operation of conventional functional differentiation runs into an insurmountable barrier.
Even though the functional is one of form, exhibiting the independent variable explicitly, the requirement of arbitrariness may cause the quantity, $\Psi + \epsilon\Phi$, to violate the condition of normalization, thus failing to satisfy the definition of a many-particle wave function.
The functional derivative interpreted as a rate of change can be obtained, however, through a generalization of the concept of the parametric derivative to multi-dimensional space, using the definition of the multidimensional Dirac delta function and its properties under integration. The feature that allows differentiation of expectation values is clear: the quantity, $\omega\,\delta(\mathbf{x} - \mathbf{x}')$, can be interpreted as the difference between two elements in the domain of the functional obtained at a single (possibly multi-dimensional) point! In this sense, the use of the Dirac delta function yields the correct functional derivative. As discussed below, each element of the set of wave functions leading to a given density possesses a well-defined functional derivative with respect to the density. This derivative is obtained through knowledge of the density alone, with no reference made to wave functions leading to other densities.
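For the expectation value of (0.39), the parametric procedure can be sketched as follows (our reconstruction; here the variation is taken with respect to $\Psi^{*}$, with $\Psi$ held fixed):

```latex
\delta O[\Psi]
  = \int \big[\omega\,\delta(\mathbf{x} - \mathbf{x}')\big]\, \hat{O}\,\Psi(\mathbf{x})\,\mathrm{d}\mathbf{x}
  = \omega\, \hat{O}\,\Psi(\mathbf{x}')
\quad\Longrightarrow\quad
\frac{\delta O[\Psi]}{\delta \Psi^{*}(\mathbf{x}')} = \hat{O}\,\Psi(\mathbf{x}') .
```

The result involves only the given wave function, evaluated at the point of variation, with no appeal to neighboring wave functions in the domain.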
The set of all possible expectation values of many-particle operators with respect to the elements of $S_{\Psi \to n}$ also forms a multivalued functional of the density.
Although rigorous, Cioslowski's formal procedure for the derivative is computationally out of reach. Here, we develop an equally rigorous alternative technique for differentiating the elements of $S_{\Psi \to n}$ with respect to the density. Not unexpectedly, the method rests on the concept of parametric differentiation.
We recall the basic requirement of parametric differentiation of one function by another: the symbol, $\delta f(\mathbf{r})/\delta g(\mathbf{r}')$, encodes the relative rate of change of $f(\mathbf{r})$ at a point, $\mathbf{r}$, induced through the change of the function $g$ at a generally different point, $\mathbf{r}'$. The expression, $\delta f(\mathbf{r})$, does not refer to the difference between two functions throughout space; rather, it is a number (an infinitesimal that is allowed to vanish in a limiting process) describing the change in a given function at a given point. A similar description applies to $\delta g(\mathbf{r}')$. Therefore, as stated above, the expression, $\delta f(\mathbf{r})/\delta g(\mathbf{r}')$, has manifest meaning if and only if the coordinate dependence of the function $f(\mathbf{r})$ can be mapped onto an analytic (differentiable) function of $g(\mathbf{r})$ (e.g., $f(\mathbf{r}) = F(g(\mathbf{r}))$, say). In that case, the derivative, $\delta f(\mathbf{r})/\delta g(\mathbf{r}')$, proceeds by applying the expression for the derivative of the functional identity, (0.18), to the expression providing an exact representation of $f(\mathbf{r})$ throughout space in terms of its parametrization by $g(\mathbf{r})$. The following point is possibly both self-evident and subtle: there is nothing in the form of a function (its dependence on coordinates) that betrays its functional dependence on a particular density. A given function (e.g., $A\sin(\mathbf{k} \cdot \mathbf{r})$) can be part of the orbitals forming various Slater determinants (an infinite number), each leading to a different density. In determining the functional derivative (parametric derivative) of an expectation value with respect to one of these densities, the derivative of the particular function must be obtained with respect to the density associated with that determinant. Whether or not the derivative of this function exists with respect to a different density, associated with a different Slater determinant and a different expectation value, is inconsequential to the proceedings confined to a given density [11], so that the various parametric derivatives with respect to different densities are unrelated to one another and cannot be connected through parametric differentiation. Now, in the case in which a function is to be differentiated with respect to a given density, there exists an immediate and exact mapping of its dependence on coordinates to an analytic, differentiable form that explicitly exhibits the density: it is an expansion in the equidensity basis [12].

Equidensity Basis
For the sake of completeness, in this subsection, we introduce the spin variable [13] [14].
An exact mapping of a function, $f_j(\mathbf{r}\sigma)$ (say, an orbital arising from the solution of a single-particle Schrödinger equation), onto an analytic (differentiable) expression can be had in terms of the expansion of the function in the equidensity basis. Namely, for each orbital, $f_j(\mathbf{r}\sigma)$, that as an element of a Slater determinant contributes to the formation of a density, with the sum ranging over some collection of orbitals, we write an expansion in the elements of an orthonormal and complete basis [15] [16] [12], the equidensity basis, whose elements are written explicitly in terms of the density through the coordinates $R_1, R_2, R_3$, defined as cumulative integrals of the density, Equations (0.52) and (0.53). The transformation, $\mathbf{r} \to \mathbf{R}$, maps [12] three-dimensional coordinate space onto the volume of a cube of side $2\pi$, with the points at infinity mapped onto the surface of the volume [12]. Note that the $R_j$ are written explicitly in terms of the density through a function of the density that is uniquely differentiable in terms of the density based on (0.18). In accordance with the closing remarks of the previous section, an orbital may contribute to a number of different densities (in principle an infinite number) and thus possesses unique parametric derivatives with respect to each of these densities.
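For orientation, a common equidensity (Harriman-type) construction takes the following form; this is a hedged sketch, since Equations (0.49)-(0.53) are not reproduced above, and the exact definitions used in the paper may differ in detail:

```latex
\phi_{\mathbf{j}}(\mathbf{r})
  = \sqrt{\frac{n(\mathbf{r})}{N}}\; e^{\, i\,\mathbf{j}\cdot\mathbf{R}(\mathbf{r})},
\qquad \mathbf{j} \in \mathbb{Z}^{3},
\qquad
R_{1}(\mathbf{r}) = \frac{2\pi}{N}
   \int_{-\infty}^{x}\!\mathrm{d}x' \int_{-\infty}^{\infty}\!\mathrm{d}y'
   \int_{-\infty}^{\infty}\!\mathrm{d}z'\; n(x', y', z'),
```

with analogous conditional cumulative integrals defining $R_2$ and $R_3$. Orthonormality and completeness hold for any density normalized to $N$, and the density enters each basis element explicitly and differentiably.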
The particular choice of $R_1, R_2, R_3$ given in Equations (0.52) and (0.53) is not unique, with other choices, e.g., permuting coordinates or using other coordinate systems, being possible [17] as well. However, given the uniqueness of the derivative, the different choices lead to identical results. In the following we use the definitions given in Equations (0.52) and (0.53).
The expansion coefficients are generally complex numbers. These coefficients change according to the density used to construct the equidensity basis. That density, on the other hand, is uniquely chosen as the density to which a given orbital may be contributing through (0.49), with no connection existing between the coefficients of one expansion (at some density) and those of another. (It may be tempting to think of the equidensity basis as a functional of the density, concluding that the coefficients for a given orbital at one density may be connected through functional differentiation to the coefficients of the same orbital at another density. Recall, however, that the very concept of conventional functional differentiation is disallowed over the domain of densities, and the only differentiation available is parametric differentiation confined to the space of a given density. Furthermore, any general function of coordinates, one that is not necessarily a solution of a Schrödinger equation and hence not a functional of a density, can be expanded in terms of the equidensity basis for any density, each expansion defined in terms of coefficients that have no functional connection to those at another density. It follows that the question of the derivatives of the coefficients is mathematically moot.) Finally, the parametric derivative of an expectation value with respect to a density formed by a wave function that leads to that density can be determined through the parametric derivative of the expansion of the wave function in the equidensity basis for that density. No connection of that derivative to the corresponding derivative at a different density can be established.

Functional Derivatives of Ensembles
The functional derivative of the expectation value of Ô with respect to the density is given through the parametric derivative: applying the universal, density-independent change, n_i(r) → n_i(r) + λδ(r − r′), where λ is an infinitesimal independent of i, to each of the terms in (0.69), we obtain the derivative with respect to the densities defined by each of the individual wave functions. In the last expression, each differentiation is performed at the density defined by the individual state, Ψ_i, through expansions of the corresponding orbitals in terms of the equidensity basis at that particular density, and subsequent (parametric) differentiation of the expansion. In short, the functional derivative (rate of change) of an ensemble of states is the ensemble of the rates of change of each of the states in the ensemble.
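In symbols, the statement above may be rendered schematically as follows (the ensemble weights w_i and the per-state densities n_i are assumed notation, standing in for the quantities defined through (0.69)):

```latex
% Schematic: the ensemble expectation value and its rate of change.
% w_i denote the ensemble weights, n_i the density defined by \Psi_i.
\[
  \langle \hat{O} \rangle
  \;=\; \sum_i w_i \, \langle \Psi_i | \hat{O} | \Psi_i \rangle ,
\]
\[
  \frac{\delta \langle \hat{O} \rangle}{\delta n(\mathbf{r}')}
  \;=\; \sum_i w_i
    \left.
      \frac{\partial}{\partial \lambda}\,
      \langle \Psi_i | \hat{O} | \Psi_i \rangle
      \big[\, n_i(\mathbf{r}) + \lambda\, \delta(\mathbf{r} - \mathbf{r}')\, \big]
    \right|_{\lambda = 0} ,
\]
```

each term being differentiated parametrically at its own density n_i, never across densities.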

The v-Representability of Densities Defined on a Lattice
In a well-known paper [18], Kohn purports to show that a density defined on a lattice is v-representable, i.e., that it can always be obtained from the wave function of the ground state of an interacting system defined on a discrete set of points (a lattice) confined to a box, with vanishing conditions on the potential and the density on the sides of the box.
Kohn's proof relies on a theorem whose demonstration hinges on the existence of a small neighborhood around a given density throughout which derivatives with respect to the density can be obtained in a continuously differentiable manner.
The existence of such a neighborhood guarantees the existence of uniquely defined Fréchet derivatives. As already stated above, however, no such neighborhood exists for domains defined in terms of functions that satisfy the definition of a density.
The flaw in the proof is evident in the statement of the theorem. The function, m(r), cannot be both arbitrary and also required to integrate to zero. Clearly, unless it has this property, n′(r) is not a density; but if it does, the condition is not only non-trivial, it precludes the carrying out of functional differentiation altogether and negates the proof of the theorem. (This does not necessarily disprove the theorem, however, which may indeed be true but would require an alternative proof.)
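The constraint at issue can be made concrete numerically (a minimal sketch: the grid, the model density, and the two perturbations m(x) are illustrative choices, not quantities from the text):

```python
import numpy as np

# A density on a grid must stay non-negative and integrate to N.
x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]
n = np.exp(-x**2)
n /= np.sum(n) * dx              # normalize: integral of n = 1 (N = 1)

eps = 1e-3
m_arbitrary = np.ones_like(x)    # arbitrary variation: integral != 0
m_constrained = x * np.exp(-x**2)  # odd function: integral = 0

def total(density):
    """Discrete integral of the density over the grid."""
    return np.sum(density) * dx

# n + eps*m remains a density only if the variation integrates to zero:
print(total(n + eps * m_arbitrary))    # exceeds 1: leaves the domain
print(total(n + eps * m_constrained))  # stays 1: normalization preserved
```

The arbitrary variation takes n(x) out of the set of normalized densities, while the zero-integral variation, precisely because it is not arbitrary, keeps it inside; this is the dichotomy exploited in the text.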

Discussion
The central result of this paper is the use of the Dirac delta function, rather than an arbitrary test function defined over an extended domain in coordinate space as in the conventional formulation, in the determination of functional derivatives of quantities whose values depend on a function of coordinates. In this procedure, the formalism leads to the rate of change of a dependent variable with respect to a change at a single point of the independent variable (a function) in all cases, even when the function is constrained by integral normalization, a constraint that blocks conventional formulations of the functional derivative.
In the case of general functionals, whose domains are free of normalization conditions, conventional functional derivatives coincide with parametric derivatives and Equation (0.6) remains valid. In the presence of externally imposed conditions, such as fixed normalization, conventional functional differentiation becomes inapplicable, whereas rates of change can still be readily evaluated through parametric differentiation.
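The coincidence of the two derivatives in the unconstrained case can be checked directly. A minimal numerical sketch, assuming the simple model functional F[f] = ∫ f(x)² dx (an illustrative choice, not one from the text), with the Dirac delta represented on the grid as a spike of height 1/Δx at a single point:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
f = np.sin(np.pi * x)

def F(g):
    """Model functional F[g] = integral of g(x)^2 dx (unconstrained domain)."""
    return np.sum(g**2) * dx

# Parametric derivative: perturb f by lam * delta(x - x0), the delta
# represented on the grid as a spike of height 1/dx at grid index i0.
i0 = 300
lam = 1e-8
delta = np.zeros_like(x)
delta[i0] = 1.0 / dx

rate = (F(f + lam * delta) - F(f)) / lam

# Conventional functional derivative of F at x0: dF/df(x0) = 2 f(x0).
print(rate, 2 * f[i0])   # the two agree to O(lam/dx)
```

Since no normalization is imposed on f, the perturbed function f + λδ still lies in the domain of F, and the pointwise rate of change reproduces the conventional functional derivative, in line with the statement above.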
This feature is particularly useful when a functional is known to exhibit an extremum (e.g., a minimum). Then the rate of change at the minimum vanishes, while that at any other point is non-zero (signifying, for example, that the value of the functional there is higher than at the minimum).
The paper provides a rigorous, exact procedure for the functional differentiation of expectation values of operators in quantum mechanics with respect to wave functions, or with respect to the densities obtained from the wave functions entering the expectation value in question. Emphasis is placed on derivatives with respect to the density, especially since wave functions are generally not written explicitly in terms of the density. In that case, the dependence of the wave function on the coordinates can be mapped exactly onto that of the density by means of an expansion in an orthonormal and complete basis (the equidensity basis at the density under discussion) [12], allowing parametric differentiation to proceed unimpeded.
Finally, a comment on the absence of a Taylor-series-like expression connecting expectation values at one density to those at another, possibly nearby, density. There are both mathematical and fundamental physical reasons for this absence.
First, as mentioned in the text, a density defines a unique functional as the set of all antisymmetric wave functions that lead to the density. There is, however, no one-to-one correspondence between the elements of the sets defined by two different densities, and no functional of the form, Ψ = Ψ[n], where Ψ is a single wave function, can be established. In this case, the very concept of a Taylor-series connection between different wave functions corresponding to different densities becomes moot. Second, recall that a density may correspond to the ground state of a many-particle system and thus belong to the Hilbert space determined by the potential acting on, and the number of particles of, the system. This Hilbert space is separate and disjoint from that of any other system that is independent of (non-interacting with) the one in question. Consequently, there exists no connection between the spaces provided by the properties (quantum states or functional derivatives) of either system separately.

Figure 1. Schematic representation of functionals over distributions. The axis D_f contains the domain of a functional consisting of ordinary functions in three-dimensional space; the axis labeled F[f] encodes the values of the functional (in a schematic representation); and the third axis, ω, labels continuously the amount of a distribution (the Dirac delta function) associated with any element of D_f. The "plane" defined by D_f and ω is a generalization of the domain D_f that includes distributions.