Approach to a Proof of the Riemann Hypothesis by the Second Mean-Value Theorem of Calculus

By the second mean-value theorem of calculus (Bonnet's theorem) we prove that the class of functions ${\mit \Xi}(z)$ with an integral representation of the form $\int_{0}^{+\infty}du\,{\mit \Omega}(u)\,{\rm ch}(uz)$ with a real-valued function ${\mit \Omega}(u) \ge 0$ which is non-increasing and decreases at infinity more rapidly than any exponential function $\exp\left(-\lambda u\right),\,\lambda>0$, possesses zeros only on the imaginary axis. The Riemann zeta function $\zeta(s)$, as is known, can be related to an entire function $\xi(s)$ with the same non-trivial zeros as $\zeta(s)$. After a trivial argument displacement $s\leftrightarrow z=s-\frac{1}{2}$ we relate it to a function ${\mit \Xi}(z)$ with a representation of the form ${\mit \Xi}(z)=\int_{0}^{+\infty}du\,{\mit \Omega}(u)\,{\rm ch}(uz)$, where ${\mit \Omega}(u)$ is rapidly decreasing at infinity and satisfies all requirements necessary for the given proof of the position of its zeros on the imaginary axis $z={\rm i} y$ by the second mean-value theorem. Besides this theorem we apply the Cauchy-Riemann differential equations in an integrated operator form derived in Appendix B. All this means that we prove a theorem for the zeros of ${\mit \Xi}(z)$ on the imaginary axis $z={\rm i} y$ for a whole class of functions ${\mit \Omega}(u)$, which in this way includes a proof of the Riemann hypothesis. This class contains, in particular, the modified Bessel functions ${\rm I}_{\nu}(z)$, for which it is known that their zeros lie on the imaginary axis, which affirms our conclusions. A class of almost-periodic functions corresponding to piecewise constant non-increasing functions ${\mit \Omega}(u)$ also belongs to this case. At the end we briefly give an equivalent, more formal description of the obtained results using the Mellin transform of functions with its variable substituted by an operator.


Introduction
The Riemann zeta function ζ(s), which basically was known already to Euler, establishes the most important link between number theory and analysis. The proof of the Riemann hypothesis is a longstanding problem since it was formulated by Riemann [1] in 1859. The Riemann hypothesis is the conjecture that all nontrivial zeros of the Riemann zeta function ζ(s) for complex s = σ + it are positioned on the line s = 1/2 + it, that means on the line parallel to the imaginary axis through the real value σ = 1/2 in the complex plane, and in extension that all zeros are simple zeros [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17] (with extensive lists of references in some of the cited sources, e.g., [4,5,9,12,14]). The book of Edwards [5] is one of the best older sources concerning most problems connected with the Riemann zeta function. There are also mathematical tables and chapters in works about Special Functions which contain information about the Riemann zeta function and about number analysis, e.g., Whittaker and Watson [2] (chap. 13), Bateman and Erdélyi [18] (chap. 1) about zeta functions, [19] (chap. 17) about number analysis, and Apostol [20,21] (chaps. 25 and 27). The book of Borwein, Choi, Rooney and Weirathmueller [12] gives on the first 90 pages a short account of achievements concerning the Riemann hypothesis and its consequences for number theory, and on the following about 400 pages it reprints important original papers and expert witnesses in the field. Riemann put aside the search for a proof of his hypothesis 'after some fleeting vain attempts' and emphasized that 'it is not necessary for the immediate objective of his investigation' [1] (see [5]). The Riemann hypothesis was taken by Hilbert as the 8th problem in his presentation of 23 fundamental unsolved problems of pure mathematics and axiomatic physics in a lecture held on 8 August 1900 at the Second International Congress of Mathematicians in Paris [22,23].
The vast experience with the Riemann zeta function in the past and the progress in numerical calculations of the zeros (see, e.g., [5,10,11,16,17,24,25]), which all confirmed the Riemann hypothesis, suggest that it is true, in accordance with the opinion of most (though not all) of the specialists in this field (arguments for doubt are discussed in [26]).
The Riemann hypothesis is very important for prime number theory, and a number of consequences are derived under the unproven assumption that it is true. As already said, a main role is played by the function ζ(s), which was known already to Euler for real variables s in its product representation (Euler product) and in its series representation, was continued to the whole complex s-plane by Riemann, and is now called the Riemann zeta function. The Riemann hypothesis, as said, is the conjecture that all nontrivial zeros of the zeta function ζ(s) lie on the axis parallel to the imaginary axis and intersecting the real axis at s = 1/2. If the hypothesis is true, the representation of the Riemann zeta function after exclusion of its only singularity at s = 1 and of the trivial zeros at s = −2n, (n = 1, 2, . . .) on the negative real axis is possible by a Weierstrass product with factors which vanish only on the critical line σ = 1/2. The function which is best suited for this purpose is the so-called xi function ξ(s), which is closely related to the zeta function ζ(s) and which was also introduced by Riemann [1]. It contains all information about the nontrivial zeros; only the exact positions of the zeros on this line are not yet given by a closed formula, which likely can hardly be found explicitly, but an approximation for their density was conjectured already by Riemann [1] and proved by von Mangoldt [27]. The "(pseudo)-random" character of this distribution of zeros on the critical line is somehow reminiscent of the "(pseudo)-random" character of the distribution of primes, where one of the differences is that the distribution of primes within the natural numbers becomes less dense with increasing integers, whereas the distribution of zeros of the zeta function on the critical line becomes more dense with higher absolute values, with slow increase approaching a logarithmic function at infinity.
There are new ideas for analogies to and applications of the Riemann zeta function in other regions of mathematics and physics. One direction is the theory of random matrices [16,24], which shows analogies in their eigenvalues to the distribution of the nontrivial zeros of the Riemann zeta function. Another interesting idea, founded by Voronin [28] (see also [16,29]), is the universality of this function in the sense that each holomorphic function without zeros and poles in a certain circle with radius less than 1/2 can be approximated with arbitrary required accuracy in small domains of the zeta function to the right of the critical line within 1/2 < σ < 1. An interesting idea is elaborated in articles of Neuberger, Feiler, Maier and Schleich [30,31]. They consider a simple first-order ordinary differential equation with a real variable t (say the time) for given arbitrary analytic functions f(z), where the time evolution for every point z finally transforms the function into one of the zeros f(z) = 0 of this function in the complex z-plane, and they illustrate this process graphically by flow curves which they call the Newton flow and which show, in addition to the zeros, the separatrices of the regions of attraction to the zeros. Among many other functions they apply this to the Riemann zeta function ζ(z) in different domains of the complex plane. Whether, however, this may also lead to a proof of the Riemann hypothesis is more than questionable.
Number analysis defines some functions of a continuous variable, for example, the number of primes π(x) less than a given real number x, which is connected with the discrete prime number distribution (e.g., [3,4,5,7,9,11]), and establishes the connection to the Riemann zeta function ζ(s). Apart from the product representation of the Riemann zeta function, the representation by a type of series which is now called a Dirichlet series was already known to Euler. Connected with these Dirichlet series in number theory are some discrete functions over the positive integers n = 1, 2, . . . which play a role as coefficients in these series and are called arithmetic functions (see, e.g., Chandrasekharan [4] and Apostol [13]). Such functions are the Möbius function µ(n) and the Mangoldt function Λ(n) as the best known ones. A short representation of the connection of the Riemann zeta function to number analysis and of some of the functions defined there has now become standard in many monographs about complex analysis (e.g., [15]).
Our means for the proof of the Riemann hypothesis in the present article are more conventional and "old-fashioned" ones, i.e. Real Analysis and the Theory of Complex Functions, which have been developed for a long time. The most promising way for a proof of the Riemann hypothesis, as it seemed to us in the past, is via the already mentioned entire function ξ(s), which is closely related to the Riemann zeta function ζ(s). It contains all important elements and information of the latter but excludes its trivial zeros and its only singularity and, moreover, possesses remarkable symmetries which facilitate the work with it compared with the Riemann zeta function. This function ξ(s) was already introduced by Riemann [1] and dealt with, for example, in the classical books of Titchmarsh [3], Edwards [5] and in almost all of the sources cited at the beginning. The present article is mainly concerned with this xi function ξ(s) and its investigation, in which, for convenience, we displace the imaginary axis by 1/2 to the right, that means to the critical line, and call this Xi function Ξ(z) with z = x + iy. We derive some representations for it, among them novel ones, and discuss its properties, including its derivatives, its specialization to the critical line and some other features. We make an approach to this function via the second mean-value theorem of analysis (Bonnet's theorem; see, e.g., [37,38]) and then we apply an operator identity for analytic functions, which is derived in Appendix B and which is equivalent to a somehow integrated form of the Cauchy-Riemann equations. This, among other not so successful trials (e.g., via moments of the function Ω(u)), led us finally to a proof of the Riemann hypothesis embedded into a proof for a more general class of functions.
Our approach to a proof of the Riemann hypothesis in this article proceeds in rough steps as follows: First we shortly represent the transition from the Riemann zeta function ζ(s) of the complex variable s = σ + it to the xi function ξ(s), introduced already by Riemann, and derive for it by means of the Poisson summation formula a representation which is convergent in the whole complex plane (Section 2, with the main formal part in Appendix A). Then we displace the imaginary axis of the variable s to the critical line at s = 1/2 + it by s → z = s − 1/2, purely for convenience of further working with the formulae. However, this also has the desired subsidiary effect that it brings us into the fairway of complex analysis, usually represented with the complex variable z = x + iy. The transformed ξ(s) function is called the Ξ(z) function.
The function Ξ(z) is represented as an integral transform of a real-valued function Ω(u) of the real variable u of the form Ξ(z) = ∫_0^{+∞} du Ω(u) ch(uz), which is related to a Fourier transform (more exactly, to a cosine Fourier transform). If the Riemann hypothesis is true then we have to prove that all zeros of the function Ξ(z) occur for x = 0.
To the Xi function in the mentioned integral transform we apply the second mean-value theorem of real analysis, first on the imaginary axis, and then discuss its extension from the imaginary axis to the whole complex plane. For this purpose we derive in Appendix B, in operator form, relations for the complex mean-value parameter in our application of the second mean-value theorem, which are equivalents in integral form to the Cauchy-Riemann equations in differential form, and apply this in specific form to the Xi function (Sections 3 and 4).
Then in Section 5 we accomplish the proof with the discussion and solution of the two most important equations, (5.10) and (5.11), for the last and decisive stage of the proof. These two equations are derived beforehand in preparation of this last stage. From them it is seen that the obtained two real equations admit zeros of the Xi function only on the imaginary axis. This proves the Riemann hypothesis via the equivalence of the Riemann zeta function ζ(s) to the Xi function Ξ(z) and embeds it into a whole class of functions with similar properties and positions of their zeros.
Sections 6-7 serve for illustrations and graphical representations of the specific parameters (e.g., mean-value parameters) for the Xi function to the Riemann hypothesis and for other functions which in our proof by the second mean-value theorem are included with respect to the existence of zeros only on the imaginary axis. This is in particular the whole class of modified Bessel functions I_ν(z), −1/2 < ν < +∞, with real indices ν, which possess zeros only on the imaginary axis and for which a proof by means of the differential equations exists. In Section 2 we make the transition from the Riemann zeta function to the xi function and establish some of the basic representations of these functions, in particular, a kind of modified cosine Fourier transformation of a function Ω(u) to the function Ξ(z).
From the Riemann zeta function ζ(s) to the xi function ξ(s)

As already expressed in the Introduction, the most promising way for a proof of the Riemann hypothesis, as it seems to us, is the way via a certain integral representation of the related xi function ξ(s). We sketch here the transition from the Riemann zeta function ζ(s) to the related xi function ξ(s) in a short way because, in principle, it is known, and we delegate some aspects of the derivations to Appendix A. Usually, the starting point for the introduction of the Riemann zeta function ζ(s) is the following relation between the Euler product and an infinite series, continued to the whole complex s-plane,

ζ(s) = ∏_{n=1}^{∞} (1 − p_n^{−s})^{−1} = Σ_{n=1}^{∞} 1/n^s, (σ > 1),  (2.1)

where p_n denotes the ordered sequence of primes (p_1 = 2, p_2 = 3, p_3 = 5, . . .). The transition from the product formula to the sum representation in (2.1), by expansion of the factors in powers of 1/p_n^s using the uniqueness of the prime-number decomposition, is well known and due to Euler in 1737. It leads to a special case of a kind of series later introduced and investigated in more general form and called Dirichlet series. The Riemann zeta function ζ(s) can be analytically continued into the whole complex plane to a meromorphic function; this was made and used by Riemann. The sum in (2.1) converges uniformly for the complex variable s = σ + it in every closed semi-plane σ ≥ 1 + ε with arbitrary ε > 0 and arbitrary t. The only singularity of the function ζ(s) is a simple pole at s = 1 with residue 1, which we discuss below.
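The equality in (2.1) of the Euler product and the Dirichlet series for σ > 1 can be checked numerically; the following minimal sketch (the truncation limit N = 100000 and the sample point s = 2 are ad-hoc choices, with ζ(2) = π²/6 as cross-check) compares both representations:

```python
from math import pi

def primes_up_to(n):
    # simple sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [p for p, is_p in enumerate(sieve) if is_p]

N = 100000
s = 2.0

# Dirichlet series: zeta(s) = sum_{n>=1} n^{-s}
zeta_sum = sum(n ** -s for n in range(1, N + 1))

# Euler product: zeta(s) = prod_p (1 - p^{-s})^{-1}
zeta_prod = 1.0
for p in primes_up_to(N):
    zeta_prod /= 1.0 - p ** -s

# both truncations approximate zeta(2) = pi^2/6
print(zeta_sum, zeta_prod, pi ** 2 / 6)
```

The agreement of both truncations illustrates the uniqueness of the prime-number decomposition behind the product-to-sum transition.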
The product form (2.1) of the zeta function ζ(s) shows that it involves all prime numbers p_n exactly once and therefore contains information about them in coded form. It proves to be possible to regain information about the prime number distribution from this function. For many purposes it is easier to work with meromorphic and, moreover, entire functions than with infinite sequences of numbers, but in the first case one has to know the properties of these functions, which are determined by their zeros and their singularities together with their multiplicities.
From the well-known integral representation of the Gamma function

Γ(z) = ∫_0^{+∞} dt e^{−t} t^{z−1}, (Re(z) > 0),  (2.2)

follows by the substitutions t = n^µ x, µz = s, with an appropriately fixed parameter µ > 0, for arbitrary natural numbers n

(1/n^s) Γ(s/µ) = ∫_0^{+∞} dx x^{s/µ−1} exp(−n^µ x).  (2.3)

Inserting this into the sum representation (2.1) and changing the order of summation and integration, we obtain for the choice µ = 1 of the parameter, using the sum evaluation of the geometric series,

Γ(s) ζ(s) = ∫_0^{+∞} dx x^{s−1} Σ_{n=1}^{∞} e^{−nx} = ∫_0^{+∞} dx x^{s−1}/(e^x − 1),  (2.4)

and for the choice µ = 2, with the substitution x = πq² of the integration variable (see [1] and, e.g., [3,4,5,7,9]),

π^{−s/2} Γ(s/2) ζ(s) = 2 ∫_0^{+∞} dq q^{s−1} Σ_{n=1}^{∞} exp(−πn²q²).  (2.5)

Other choices of µ seem to be of lesser importance. Both representations (2.4) and (2.5) are closely related to a Mellin transform f̃(s) of a function f(t), which together with its inversion is generally defined by (e.g., [15,32,33,34,35])

f̃(s) = ∫_0^{+∞} dt t^{s−1} f(t),  f(t) = (1/2πi) ∫_{c−i∞}^{c+i∞} ds t^{−s} f̃(s),  (2.6)

where c is an arbitrary real value within the convergence strip of f̃(s) in the complex s-plane. The Mellin transform f̃(s) of a function f(t) is closely related to the Fourier transform φ̃(y) of the function φ(x) ≡ f(e^x) by the variable substitutions t = e^x and y = is. Thus the Riemann zeta function ζ(s) can be represented, substantially (i.e., up to factors depending on s), as the Mellin transform of the functions f(t) = 1/(e^t − 1) or f(t) = Σ_{n=1}^{∞} exp(−πn²t²), respectively. The kernels of the Mellin transform are the eigenfunctions of the differential operator t ∂/∂t to eigenvalue s − 1 or, correspondingly, of the integral operator exp(α t ∂/∂t) of the multiplication of the argument of a function by a factor e^α (scaling of the argument). Both representations (2.4) and (2.5) can be used for the derivation of further representations of the Riemann zeta function and for its analytic continuation. The analytic continuation of the Riemann zeta function can also be obtained using the Euler-Maclaurin summation formula for the series in (2.1) (e.g., [5,11,15]).
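The µ = 1 representation Γ(s)ζ(s) = ∫_0^{+∞} dt t^{s−1}/(e^t − 1) can be made plausible numerically; a minimal sketch for s = 2, where Γ(2)ζ(2) = π²/6 (step size and cutoff below are ad-hoc choices):

```python
from math import pi, exp

s = 2.0
h = 1e-3    # midpoint-rule step
T = 40.0    # cutoff; the integrand decays like t e^{-t}

# integral_0^infty t^{s-1}/(e^t - 1) dt, which equals Gamma(s) * zeta(s)
integral = sum(((k + 0.5) * h) ** (s - 1.0) / (exp((k + 0.5) * h) - 1.0)
               for k in range(int(T / h))) * h

# for s = 2: Gamma(2) * zeta(2) = 1 * pi^2/6
print(integral, pi ** 2 / 6)
```

For s = 2 the integrand t/(e^t − 1) is bounded at t → 0, so no special treatment of the lower endpoint is needed.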
Using the Poisson summation formula, one can transform the representation (2.5) of the Riemann zeta function to the following form

π^{−s/2} Γ(s/2) ζ(s) = 1/(s(s−1)) + 2 ∫_1^{+∞} dq (q^{s−1} + q^{−s}) Σ_{n=1}^{∞} exp(−πn²q²).  (2.7)

This is known [1,3,4,5,7,9], but for convenience and due to the importance of this representation for our purpose we give a derivation in Appendix A. From (2.7), which is now already true for arbitrary complex s and, therefore, is an analytic continuation of the representations (2.1) or (2.5), we see that the Riemann zeta function satisfies a functional equation for the transformation of the argument s → 1 − s, since the right-hand side of (2.7) is invariant under this transformation. In simplest form it appears by 'renormalizing' this function via introduction of the xi function ξ(s), defined by Riemann according to [1] and to [5,20] by

ξ(s) ≡ (s(s−1)/2) π^{−s/2} Γ(s/2) ζ(s),  (2.8)

and we obtain for it the following representation converging in the whole complex plane of s (e.g., [1,4,5,7,9])

ξ(s) = 1/2 + s(s−1) ∫_1^{+∞} dq (q^{s−1} + q^{−s}) Σ_{n=1}^{∞} exp(−πn²q²),  (2.9)

with the 'normalization'

ξ(0) = ξ(1) = 1/2.  (2.10)

For s = 1/2 the xi function and the zeta function possess the (likely transcendental) values

ξ(1/2) = −(1/8) π^{−1/4} Γ(1/4) ζ(1/2) ≈ 0.497121,  ζ(1/2) ≈ −1.460354.  (2.11)

Contrary to the Riemann zeta function ζ(s), the function ξ(s) is an entire function. The only singularity of ζ(s), which is the simple pole at s = 1, is removed by the multiplication of ζ(s) with s − 1 in the definition (2.8). (Riemann [1] defines it more specially for the argument s = 1/2 + it and writes it ξ(t) with real t, corresponding to our ξ(1/2 + it). Our definition agrees, e.g., with Eq. (1) in Section 1.8 on p. 16 of Edwards [5] and with [20] and many others.) The function ξ(s) satisfies the functional equation

ξ(s) = ξ(1 − s),  (2.12)

from which follows for the n-th derivatives

ξ^(n)(s) = (−1)^n ξ^(n)(1 − s),  (2.13)

and which expresses that ξ(s) is a symmetric function with respect to s = 1/2, as is immediately seen from (2.9) and as was first derived by Riemann [1]. It can be easily converted into the following functional equation for the Riemann zeta function ζ(s)

ζ(1 − s) = 2 (2π)^{−s} cos(πs/2) Γ(s) ζ(s).  (2.14)

Together with ξ(s) = (ξ(s*))* we find by combination with (2.12)

ξ(s) = ξ(1 − s) = (ξ(s*))* = (ξ(1 − s*))*,  (2.15)

which combines in a simple way function values at the 4 points (s, 1 − s, 1 − s*, s*) of the complex plane.
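The functional equation ζ(1 − s) = 2(2π)^{−s} cos(πs/2) Γ(s) ζ(s) can be illustrated at a point where both sides are known in closed form; a small sketch for s = 4, where ζ(4) = π⁴/90 and ζ(−3) = 1/120 (the series truncation is an ad-hoc choice):

```python
from math import pi, cos

# zeta(4) by its Dirichlet series (converges fast)
zeta4 = sum(n ** -4.0 for n in range(1, 10000))

# right-hand side of zeta(1-s) = 2 (2 pi)^{-s} cos(pi s/2) Gamma(s) zeta(s) at s = 4
gamma4 = 6.0  # Gamma(4) = 3! = 6
rhs = 2.0 * (2.0 * pi) ** -4.0 * cos(2.0 * pi) * gamma4 * zeta4

# the analytic continuation gives zeta(-3) = 1/120
print(rhs, 1.0 / 120.0)
```

This kind of spot check at integer arguments is of course no substitute for the Poisson-summation derivation, but it confirms the stated normalization of (2.14).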
Relation (2.15) means that, in contrast to the function ζ(s), which is real-valued only on the real axis, the function ξ(s) is real-valued on the real axis (s = s*) and on the critical line (s = 1 − s*). As a consequence of the absence of zeros of the Riemann zeta function ζ(σ + it) for σ ≡ Re(s) > 1, together with the functional relation (2.14), it follows that all nontrivial zeros of this function have to lie within the strip 0 ≤ σ ≤ 1, and the Riemann hypothesis asserts that all zeros of the related xi function ξ(s) are positioned on the so-called critical line s = 1/2 + it, (−∞ < t < +∞). This is, in principle, well known. We use the functional equation (2.12) for a simplification of the notations in the following considerations and displace the imaginary axis of the complex variable s = σ + it from σ = 0 to the value σ = 1/2 by introducing the entire function Ξ(z) of the complex variable z = x + iy as follows

Ξ(z) ≡ ξ(1/2 + z), (z = s − 1/2 = x + iy).  (2.16)

The functional equation (2.12) then takes on the form

Ξ(−z) = Ξ(z),  (2.19)

and taken together with the symmetry for the transition to the complex conjugate variable

Ξ(z*) = (Ξ(z))*,  (2.20)

this means that the Xi function Ξ(z) becomes real-valued on the imaginary axis z = iy, which becomes the critical line in the new variable z,

Ξ(iy) = Ξ(−iy) = (Ξ(iy))* = (Ξ(−iy))*.  (2.21)

Furthermore, the function Ξ(z) is a symmetric function and a real-valued one on the real axis z = x. In contrast to this, the Riemann zeta function ζ(s) is not a real-valued function on the critical line s = 1/2 + it and is real-valued but not symmetric on the real axis. This is represented in Fig. 2.1 (calculated with "Mathematica 6", as are the further figures). We see that not all of the zeros of the real part Re ζ(1/2 + it) are also zeros of the imaginary part Im ζ(1/2 + it) and, vice versa, that not all of the zeros of the imaginary part are also zeros of the real part and thus genuine zeros of the function ζ(1/2 + it), which are signified by grid lines. Between two zeros of the real part which are genuine zeros of ζ(1/2 + it) lies in each case (with exception of the first interval) an additional zero of the imaginary part, which almost coincides with a maximum of the real part.
Using (2.9) and definition (2.16) we find the following representation of Ξ(z)

Ξ(z) = 1/2 + 2(z² − 1/4) ∫_1^{+∞} dq q^{−1/2} ch(z ln(q)) Σ_{n=1}^{∞} exp(−πn²q²).  (2.23)

With the substitution of the integration variable q = e^u (see also (A.10) in Appendix A) representation (2.23) is transformed to

Ξ(z) = 1/2 + 2(z² − 1/4) ∫_0^{+∞} du e^{u/2} ch(uz) Σ_{n=1}^{∞} exp(−πn²e^{2u}).  (2.24)

In Appendix A we show that (2.24) can be represented as follows (see also Eq. (2) on p. 17 in [5], which possesses a similar principal form)

Ξ(z) = ∫_0^{+∞} du Ω(u) ch(uz),  (2.25)

with the following explicit form of the function Ω(u) of the real variable u

Ω(u) = 2 Σ_{n=1}^{∞} πn² e^{5u/2} (2πn² e^{2u} − 3) exp(−πn²e^{2u}) > 0, (−∞ < u < +∞).  (2.26)

The function Ω(u) is symmetric, that means it is an even function,

Ω(−u) = Ω(u),  (2.27)

although this is not immediately seen from representation (2.26). We prove this in Appendix B. Due to this symmetry, formula (2.25) can also be represented by

Ξ(z) = (1/2) ∫_{−∞}^{+∞} du Ω(u) ch(uz) = (1/2) ∫_{−∞}^{+∞} du Ω(u) e^{uz}.  (2.28)

In the formulation of the right-hand side the function Ξ(z) appears as the analytic continuation of the Fourier transform of the function Ω(u), written with imaginary argument z = iy or, more generally, with substitution z → iz and complex z. From this follows as inversion of the integral transformation (2.28), using (2.27),

Ω(u) = (1/π) ∫_{−∞}^{+∞} dy Ξ(iy) e^{−iyu},  (2.29)

or due to symmetry of the integrand, in analogy to (2.25),

Ω(u) = (2/π) ∫_0^{+∞} dy Ξ(iy) cos(uy),  (2.30)

where Ξ(iy) is a real-valued function of the variable y on the imaginary axis due to (2.25). A graphical representation of the function Ω(u) and of its first derivatives Ω^(n)(u), (n = 1, 2, 3) is given in Fig. 2.2. The function Ω(u) is monotonically decreasing for 0 ≤ u < +∞ due to the non-positivity of its first derivative Ω^(1)(u) ≡ ∂Ω(u)/∂u, which explicitly is (see also Appendix A)

Ω^(1)(u) = −Σ_{n=1}^{∞} πn² e^{5u/2} (8π²n⁴e^{4u} − 30πn²e^{2u} + 15) exp(−πn²e^{2u}).  (2.32)

The function Ω(u) is positive for 0 ≤ u < +∞ and since its first derivative Ω^(1)(u) is negative for 0 < u < +∞ the function Ω(u) is monotonically decreasing on the positive real axis. It vanishes at infinity more rapidly than any exponential function with a polynomial in the exponent.
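The stated properties of Ω(u) (positivity, monotonic decrease for u ≥ 0, and the not obvious evenness) are easy to observe numerically. A sketch using the standard explicit sum for Ω(u) (equivalent to (2.26); cf. Edwards); the truncation at n = 50 and the sample points are ad-hoc choices:

```python
from math import pi, exp

def Omega(u, nmax=50):
    # Omega(u) = sum_{n>=1} (4 pi^2 n^4 e^{9u/2} - 6 pi n^2 e^{5u/2}) exp(-pi n^2 e^{2u})
    total = 0.0
    for n in range(1, nmax + 1):
        a = pi * n * n * exp(2.0 * u)   # a = pi n^2 e^{2u}
        total += (4.0 * a * a - 6.0 * a) * exp(0.5 * u) * exp(-a)
    return total

# positivity and monotonic decrease on the positive axis ...
print(Omega(0.0), Omega(0.5), Omega(1.0))
# ... and the evenness Omega(-u) = Omega(u), a consequence of the
# Jacobi theta transformation, invisible in the sum itself
print(Omega(0.4), Omega(-0.4))
```

The sum converges extremely rapidly because of the factors exp(−πn²e^{2u}), so a small n-cutoff already gives machine precision for moderate u.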
The first derivative Ω^(1)(u) possesses one relative minimum at u_min = 0.237266 of depth Ω^(1)(u_min) = −4.92176. Moreover, it is very important for the following that, due to the presence of the factors exp(−πn²e^{2u}) in the sum terms in (2.26) or in (2.32), the functions Ω(u) and Ω^(1)(u) and all their higher derivatives decrease very rapidly for u → +∞, more rapidly than any exponential function with a polynomial of u in the argument. In this sense the function Ω(u) is more comparable with functions of finite support, which vanish from a certain u ≥ u_0 on, than with any exponentially decreasing function. From (2.27) it follows immediately that the function Ω^(1)(u) is antisymmetric, that means it is an odd function. It is known that smoothness and rapidity of decrease at infinity of a function interchange their roles under Fourier transformation. As the Fourier transform of the smooth (infinitely continuously differentiable) function Ω(u), the Xi function on the critical line Ξ(iy) is rapidly decreasing at infinity. Therefore it is not easy to represent the real-valued function Ξ(iy), with its rapid oscillations under the envelope of rapid decrease for increasing variable y, graphically in a large region of this variable y; an appropriate real amplification envelope α(y) can be read off from (2.18). By partial integration of (2.25), using Ω(+∞) = 0 and sh(0) = 0, the Xi function can also be expressed by the first derivative of Ω(u) according to

Ξ(z) = −(1/z) ∫_0^{+∞} du Ω^(1)(u) sh(uz),

which due to the antisymmetry of Ω^(1)(u) and of sh(uz) with respect to u → −u can also be written

Ξ(z) = −(1/2z) ∫_{−∞}^{+∞} du Ω^(1)(u) sh(uz).

The envelope over the oscillations of the real-valued function Ξ(iy) decreases extremely rapidly with increase of the variable y in the shown intervals. This behavior makes it difficult to represent this function graphically for large intervals of the variable y. By an enhancement factor which raises the amplitude to the level of the zeta function ζ(s) we may see the oscillations under the envelope (last partial picture). A similar picture is obtained for the modulus of the Riemann zeta function |ζ(1/2 + iy)|, only with our negative parts folded to the positive side of the abscissa (see also Fig. 2.1 (last partial picture)). The given values for the zeros at 1/2 ± iy_n were first calculated by J.-P. Gram in 1903 up to y_15 [5]. We emphasize here that the shown very rapid decrease of the Xi function at the beginning of y and for y → ±∞ is due to the 'very high' smoothness of Ω(u) for arbitrary u. Figure 2.2 gives a graphical representation of the function Ω(u) and of its first derivative Ω^(1)(u) ≡ ∂Ω(u)/∂u, which due to the rapid convergence of the sums is easy to generate by computer. One can express Ξ(z) also by higher derivatives Ω^(n)(u) ≡ ∂^n Ω(u)/∂u^n of the Omega function Ω(u) according to

Ξ(z) = ((−1)^n/z^n) ∫_0^{+∞} du Ω^(n)(u) ch(uz), (n = 0, 2, 4, . . .),
Ξ(z) = ((−1)^n/z^n) ∫_0^{+∞} du Ω^(n)(u) sh(uz), (n = 1, 3, 5, . . .),

with the symmetries of the derivatives of the function Ω(u) for u ↔ −u

Ω^(n)(−u) = (−1)^n Ω^(n)(u).

This can be seen by successive partial integrations in (2.25) together with complete induction. The functions Ω^(n)(u) in these integral transformations are for n ≥ 1 not monotonic functions.
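As a numerical illustration that the cosine transform (2.25) of Ω(u) indeed reproduces the Riemann zeros, one can locate the sign change of Ξ(iy) near the first nontrivial zero y_1 ≈ 14.1347. A minimal sketch (the quadrature step, the cutoffs, and the bracketing points are ad-hoc choices; the Ω-sum is the standard explicit form, cf. Edwards):

```python
from math import pi, exp, cos

def Omega(u, nmax=30):
    # Omega(u) = sum_n (4 pi^2 n^4 e^{9u/2} - 6 pi n^2 e^{5u/2}) exp(-pi n^2 e^{2u})
    total = 0.0
    for n in range(1, nmax + 1):
        a = pi * n * n * exp(2.0 * u)
        total += (4.0 * a * a - 6.0 * a) * exp(0.5 * u) * exp(-a)
    return total

def Xi_on_axis(y, h=2e-4, umax=2.0):
    # Xi(iy) = integral_0^infty Omega(u) cos(uy) du, trapezoidal rule;
    # Omega decays like exp(-pi e^{2u}), so umax = 2 is already far out
    steps = int(umax / h)
    total = 0.5 * Omega(0.0)
    for k in range(1, steps + 1):
        total += Omega(k * h) * cos(k * h * y)
    return total * h

# the sign change brackets the first nontrivial Riemann zero y_1 ~ 14.1347
xa, xb = Xi_on_axis(14.0), Xi_on_axis(14.3)
print(xa, xb)
```

Since Ξ(0) = ξ(1/2) ≈ 0.497 > 0 and the first zero at y_1 ≈ 14.1347 is simple, Ξ(iy) must be positive at y = 14.0 and negative at y = 14.3, which the quadrature confirms.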
We mention yet another representation of the function Ξ(z). Using suitable transformations of the sum terms, the function Ξ(z) according to (2.28) with the explicit representation of the function Ω(u) in (2.26) can be represented in a closed form (2.39) by means of the incomplete Gamma function, defined by (e.g., [18,21,36])

Γ(α, x) ≡ ∫_x^{+∞} dt t^{α−1} e^{−t}.

However, we did not see a way to prove the Riemann hypothesis via the representation (2.39). The Riemann hypothesis for the zeta function ζ(s = σ + it) is now equivalent to the hypothesis that all zeros of the related entire function Ξ(z = x + iy) lie on the imaginary axis z = iy, that means on the line with real part x = 0 of z = x + iy, which becomes now the critical line. Since the zeta function ζ(s) does not possess zeros in the convergence region σ > 1 of the Euler product (2.1), and due to the symmetries (2.27) and (2.31), it is only necessary to prove that Ξ(z) does not possess zeros within the strips −1/2 ≤ x < 0 and 0 < x ≤ +1/2 on both sides of the imaginary axis z = iy, where by symmetry the proof for one of these strips would already be sufficient. However, we will go another way, where the restriction to these strips does not play a role for the proof.

Application of second mean-value theorem of calculus to Xi function
After having accepted the basic integral representation (2.25) of the entire function Ξ(z) according to

Ξ(z) = ∫_0^{+∞} du Ω(u) ch(uz),  (3.1)

with the function Ω(u) explicitly given in (2.26), we concentrate on its further treatment. However, we do this not with this specialization for the real-valued function Ω(u) but with more general suppositions for it. Expressed by the real part U(x, y) and imaginary part V(x, y) of Ξ(z), we find from (3.1)

Ξ(x + iy) = U(x, y) + iV(x, y),
U(x, y) = ∫_0^{+∞} du Ω(u) ch(ux) cos(uy),  V(x, y) = ∫_0^{+∞} du Ω(u) sh(ux) sin(uy).  (3.2)

We suppose now, as a necessary requirement for Ω(u), satisfied in the special case (2.26),

Ω(u) ≥ 0, (0 ≤ u < +∞).  (3.3)

Furthermore, Ξ(z) should be an entire function, which requires that the integral (3.1) is finite for arbitrary complex z and therefore that Ω(u) is rapidly decreasing at infinity, more precisely

lim_{u→+∞} e^{λu} Ω(u) = 0, for arbitrary λ ≥ 0.  (3.5)

This means that the function Ω(u) should be a nonsingular function which is rapidly decreasing at infinity, more rapidly than any exponential function e^{−λu} with arbitrary λ > 0. Clearly, this is satisfied for the special function Ω(u) in (2.26).
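For a concrete member of the admitted class one can take, e.g., the Gaussian Ω(u) = exp(−u²) (our choice for illustration, not the special function (2.26)); it is non-increasing on [0, +∞), decreases more rapidly than any exponential, and the integral (3.1) then possesses the closed form Ξ(z) = (√π/2) exp(z²/4), which a quadrature of U(x, y) and V(x, y) reproduces:

```python
from math import sqrt, pi, exp, cos, sin, cosh, sinh
import cmath

def U(x, y, h=1e-3, umax=8.0):
    # U(x,y) = integral_0^infty exp(-u^2) ch(ux) cos(uy) du, trapezoidal rule;
    # the boundary term at u = 0 contributes 1/2 since the integrand is 1 there
    s = 0.5
    for k in range(1, int(umax / h)):
        u = k * h
        s += exp(-u * u) * cosh(u * x) * cos(u * y)
    return s * h

def V(x, y, h=1e-3, umax=8.0):
    # V(x,y) = integral_0^infty exp(-u^2) sh(ux) sin(uy) du
    s = 0.0
    for k in range(1, int(umax / h)):
        u = k * h
        s += exp(-u * u) * sinh(u * x) * sin(u * y)
    return s * h

z = 1.0 + 2.0j
closed = sqrt(pi) / 2.0 * cmath.exp(z * z / 4.0)  # Xi(z) for Omega(u) = exp(-u^2)
numeric = U(z.real, z.imag) + 1j * V(z.real, z.imag)
print(numeric, closed)
```

This particular Ξ(z) has no zeros at all (exp(z²/4) never vanishes), which is trivially consistent with the assertion that zeros of functions of this class lie only on the imaginary axis.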
Our conjecture for a longer time was that all zeros of Ξ(z) lie on the imaginary axis z = iy for a large class of functions Ω(u), and that this is not very specific for the special function Ω(u) given in (2.26) but is true for a much larger class. It seems that to this class belong all non-increasing functions Ω(u), i.e. such functions for which Ω^(1)(u) ≤ 0 holds for the first derivative, and which decrease rapidly at infinity. This means that they vanish at infinity more rapidly than any power function |u|^{−n}, (n = 1, 2, . . .) (practically they vanish exponentially). However, for the convergence of the integral (3.1) in the whole complex z-plane it is necessary that the functions decrease at infinity also more rapidly than any exponential function exp(−λu) with arbitrary λ > 0, as expressed in (3.5). In particular, to this class belong all rapidly decreasing functions Ω(u) which vanish from a certain u ≥ u_0 on and which may be called non-increasing finite functions (or functions with compact support). On the other side, continuity of the derivatives Ω^(n)(u), (n = 1, 2, . . .) is not required. The modified Bessel functions I_ν(z), 'normalized' to the form of entire functions (2/z)^ν I_ν(z), possess for ν ≥ 1/2 a representation of the form (3.1) with a function Ω(u) which vanishes from u ≥ 1 on, but a number of derivatives of Ω(u) at u = 1 is not continuous, depending on the index ν. It is notable that here an independent proof of the property that all zeros of the modified Bessel functions I_ν(z) lie on the imaginary axis can be made using their differential equations via duality relations. We intend to present this in detail in a later work.
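For the modified Bessel functions the function Ω(u) can be taken, by Poisson's standard integral representation, proportional to (1 − u²)^{ν−1/2} on 0 ≤ u ≤ 1 and zero beyond, which is non-increasing for ν ≥ 1/2; since I_ν(iy) ∝ i^ν J_ν(y), the zeros of the so-defined Ξ(z) must occur at z = ±i j_{ν,k} with the Bessel zeros j_{ν,k}. A quadrature sketch for ν = 1, where j_{1,1} ≈ 3.8317 (step size and bracketing points are ad-hoc choices):

```python
from math import cos

def Xi_bessel(y, nu=1.0, h=1e-4):
    # Xi(iy) = integral_0^1 (1 - u^2)^(nu - 1/2) cos(uy) du, midpoint rule;
    # Omega(u) = (1 - u^2)^(nu - 1/2) on [0,1], zero beyond,
    # is non-increasing exactly for nu >= 1/2
    steps = int(1.0 / h)
    total = 0.0
    for k in range(steps):
        u = (k + 0.5) * h
        total += (1.0 - u * u) ** (nu - 0.5) * cos(u * y)
    return total * h

# the sign change brackets the first zero j_{1,1} ~ 3.8317 of the Bessel function J_1
fa, fb = Xi_bessel(3.8), Xi_bessel(3.9)
print(fa, fb)
```

By Poisson's integral, Xi_bessel(y) is proportional to J_1(y)/y, so its sign changes exactly at the positive Bessel zeros, in agreement with the claim that this Ω(u) belongs to the considered class.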
Furthermore, to the considered class belong all monotonically decreasing functions with the described rapid decrease at infinity. The fine difference between the decreasing and the merely non-increasing functions Ω(u) is that in the first case the function Ω(u) cannot stay at the same level over any interval, that means we have Ω^(1)(u) < 0 for all points u > 0 instead of only Ω^(1)(u) ≤ 0. A function which decreases not faster than e^{−λu} at infinity does not fall into this category, as, for example, the function sech(z) ≡ 1/ch(z) shows. On the other side, some simply calculable discrete superpositions such as a_1 ch(z) + a_2 ch(2z) or a_1 ch(z) + a_3 ch(3z) as function Ξ(z) with positive amplitudes a_n do not provide a counterexample with zeros lying outside the imaginary axis, but show that if the amplitudes a_n do not possess a definite sign then zeros may occur outside the imaginary axis.
To apply the second mean-value theorem it is necessary to restrict ourselves to a class of functions Ω(u) → f(u) which are non-increasing, that means for which for all u_1 < u_2 in the considered interval holds

f(u_1) ≥ f(u_2), (u_1 < u_2),  (3.6)

or equivalently in more compact form

f^(1)(u) ≤ 0.  (3.7)

In case of f^(1)(u) = 0 for certain u, the next higher non-vanishing derivative should be negative. The monotonically decreasing functions in the interval a ≤ u ≤ b, in particular, belong to the class of non-increasing functions, with the fine difference that for them f^(1)(u) < 0 holds. For a non-increasing function f(u) and a continuous function g(u) in the interval a ≤ u ≤ b, the second mean-value theorem (often called the theorem of Bonnet) states an equivalence of the following integral on the left-hand side to the expression on the right-hand side according to

∫_a^b du f(u) g(u) = f(a + 0) ∫_a^{u_0} du g(u) + f(b − 0) ∫_{u_0}^b du g(u), (a ≤ u_0 ≤ b),  (3.9)

(see some monographs about calculus or real analysis; we recommend the monographs of Courant [37] (Appendix to chap. IV) and of Widder [38], who calls it the Weierstrass form of Bonnet's theorem (chap. 5)), where u_0 is a certain value within the interval boundaries a < b which, as a rule, we do not exactly know. It holds analogously for non-decreasing functions, which include the monotonically increasing functions as a special class. The proof of the second mean-value theorem is comparatively simple, by applying a substitution in the (first) mean-value theorem of integral calculus [37,38].
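A small numerical sketch of the theorem in its half-infinite form with f(+∞) = 0 (all concrete choices ad-hoc): take f(u) = exp(−u²), g(u) = cos(3u); the left-hand side then has the closed value (√π/2)e^{−9/4}, and a mean value u_0 with f(0)∫_0^{u_0} g(u) du equal to it indeed exists:

```python
from math import sqrt, pi, exp, cos, sin, asin

# left-hand side: integral_0^infty exp(-u^2) cos(3u) du  (midpoint rule)
h, umax = 1e-4, 8.0
lhs = sum(exp(-((k + 0.5) * h) ** 2) * cos(3.0 * (k + 0.5) * h)
          for k in range(int(umax / h))) * h

closed = sqrt(pi) / 2.0 * exp(-9.0 / 4.0)  # known Gaussian integral

# second mean-value theorem with f(0) = 1: lhs = sin(3*u0)/3 for some u0 > 0;
# solve for the mean-value parameter on the principal branch
u0 = asin(3.0 * lhs) / 3.0
print(lhs, closed, u0)
```

The multi-valuedness mentioned in the text appears here as the freedom to add periods of sin(3u)/3 when inverting for u_0; the principal branch is chosen, matching the remark about the Arcus Sine in computer calculations.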
Applied to our function f(u) = Ω(u), which in addition should rapidly decrease at infinity according to (3.5), this means in connection with monotonic decrease that it has to be positively semi-definite if Ω(0) > 0, and therefore
Ω(u_1) ≥ Ω(u_2) ≥ 0, (0 ≤ u_1 < u_2 < +∞), Ω(+∞) = 0, (3.10)
and the theorem (3.9) takes on the form
∫_0^(+∞) du Ω(u) g(u) = Ω(0) ∫_0^(u_0) du g(u), (3.11)
where the extension to the upper boundary b → +∞ in (3.9) for f(+∞) = 0 and in case of existence of the integral is unproblematic. If we insert in (3.11) for g(u) the function ch(uz), which apart from the real variable u depends in a parametrical way on the complex variable z and is an analytic function of z, we find that u_0 depends on this complex parameter also in an analytic way as follows
Ξ(z) = ∫_0^(+∞) du Ω(u) ch(uz) = Ω(0) ∫_0^(w_0(z)) du ch(uz) = Ω(0) sh(w_0(z)z)/z, (3.12)
where w_0(z) = u_0(x, y) + i v_0(x, y) is an entire function with u_0(x, y) its real and v_0(x, y) its imaginary part. The condition for zeros z ≠ 0 is that sh(w_0(z)z)/z vanishes, which leads to
w_0(z)z = i nπ, (n = 0, ±1, ±2, . . .), (3.13)
or split in real and imaginary part
u_0(x, y)x − v_0(x, y)y = 0, (3.14)
for the real part and
u_0(x, y)y + v_0(x, y)x = nπ, (n = 0, ±1, ±2, . . .), (3.15)
for the imaginary part. The multi-valuedness of the mean-value functions in the conditions (3.13) or (3.15) is an interesting phenomenon which is connected with the periodicity of the function g(u) = ch(uz) on the imaginary axis z = iy in our application (3.12) of the second mean-value theorem (3.11). To our knowledge this is up to now not well studied. We come back to this in the next Section 4 and, in particular, in Section 7, which brings some illustrative clarity when we represent the mean-value functions graphically. At present we will say only that we can choose an arbitrary n in (3.15) which provides us the whole spectrum of zeros z_1, z_2, . . . on the upper half-plane and the corresponding spectrum of zeros z_(−1) = −z_1, z_(−2) = −z_2, . . . on the lower half-plane of C, which as will be seen later all lie on the imaginary axis. Since in computer calculations the values of the Arcus Sine function are provided in the region from −π/2 to +π/2 it is convenient to choose n = 0, but all other values of n in (3.15) lead to equivalent results.
One may represent the conditions (3.14) and (3.15) also in the following equivalent form
u_0(x, y) = nπ y/(x² + y²), v_0(x, y) = nπ x/(x² + y²), (3.16)
from which follows
w_0(z) = u_0(x, y) + i v_0(x, y) = i nπ/z. (3.17)
All these forms (3.14)-(3.17) are implicit equations with two variables (x, y) which cannot be resolved with respect to one variable (e.g., in forms y = y_k(x) for each fixed n and branches k) and do not provide immediately the necessary conditions for zeros in explicit form, but we can check that (3.16) satisfies the Cauchy-Riemann equations
∂u_0/∂x = ∂v_0/∂y, ∂u_0/∂y = −∂v_0/∂x,
as a minimum requirement. We now have to establish closer relations between the real and imaginary parts u_0(x, y) and v_0(x, y) of the complex mean-value parameter w_0(z = x + iy). The first step in preparation to this aim is the consideration of the derived conditions on the imaginary axis.

Specialization of second mean-value theorem to Xi function on imaginary axis
By restriction to the real axis y = 0 we find from (3.3) for the function Ξ(z) the following two possible representations of U(x, 0), related by partial integration,
Ξ(x) = U(x, 0) = ∫_0^(+∞) du Ω(u) ch(ux) = −(1/x) ∫_0^(+∞) du Ω^(1)(u) sh(ux) > 0,
where the boundary terms vanish due to Ω(+∞) = 0 and sh(0) = 0. The inequality U(x, 0) > 0 follows according to the supposition Ω(u) ≥ 0, Ω(0) > 0 from the non-negativity of the integrand, that means from Ω(u) ch(ux) ≥ 0. Therefore, the case y = 0 can be excluded from the beginning in the further considerations for zeros of U(x, y) and V(x, y).
We now restrict us to the imaginary axis x = 0 and find
U(0, y) = ∫_0^(+∞) du Ω(u) cos(uy), V(0, y) = 0,
which as is easily seen does not depend on the sign of y. Therefore we have two non-negative parameters, the zeroth moment Ω_0 and the value Ω(0), which according to
|U(0, y)| ≤ ∫_0^(+∞) du Ω(u) = Ω_0, (4.6)
and
|U(0, y)| ≤ Ω(0)/|y|, (4.8)
restrict the range of values of U(0, y) to an interior range satisfying both (4.6) and (4.8) at once. For the mentioned purpose we now consider the restriction of the mean-value parameter w_0(z) to the imaginary axis z = iy, for which g(u) = ch(u(iy)) = cos(uy) is a real-valued function of y. For arbitrary fixed y we find by the second mean-value theorem a parameter u_0 in the interval 0 ≤ u < +∞ which naturally depends on the chosen value y, that means u_0 = u_0(0, y). The extension from the imaginary axis z = iy to the whole complex plane C can then be made using methods of complex analysis. We discuss some formal approaches to this in Appendix B. Now we apply (3.12) to the imaginary axis z = iy.
The second mean-value theorem (3.12) on the imaginary axis z = iy (or x = 0) takes on the form
Ξ(iy) = ∫_0^(+∞) du Ω(u) cos(uy) = Ω(0) sin(u_0(0, y)y)/y, (w_0(iy) = u_0(0, y), v_0(0, y) = 0). (4.9)
As already said, since the left-hand side is a real-valued function the right-hand side has also to be real-valued; the parameter function w_0(iy) is therefore real-valued and can only be the real part u_0(0, y) of the complex function w_0(z = x + iy) = u_0(x, y) + i v_0(x, y) for x = 0. The second mean-value theorem states that u_0(0, y) lies between the minimal and maximal values of the integration borders, that is here between 0 and +∞, and this means that u_0(0, y) should be positive. Here arises a problem which is connected with the periodicity of the function g(u) = cos(uy) as a function of the variable u for fixed variable y in the application of the mean-value theorem. Let us first consider the special case y = 0 in (4.9), which leads to
Ξ(0) = ∫_0^(+∞) du Ω(u) = Ω_0 = Ω(0) u_0(0, 0). (4.10)
From this relation follows u_0 ≡ u_0(0, 0) > 0, and it seems that all is correct also with the continuation to u_0(0, y) > 0 for arbitrary y. One may even give the approximate values Ω(0) ≈ 1.78679 and u_0 ≈ 0.27822 and therefore Ω_0 ≡ Ω(0)u_0 ≈ 0.49712 which, however, are not of importance for the later proofs. If we now start from u_0(0, 0) > 0 and continue it continuously to u_0(0, y) then we see that u_0(0, y) goes monotonically to zero and reaches zero approximately at y = y_1 ≈ 14.135, that is at the first zero of the function Ξ(iy) on the positive imaginary axis; it then goes first beyond zero and oscillates with decreasing amplitude for increasing y around the value zero, intersecting it exactly at the zeros of Ξ(iy). We try to illustrate this graphically in Section 7. All zeros lie then on the branch u_0(0, y)y = nπ with n = 0. That u_0(0, y) goes beyond zero seems to contradict the content of the second mean-value theorem, according to which u_0(0, y) has to be positive in our application.
Here the multi-valuedness of the mean-value function u_0(0, y) comes into play. For the zeros of sin(u_0(0, y)y) in (4.9) the relations u_0(0, y)y = nπ with different integers n are equivalent, and to values u_0(0, y) < 0 one may find equivalent curves u_0(n; 0, y) with u_0(n; 0, y) > 0, where all these curves with n ≠ 0 begin with u_0(n; 0, y) → ∞ for y → +0. However, we cannot continue u_0(0, 0) in a continuous way to only positive values of u_0(0, y).
For |y| → ∞ the inequality (4.8) is stronger than (4.6) and characterizes the restrictions of U(0, y); via the equivalence U(0, y)y = Ω(0) sin(u_0(0, y)y) follows from (4.8)
|sin(u_0(0, y)y)| ≤ 1,
where the choice of n determines a basis interval of the involved multi-valued function arcsin(z), and the inequality says that it is in every case possible to choose it from the same interval of length π. The zeros y_k of the Xi function Ξ(x + iy) on the imaginary axis x = 0 (critical line) are determined alone by the (multi-valued) function u_0(0, y), whereas v_0(0, y) vanishes automatically on the imaginary axis in the considered special case and does not add a second condition. Therefore, the zeros are the solutions of the conditions
u_0(0, y)y = nπ, (n = 0, ±1, ±2, . . .), (v_0(0, y) = 0). (4.12)
It is, in general, not possible to obtain the zeros y_k on the critical line exactly from the mean-value function u_0(0, y) in (4.9) since generally we do not possess it explicitly. In special cases the function u_0(0, y) can be calculated explicitly; that is the case, for example, for all (modified) Bessel functions (2/z)^ν I_ν(z). The most simple case among these is the case ν = 1/2, when the corresponding function Ω(u) is a step function
Ω(u) = Ω(0) θ(u_0 − u),
where θ(x) = {0, x < 0; 1, x > 0} is the Heaviside step function. In this case follows
Ξ(z) = Ω(0) ∫_0^(u_0) du ch(uz) = Ω(0) sh(u_0 z)/z, (4.14)
where Ω(0)u_0 = ∫_0^(+∞) du Ω(u) is the area under the function Ω(u) = Ω(0)θ(u_0 − u) (or the zeroth-order moment of this function). For the squared modulus of the function Ξ(z) we find
|Ξ(x + iy)|² = Ω(0)² (sh²(u_0 x) + sin²(u_0 y))/(x² + y²), (4.15)
from which, in particular, it is easy to see that this special function Ξ(x + iy) possesses zeros only on the imaginary axis z = iy or x = 0 and that they are determined by
u_0 y_n = nπ, ⇒ y_n = nπ/u_0, (n = ±1, ±2, . . .). (4.16)
The zeros on the imaginary axis are here equidistant, but the solution y_0 = 0 is absent since there also the denominator in (4.15) vanishes.
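For the step-function case everything is explicit, so (4.15) and (4.16) can be verified directly; the width u_0 = 0.75 and Ω(0) = 1 below are arbitrary illustrative choices:

```python
import cmath
import math

u0 = 0.75       # arbitrary positive width of the step Omega(u) = Omega(0)*theta(u0 - u)
Om0 = 1.0       # arbitrary value Omega(0)

def Xi(z):
    # Xi(z) = Omega(0) * sh(u0*z)/z, with the limit Omega(0)*u0 at z = 0
    if z == 0:
        return Om0 * u0
    return Om0 * cmath.sinh(u0 * z) / z

# zeros predicted at z = i*n*pi/u0, n = +-1, +-2, ...
y1 = math.pi / u0
on_axis = abs(Xi(1j * y1))            # should vanish
off_axis = abs(Xi(0.1 + 1j * y1))     # same y, but x != 0: no zero

# squared-modulus identity (4.15)
x, y = 0.3, 2.0
lhs = abs(Xi(complex(x, y))) ** 2
rhs = Om0 ** 2 * (math.sinh(u0 * x) ** 2 + math.sin(u0 * y) ** 2) / (x * x + y * y)
```

Since sh²(u_0 x) > 0 for every x ≠ 0, the numerator in (4.15) cannot vanish off the imaginary axis, which is what the numerical comparison reflects.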
The parameter w_0(z) in the second mean-value theorem is here a real constant in the whole complex plane,
w_0(z) = u_0, (u_0(x, y) = u_0, v_0(x, y) = 0). (4.17)
Practically, the second mean-value theorem compares the result for an arbitrary function Ω(u) under the given restrictions with that for a step function Ω(u) = Ω(0) θ(u_0 − u) by preserving the value Ω(0) and making the parameter u_0 depend on z in the whole complex plane. Without discussing now quantitative relations, the formulae (4.17) suggest that v_0(x, y) will stay a 'small' function compared with u_0(x, y) in the neighborhood of the imaginary axis (i.e., for |x| ≪ |y|) in a certain sense. We will see in the next Section that the function u_0(0, y), taking into account v_0(0, y) = 0, determines the functions u_0(x, y) and v_0(x, y) and thus w_0(z) in the whole complex plane via the Cauchy-Riemann equations in an operational approach, that means in an integrated form which we did not find up to now in the literature. The general formal part is again delegated to Appendix B.

Accomplishment of proof for zeros of Xi functions on imaginary axis alone
In the last Section we discussed the application of the second mean-value theorem to the function Ξ(z) on the imaginary axis z = iy. Equations (3.14) and (3.15) or their equivalent forms (3.16) or (3.17) are not yet sufficient to derive conclusions about the position of the zeros off the imaginary axis, that is for x ≠ 0.
We have yet to derive more information about the mean-value function w_0(z), which we obtain by relating the real-valued functions u_0(x, y) and v_0(x, y) to the function u_0(0, y) on the imaginary axis (v_0(0, y) = 0).
The general case of complex z can be obtained from the special case z = iy in (4.9) by application of the displacement operator exp(−ix ∂/∂y) to the function Ξ(iy) according to
Ξ(x + iy) = exp(−ix ∂/∂y) Ξ(iy). (5.1)
The function u_0(0, y − ix) = w_0(x + iy) = u_0(x, y) + i v_0(x, y) is related to u_0(0, y) as follows
u_0(x, y) = cos(x ∂/∂y) u_0(0, y), v_0(x, y) = −sin(x ∂/∂y) u_0(0, y), (5.2)
or in more compact form
w_0(x + iy) = exp(−ix ∂/∂y) u_0(0, y) = u_0(0, y − ix). (5.3)
This is presented in Appendix B in more general form for additionally non-vanishing v_0(0, y) and arbitrary holomorphic functions. It means that we may obtain u_0(x, y) and v_0(x, y) by applying the operators cos(x ∂/∂y) and −sin(x ∂/∂y), respectively, to the function u_0(0, y) on the imaginary axis (recall that v_0(0, y) vanishes there in our case). Clearly, equations (5.2) are in agreement with the Cauchy-Riemann equations ∂u_0/∂x = ∂v_0/∂y and ∂u_0/∂y = −∂v_0/∂x as a minimal requirement. We now write Ξ(z) in the form equivalent to (5.1)

Ξ(x + iy) = Ω(0) sh((u_0(x, y) + i v_0(x, y))(x + iy))/(x + iy). (5.4)
The denominator x + iy does not contribute to zeros. Since the Hyperbolic Sine possesses zeros only on the imaginary axis, we see from (5.4) that we may expect zeros only for such related variables (x, y) which satisfy the necessary condition of the vanishing of the real part of its argument, which leads as we already know to (see (3.14))
u_0(x, y)x − v_0(x, y)y = 0. (5.5)
The zeros with coordinates (x_k, y_k) themselves can then be found as the (in general non-degenerate) solutions of the following equation (see (3.15))
u_0(x, y)y + v_0(x, y)x = nπ, (n = 0, ±1, ±2, . . .), (5.6)
if these pairs (x, y) satisfy the necessary condition (5.5). Later we will see that this provides the whole spectrum of solutions for the zeros, but we can also obtain each (x_k, y_k) separately from one branch n and would then denote them by (x_n, y_n). Thus we have first of all to look for such pairs (x, y) which satisfy the condition (5.5) off the imaginary axis, that is for x ≠ 0, since we know already that these functions may possess zeros on the imaginary axis z = iy.
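The displacement-operator relation exp(−ix ∂/∂y) f(y) = f(y − ix) used above can be tested on an entire function. Here f(y) = cos(y) serves as an illustrative stand-in for u_0(0, y) (which is not known in closed form); the operator's Taylor series is summed directly:

```python
import cmath
import math

def displaced_cos(y, x, terms=40):
    # exp(-i x d/dy) cos(y) = sum_m (-i x)^m / m! * (d/dy)^m cos(y);
    # the derivatives of cos cycle with period 4
    derivs = [math.cos(y), -math.sin(y), -math.cos(y), math.sin(y)]
    total = 0j
    factorial = 1.0
    for m in range(terms):
        if m > 0:
            factorial *= m
        total += (-1j * x) ** m / factorial * derivs[m % 4]
    return total

x, y = 0.7, 1.3
lhs = displaced_cos(y, x)
rhs = cmath.cos(complex(y, -x))   # cos(y - i x), the displaced function
err = abs(lhs - rhs)
```

The real and imaginary parts of the result correspond to the actions of cos(x ∂/∂y) and −sin(x ∂/∂y), exactly as in (5.2).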

Using (5.2) we may represent the necessary condition (5.5) for the proof by the second mean-value theorem in the form
x cos(x ∂/∂y) u_0(0, y) + y sin(x ∂/∂y) u_0(0, y) = 0, (5.7)
and equation (5.6) in the form
y cos(x ∂/∂y) u_0(0, y) − x sin(x ∂/∂y) u_0(0, y) = nπ, (n = 0, ±1, ±2, . . .). (5.8)
We may represent Eqs. (5.7) and (5.8) in a simpler form using the following operational identities
x cos(x ∂/∂y) + y sin(x ∂/∂y) = sin(x ∂/∂y) y,
y cos(x ∂/∂y) − x sin(x ∂/∂y) = cos(x ∂/∂y) y, (5.9)
which are a specialization of the operational identities (B.11) in Appendix B with w(z) = u(x, y) + iv(x, y) → z = x + iy and therefore u(x, y) → x, v(x, y) → y. Applying (5.9) (changing the order y u_0(0, y) = u_0(0, y)y) we may write (5.7) in the form
sin(x ∂/∂y) {u_0(0, y)y} = 0, (5.10)
and (5.8) in the form
cos(x ∂/∂y) {u_0(0, y)y} = nπ, (n = 0, ±1, ±2, . . .). (5.11)
The problem of the zeros for arbitrary x is now contained in the conditions (5.10) and (5.11), which we now discuss.
Since cos(x ∂/∂y) is a nonsingular operator, we can multiply both sides of equation (5.11) by its inverse operator and obtain
u_0(0, y)y = nπ, (n = 0, ±1, ±2, . . .), (5.12)
since the constant nπ is reproduced by the inverse operator. This equation is yet fully equivalent to (5.11) for arbitrary x, but it provides only the same solutions for the values y of zeros as for zeros on the imaginary axis. This alone already suggests that it cannot be that zeros with x ≠ 0, if they exist, possess the same values of y as the zeros on the imaginary axis. But in such form the proof of the impossibility of zeros off the imaginary axis seemed to us not satisfactory, and we present in the following some slightly different variants which go deeper into the details of the proof. In an analogous way, by multiplication of (5.10) with the operator sin(x ∂/∂y) and of (5.11) with the operator cos(x ∂/∂y) and addition of both equations, we also obtain condition (5.12), that means
u_0(0, y)y = (sin²(x ∂/∂y) + cos²(x ∂/∂y)) {u_0(0, y)y} = nπ, (n = 0, ±1, ±2, . . .). (5.13)
The equal conditions (5.12) and (5.13), which are identical with the condition for zeros on the imaginary axis, are a necessary condition for all zeros. For each chosen equivalent n (recall that u_0(0, y) then depends on n, which we do not indicate in the notation) one obtains an infinite series of solutions y_k for the zeros of the function Ξ(iy)
u_0(0, y_k)y_k = nπ, {u_0(0, y)y}_(y=y_k) = nπ, (5.14)
whereas for y ≠ y_k equation (5.12), by definition of y_k, is not satisfied. Supposing that we know u_0(0, y), which as a rule is not the case, we could solve for each n = 0, ±1, ±2, . . . the usually transcendental equations (5.13) graphically, for example, by drawing the equivalent functions u_0(0, y)y over the variable y as abscissa and looking for the intersection points with the lines nπ (Section 7). These intersection points y = y_k are the solutions for zeros y_k on the imaginary axis. Choosing x = 0, the second condition (5.10) is identically satisfied. Now we have to look for additional zeros (x, y_k) with x ≠ 0.
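The operational identity x cos(x ∂/∂y) + y sin(x ∂/∂y) = sin(x ∂/∂y) y used above can be checked on exponential test functions, on which cos(x ∂/∂y) and sin(x ∂/∂y) act as multiplication by cos(xa) and sin(xa); the values of a, x, y below are arbitrary choices:

```python
import math

a = 0.8            # test exponential f(y) = e^{a y}
x, y = 0.5, 1.7
f = math.exp(a * y)

# on f: cos(x d/dy) f = cos(x a) f and sin(x d/dy) f = sin(x a) f,
# so the left-hand side of the identity is
lhs = x * math.cos(x * a) * f + y * math.sin(x * a) * f

# right-hand side sin(x d/dy)(y f), summed as a truncated operator series,
# using (d/dy)^k (y e^{a y}) = (y a^k + k a^{k-1}) e^{a y}
rhs = 0.0
for m in range(25):
    k = 2 * m + 1
    rhs += ((-1.0) ** m * x ** k / math.factorial(k)
            * (y * a ** k + k * a ** (k - 1)) * f)
err = abs(lhs - rhs)
```

The extra term generated when the derivatives hit the explicit factor y is exactly what turns y sin(x ∂/∂y) + x cos(x ∂/∂y) into sin(x ∂/∂y) y.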
Whereas for zeros with x = 0 the condition (5.10) is identically satisfied, we have to examine this condition for zeros with x ≠ 0. In the case of x ≠ 0 we may divide both sides of the condition (5.10) by x and obtain, by Taylor series expansion of the operator,
Σ_(m=0)^∞ ((−1)^m x^(2m)/(2m+1)!) ∂^(2m+1)/∂y^(2m+1) {u_0(0, y)y} = 0. (5.15)
This condition has also to be satisfied for the solutions y = y_k of (5.12) which make this equation to the identity (5.14); in lowest order of x this means that
(∂u_0/∂y)(0, y_k)y_k + u_0(0, y_k) = (∂u_0/∂y)(0, y_k)y_k + nπ/y_k = 0 (5.17)
has to be identically satisfied. Moreover, expanding (5.10) in powers of x and setting all sum terms proportional to x^(m+1) equal to zero, we obtain
∂^(m+1)/∂y^(m+1) {u_0(0, y)y} = 0, (m = 0, 1, 2, . . .). (5.18)
The same conditions follow also from (5.11) combined with (5.12) by Taylor series expansion with respect to x ∂/∂y for x ≠ 0 according to
cos(x ∂/∂y) {u_0(0, y)y} = Σ_(m=0)^∞ ((−1)^m x^(2m)/(2m)!) ∂^(2m)/∂y^(2m) {u_0(0, y)y} = nπ,
by setting all sum terms proportional to x^(m+1) equal to zero. If we now make a Taylor series expansion of the function u_0(0, y)y in the neighborhood y = y_k of a solution which obeys all conditions (5.14) and (5.18), then we find that u_0(0, y)y = nπ must hold identically in y, that means
u_0(0, y) = nπ/y,
for a certain integer n. Thus we could find zeros for x ≠ 0, that means off the imaginary axis, only if the mean-value function u_0(0, y) on the imaginary axis possesses this form. According to (5.2) the whole mean-value functions u_0(x, y) and v_0(x, y) are then
u_0(x, y) = cos(x ∂/∂y) (nπ/y), v_0(x, y) = −sin(x ∂/∂y) (nπ/y), (5.24)
or in compact form
w_0(z) = nπ/(y − ix) = i nπ/z, ⇒ w_0(z)z = i nπ. (5.25)
If we insert w_0(z)z = i nπ into equation (5.4) then Ξ(z) vanishes identically, which is impossible for a function Ξ(z) with Ξ(0) = Ω_0 ≠ 0. The further discussion of the impossibility of this case is the same as before.
A third variant to show the impossibility of zeros in the case x ≠ 0 is to make the transition to the Fourier transform of u_0(0, y)y, to solve the equation arising from (5.15) by generalized functions, and then to make the inverse Fourier transformation. One may show then that this is not compatible with the general solution of (5.11) which determines the position of the zeros. We do not present this variant here.
We have now finally proved that all Xi functions Ξ(z) of the form (3.1) for which the second mean-value theorem is applicable (function Ω(u) positively semi-definite and non-increasing, or also negatively semi-definite and non-decreasing) may possess zeros only on the imaginary axis.

Consequences for proof of the Riemann hypothesis
The given proof for zeros only on the imaginary axis x = 0 for the considered Xi functions Ξ(z) = Ξ(x + iy) includes as a special case the function Ω(u) for the Riemann hypothesis which is given in (2.26). However, it includes also the whole class of modified Bessel functions of imaginary argument I_ν(z), which possess zeros only on the imaginary axis, and, if we make the substitution z ↔ iz, also the usual Bessel functions J_ν(z), which possess zeros only on the real axis.
We may ask about possible degeneracies of the zeros of the Xi functions Ξ(z) on the imaginary axis z = iy. Our proof does not give a recipe to see whether such degeneracies are possible or not. In the case of the Riemann zeta function Ξ(z) ↔ ξ(s) ↔ ζ(s) one cannot expect a degeneracy because the countably many nontrivial zeros are likely irrational (perhaps even transcendental) numbers, but we do not know a proof for this.
For Ξ(z) as an entire function one may pose the question of its factorization with factors of the form 1 − z/z_n, where z_n goes through all roots and where, in case of degeneracy, the same factors are taken multiple times according to the degeneracy. It is well known that an entire function using its ordered zeros z_n, (|z_n| ≤ |z_(n+1)|), can be represented in Weierstrass product form multiplied by an exponential function e^(h(z)) with an entire function h(z) in the exponent, so that e^(h(z)) is an entire function without zeros. This possesses the form (e.g., [15])
Ξ(z) = e^(h(z)) Π_n (1 − z/z_n) e^(P_(k_n)(z/z_n)), (6.1)
with polynomials P_k(w) of degree k which, depending on the roots z_n, must be appropriately chosen to guarantee the convergence of the product. This polynomial is defined by the first k sum terms in the Taylor series for −log(1 − w)
P_k(w) = Σ_(m=1)^k w^m/m.
By means of these polynomials the Weierstrass factors are defined as the functions
E_k(w) = (1 − w) e^(P_k(w)),
from which follows
log E_k(w) = log(1 − w) + P_k(w) = −Σ_(m=k+1)^∞ w^m/m.
From this form it is seen that E_k(w) is a function with a zero at w = 1 but with a Taylor series expansion which begins with the terms 1 − w^(k+1)/(k+1). Hadamard made a precision of the Weierstrass product form by connecting the degree k_n of the polynomials in (6.1) with the order ρ of growth of the entire function and showed that k_n can be chosen independently of the n-th root z_n by k_n → k ≥ ρ − 1. The order of Ξ(z), which is equal to 1, is not a strict order ρ = 1 (for this last notion see [15]). However, this does not play a role in the Hadamard product representation of Ξ(z), and the polynomials P_(k_n)(w) in (6.1) can be chosen as P_0(w), that means equal to 0, according to k_n = k = ρ − 1 = 0. The entire function h(z) in the exponent in (6.1) can only be a constant since otherwise it would introduce a higher growth of Ξ(z). Thus the product representation of Ξ(z) possesses the form
Ξ(z) = Ξ(0) Π_(n=1)^∞ (1 − z/z_n)(1 − z/z_(−n)) = Ξ(0) Π_(n=1)^∞ (1 + z²/y_n²), Ξ(0) = Ω_0 = ∫_0^(+∞) du Ω(u) = ξ(1/2) = 0.49712 . . .
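The Weierstrass factors and their leading Taylor behavior are easy to check numerically; the sketch below verifies that E_k(w) deviates from 1 − w^(k+1)/(k+1) only in higher order of w:

```python
import math

def P(k, w):
    # first k terms of the Taylor series of -log(1 - w)
    return sum(w ** m / m for m in range(1, k + 1))

def E(k, w):
    # Weierstrass primary factor E_k(w) = (1 - w) * exp(P_k(w))
    return (1.0 - w) * math.exp(P(k, w))

w = 0.01
dev2 = abs(E(2, w) - (1.0 - w ** 3 / 3.0))   # remainder is O(w^4)
dev4 = abs(E(4, w) - (1.0 - w ** 5 / 5.0))   # remainder is O(w^6)

# for Xi(z) the Hadamard product needs only k = 0, where E_0(w) = 1 - w
E0_half = E(0, 0.5)
```

The case k = 0 used for Ξ(z) reduces every factor to the plain 1 − z/z_n, consistent with the product form stated in the text.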
(6.6), where we took into account the symmetry z_(−n) = −z_n = (z_n)* of the zeros and the proved fact z_n = iy_n that all zeros lie on the imaginary axis, and that a zero z_0 = 0 is absent. With Ω_0 we denoted the zeroth moment of the function Ω(u). Formula (6.6) in connection with his hypothesis was already used by Riemann in [1] and later proved by von Mangoldt, where the product representation of entire functions by Weierstrass, which was later stated more precisely by Hadamard, plays a role. There is another formula for an approximation to the number of nontrivial zeros of ζ(s) or ξ(s) which, in application to the number of zeros N(Y) of Ξ(z) on the imaginary axis z = iy (critical line) in the interval between y = 0 and y = Y, takes on the form (Y for Ξ(z) is equivalent to the usual T for ζ(s))
N(Y) ≈ (Y/2π)(log(Y/2π) − 1), (6.7)
with the logarithmically growing density
ν(y) ≈ (1/2π) log(y/2π), (y ≫ 1). (6.8)
As long as the Riemann hypothesis was not proved it was formulated for the critical strip 0 ≤ σ ≤ 1 of the complex coordinate s = σ + it in ξ(s) parallel to the imaginary axis and with t between t = 0 and t = T (with T equal to our Y in (6.7)). It was already suggested by Riemann [1] but not proved in detail there and was later proved by von Mangoldt in 1905. A detailed proof by means of the argument principle can be found in [12]. The result of Hardy (1914) (cited in [5]) that there exists an infinite number of zeros on the critical line is a step toward the full proof of the Riemann hypothesis. Section 4 of the present article may be considered as involving such a proof of this last statement.
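The counting formula (6.7) and the density (6.8) can be checked against the first zeros quoted earlier (y_1 ≈ 14.135, y_2 ≈ 21.022, y_3 ≈ 25.0); the density is just the derivative of the counting function:

```python
import math

def N_approx(Y):
    # Riemann-von Mangoldt approximation (6.7)
    return Y / (2.0 * math.pi) * (math.log(Y / (2.0 * math.pi)) - 1.0)

def density(y):
    # logarithmic density of zeros (6.8)
    return math.log(y / (2.0 * math.pi)) / (2.0 * math.pi)

n30 = N_approx(30.0)   # three zeros (14.135, 21.022, 25.011) lie below Y = 30
slope = (N_approx(30.01) - N_approx(29.99)) / 0.02   # should match density(30)
```

Already at Y = 30 the approximation is within about 0.3 of the true count of three zeros.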
We have now proved that functions Ξ(z) defined by integrals of the form (3.1) with non-increasing functions Ω(u) which decrease at infinity sufficiently rapidly, in a way that Ξ(z) becomes an entire function of z, possess zeros only on the imaginary axis z = iy. This did not provide a recipe to see in which cases all zeros on the imaginary axis are simple zeros, but it is unlikely that within a countable sequence of irregularly distributed (probably) transcendental numbers (the zeros) two of them coincide (it seems to be difficult to formulate this last statement in a more rigorous way). It also did not provide a direct formula for the number of zeros in an interval [(0, 0), (0, Y)] from zero to Y on the imaginary axis or for their density there but, as already said, Riemann [1] suggested for this an approximate formula and von Mangoldt proved it. The proof of the Riemann hypothesis is included as the special case (2.26) of the function Ω(u) into a wider class of functions with an integral representation of the form (3.1) which, under the discussed necessary conditions allowing the application of the second mean-value theorem of calculus, possess zeros only on the imaginary axis. The equivalent forms (2.35) and (2.36) of the integral (3.1), in which the involved functions, for example Ω^(1)(u), are in general no longer non-increasing, suggest that conditions for zeros only on the imaginary axis exist for more general cases than those prescribed by the second mean-value theorem. A certain difference may happen, for example, for z = 0 because powers of it are in the denominators of the representations in (2.36).

Graphical illustration of mean-value parameters to Xi function for the Riemann hypothesis
To get an impression how the mean-value function w_0(z) = w_0(x + iy) looks like, we calculate it numerically for the imaginary axis and for the real axis in the case of the function Ω(u) in (2.26). From the two equations Ξ(z) = Ω(0) sh(w_0(z)z)/z for general z and Ξ(0) = Ω(0) w_0(0) for z = 0 follows on the real axis
w_0(x, 0) = (1/x) Arsh(x Ξ(x)/Ω(0)) = (Ξ(x)/Ω(0)) {1 − (1/6)(x Ξ(x)/Ω(0))² + . . .},
and correspondingly on the imaginary axis u_0(0, y) = (1/y) arcsin(y Ξ(iy)/Ω(0)), where we applied the first two terms of the Taylor series expansion of Arsh(x) in powers of x. A small problem here is that we get the values of the multi-valued inverse functions only in their basic range, e.g., −π/2 ≤ arcsin(t) ≤ +π/2. Since (1/t) arcsin(t) is an even function with only positive coefficients in its Taylor series, the term in braces is in every case positive, which becomes important below.
[Fig. 7.5 caption:] On the left-hand side the mean-value parameters w_0(iy)/u_0(0, 0) for the Xi function to the Riemann hypothesis are shown if we take the values of the function arcsin(t) not in the basic range −π/2 ≤ arcsin(t) ≤ +π/2 but in equivalent ranges according to (7.8). On the right-hand side the corresponding functions u_0(0, y)y are shown which, according to sin(x + nπ) = (−1)^n sin(x) and the condition for zeros sin(u_0(0, y)y) = 0, lead to equivalent ranges kπ = u_0(0, y)y ≅ u_0(0, y)y + nπ = (k + n)π, (k = ±1, ±2, . . . ; n = 0, ±1, ±2, . . .) (see (4.12)) and determine the zeros of the Xi function on the imaginary axis. We see that the multi-valuedness of the arcsin function does not spoil a unique result for the zeros because every branch finds the corresponding n of nπ where then all zeros lie. Due to the extremely rapid decrease of the function u_0(0, y) with increasing y this is difficult to see (the positions of the first three zeros at y_1 ≈ 14.1, y_2 ≈ 21.0, y_3 ≈ 25.0 are shown), but if we separate small intervals of y and enlarge the range of values of u_0(0, y) this becomes visible (similar as in Fig. 2.3). We do not do this here because this effect is better visible for the modified Bessel functions which we intend to consider at another place.
The two curves which we get for w_0(x, 0)/w_0(0, 0) and for w_0(iy)/w_0(0, 0) are shown in Fig. 7.4. The function w_0(x, 0) on the real axis y = 0 (second partial picture) is not very exciting. The necessary condition x u_0(x, 0) = 0 (see (5.5)) can be satisfied only for x = 0, but since Ξ(0) = ∫_0^(+∞) du Ω(u) ≈ 0.4971 ≠ 0 it is easily seen that there is no zero. For the function w_0(0, y) on the imaginary axis x = 0 the necessary condition y v_0(0, y) = 0 (see (5.5)) is trivially satisfied since v_0(0, y) = 0 and does not restrict the solutions for zeros. In this case only the sufficient condition y u_0(0, y) = nπ, (n = 0, ±1, ±2, . . .), determines the position of the zeros on the imaginary axis. The first two pairs of zeros are at y_1 ≈ ±14.135, y_2 ≈ ±21.022, and the reason that we do not see them in Fig. 7.4 is the rapid decrease of the function u_0(0, y) with increasing y. If we enlarge this range we see that the curve goes beyond the y-axis after the first root at 14.135 of the Xi function. As a surprise for the second mean-value method we see that the parameter u_0(0, y) becomes oscillating around this axis. This means that the roots, which are generally determined by the equation y u_0(0, y) = nπ (see (5.6)), are determined here by the value n = 0 alone. The reason for this is the multi-valuedness of the arcsin function according to
arcsin(t) ≅ (−1)^n arcsin(t) + nπ, (n = 0, ±1, ±2, . . .). (7.8)
If we choose the values of the arcsin function not in the basic interval −π/2 ≤ arcsin(t) ≤ +π/2, for which the Taylor series provides the values, but from other equivalent intervals according to (7.8), we get other curves for u_0(0, y) and y u_0(0, y) from which we also may determine the zeros (see Fig. 7.5), however, with other values n in the relation y u_0(0, y) = nπ, (n = 0, ±1, ±2, . . .), and the results are invariant with respect to the multi-valuedness. This is better to see in the case of the modified Bessel functions, for which the curves vanish less rapidly with increasing y, as we intend to show at another place.
All these considerations do not touch the proof of the non-existence of roots off the imaginary axis but should serve only for a better understanding of the involved functions. It seems that the specific phenomena of the second mean-value theorem (3.9) when the functions g(u) there are oscillating functions (recall that only continuity is required) are not yet well illustrated in detail.
We now derive a few general properties of the function u_0(0, y) which can be seen in the Figures. From (4.9), written in the form and expanded in a Taylor series according to
∫_0^(+∞) du Ω(u) cos(uy) = Ω(0) u_0(0, y) sin(u_0(0, y)y)/(u_0(0, y)y) = Ω(0) u_0(0, y) {1 − (y²/6)(u_0(0, y))² + . . .}, (7.9)
follows from the even symmetry of the left-hand side that u_0(0, y) also has to be a function of the variable y with even symmetry (notation ∂^n u_0(0, y)/∂y^n ≡ u_0^(n)(0, y)). Concretely, we obtain by n-fold differentiation of both sides of (7.9) at y = 0 for the first coefficients of the Taylor series
u_0(0, 0) = Ω_0/Ω(0), u_0^(1)(0, 0) = 0,
u_0^(2)(0, 0) = −(1/Ω(0)) ∫_0^(+∞) du Ω(u)u² + (1/3)(Ω_0/Ω(0))³. (7.13)
Since the first sum term on the right-hand side is negative and the second is positive, it depends on their values whether u_0^(2)(0, 0) possesses a positive or a negative value. For the special function Ω(u) in (2.26) which plays a role in the Riemann hypothesis we find approximately
∫_0^(+∞) du Ω(u)u² ≈ 0.0229719, u_0^(2)(0, 0) ≈ −0.00567784, (7.14)
meaning that the second coefficient in the expansion of u_0(0, y) in a Taylor series in powers of y is negative, which can be seen in the first part of Fig. 7.4. However, as we have seen, the proof of the Riemann hypothesis is by no means critically connected with these numerical values.
In principle, the proof of the Riemann hypothesis is now accomplished and illustrated, and we will stop here. However, for a deeper understanding of the proof it would be favorable to consider some further aspects, such as, for example, analogues for other functions with a representation of the form (3.1) and with zeros only on the imaginary axis, and some other approaches which, although they did not lead to the full proof, are instructive; this, however, we cannot do here.

Equivalent formulations of the main theorems in a summary
In the present article we proved the following main result.
Theorem 1: Let Ω(u) be a real-valued function of the variable u in the interval 0 ≤ u < +∞ which is positive semi-definite in this interval and non-increasing and is rapidly vanishing at infinity, more rapidly than any exponential function exp(−λu), that means
lim_(u→+∞) Ω(u) exp(λu) = 0, (λ > 0 arbitrary). (8.1)
Then the function
Ξ(z) = ∫_0^(+∞) du Ω(u) ch(uz), (z = x + iy), (8.2)
is an entire function whose zeros lie all on the imaginary axis z = iy.

Proof:
The proof of this theorem for non-increasing functions Ω(u) occupies Sections 3-5 of this article. The function Ω(u) in (2.26) satisfies these conditions and thus provides a proof of the Riemann hypothesis.

Remark:
An analogous theorem is obviously true if one substitutes in (8.2) ch(uz) ↔ cos(uz) and interchanges the roles of the imaginary and the real axis, y ↔ x. Furthermore, a similar theorem with a few peculiarities is true when substituting ch(uz) in (8.2) by sh(uz).
Theorem 1 can be formulated in some equivalent ways which lead to interesting consequences. The Mellin transform f̂(s) of an arbitrary function f(t) together with its inversion is defined by [32,33,34,35]
f̂(s) = ∫_0^(+∞) dt f(t) t^(s−1), f(t) = (1/(2πi)) ∫_(c−i∞)^(c+i∞) ds f̂(s) t^(−s),
where the real value c has only to lie in the convergence strip for the definition of f̂(s) by the integral. Formula (8.2) is an integral transform of the function ch(z) and can be considered as the application of an integral operator to the function ch(z), which, using the Mellin transform Ω̂(s) of the function Ω(u), can be written in the following convenient form
Ξ(z) = ∫_0^(+∞) du Ω(u) ch(uz) = Ω̂((∂/∂z) z) ch(z).
This is due to
∫_0^(+∞) du Ω(u) u^(z ∂/∂z) ch(z) = Ω̂(z ∂/∂z + 1) ch(z) = Ω̂((∂/∂z) z) ch(z),
where u^(z ∂/∂z) is the operator of multiplication of the argument of an arbitrary function g(z) by the number u, i.e., it transforms as follows
u^(z ∂/∂z) g(z) = g(uz),
according to the following chain of conclusions, starting from the property that all functions z^n, (n = 0, 1, 2, . . .), are eigenfunctions of z ∂/∂z to the eigenvalue n,
z ∂/∂z z^n = n z^n, ⇒ f(z ∂/∂z) z^n = f(n) z^n, ⇒ exp(λ z ∂/∂z) z^n = e^(λn) z^n, ⇒ exp(λ z ∂/∂z) g(z) = g(e^λ z), ⇒ u^(z ∂/∂z) g(z) = g(uz), (e^λ ≡ u). (8.8)
This chain is almost obvious and does not need more explanations. The operators u^(z ∂/∂z) are linear operators in linear spaces depending on the considered set of numbers u.
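The chain (8.8) is easily verified on polynomials, where f(z ∂/∂z) acts on the coefficient of z^n as multiplication by f(n); the sample polynomial and numbers below are arbitrary:

```python
def scale_op(u, coeffs):
    # u^{z d/dz} acts on z^n as multiplication by u^n, i.e. it maps p(z) -> p(u z)
    return [c * u ** n for n, c in enumerate(coeffs)]

def evaluate(coeffs, z):
    return sum(c * z ** n for n, c in enumerate(coeffs))

p = [1.0, -2.0, 0.5, 3.0]   # p(z) = 1 - 2z + 0.5 z^2 + 3 z^3 (arbitrary sample)
u, z = 1.7, 0.9
lhs = evaluate(scale_op(u, p), z)   # (u^{z d/dz} p)(z)
rhs = evaluate(p, u * z)            # p(u z)
err = abs(lhs - rhs)
```

Integrating this coefficient-wise action against Ω(u) is precisely what produces the Mellin transform Ω̂ evaluated at the operator z ∂/∂z + 1.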
Expressed by the real variables (x, y) instead of z and ∂/∂z, it follows from this formula that Ξ(iy) may be obtained by transformation of ch(iy) = cos(y) alone via
Ξ(iy) = Ω̂((∂/∂y) y) cos(y). (8.10)
On the right-hand side we have a certain redundancy since for analytic functions the information which is contained in the values of the function on the imaginary axis is fully contained also in other parts of the function (here of ch(z)). The most simple transformation of ch(z) is by a delta function δ(u − u_0) as function Ω(u), which only stretches the argument of the Hyperbolic Cosine, ch(z) → ch(u_0 z). The next simple transformation is with a function Ω(u) in the form of a step function θ(u_0 − u), which leads to the transformation ch(z) → (1/z) sh(u_0 z). Our application of the second mean-value theorem reduced other cases under the suppositions of the theorem to this case, however, with a parameter u_0 = u_0(z) depending on the complex variable z.
The great analogy between displacement operators (infinitesimal operator −i ∂/∂x) of the argument of a function and multiplication operators (infinitesimal operator x ∂/∂x) of the argument of a function with respect to the roles of the Fourier transformation and of the Mellin transformation can best be seen from the following two relations
exp(a ∂/∂x) f(x) = f(x + a), u^(x ∂/∂x) f(x) = f(ux). (8.11)
We recall that the Mellin and Fourier transforms are related by substituting the integration variable u = e^y and the independent variable s = −it together with the substitutions f(e^y) ↔ f(y) and f̂(−it) ↔ f̃(t) in (8.11).
Using the discussed Mellin transformation, Theorem 1 can be reformulated as follows (cf. Fig. 2.2). In the case of the (modified) Bessel functions we find by partial integration (e.g., [32])

∫_0^1 du (1 − u²)^{ν−1/2} ch(uz) = ((2ν − 1)/z) ∫_0^1 du (1 − u²)^{ν−3/2} u sh(uz), (8.14)

where the function (1 − u²)^{ν−3/2} u in the second transform is, for ν > 3/2, non-negative but not monotonic and possesses a maximum for a certain value u_max within the interval 0 < u_max < 1. The forms (8.13) for Ξ(z) and (8.14) suggest that a theorem similar to that for the integral in (8.2) should hold with the substitution ch(uz) → sh(uz), and that monotonicity of the corresponding functions should not be the ultimate requirement for the zeros of such transforms to lie on the imaginary axis.
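The partial-integration identity (8.14) follows from d/du (1 − u²)^{ν−1/2} = −(2ν − 1) u (1 − u²)^{ν−3/2} and the vanishing of the boundary terms; it can be confirmed numerically, e.g. for ν = 2 and real z (a sketch with midpoint quadrature, names ours):

```python
import math

# Both sides of the identity
# int_0^1 (1-u^2)^(nu-1/2) ch(u z) du
#   = (2 nu - 1)/z * int_0^1 (1-u^2)^(nu-3/2) u sh(u z) du
def lhs(nu, z, n=100000):
    h = 1.0 / n
    return sum((1 - ((k + 0.5) * h) ** 2) ** (nu - 0.5)
               * math.cosh((k + 0.5) * h * z) for k in range(n)) * h

def rhs(nu, z, n=100000):
    h = 1.0 / n
    return (2 * nu - 1) / z * sum(
        (1 - ((k + 0.5) * h) ** 2) ** (nu - 1.5) * ((k + 0.5) * h)
        * math.sinh((k + 0.5) * h * z) for k in range(n)) * h

a = lhs(2.0, 1.5)
b = rhs(2.0, 1.5)
print(a, b)   # agree to high accuracy
```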
Another consequence of Theorem 1 follows from the non-negativity of the squared modulus of the function Ξ(z), resulting in the obvious inequality

Ξ(z) (Ξ(z))* ≥ 0, (8.15)

which can be satisfied with the equality sign only on the imaginary axis z = iy for discrete values y = y_k (the zeros of Ξ(z = x + iy)). The right-hand side can be brought to an explicit double-integral form by transition from the Cartesian coordinates (u_1, u_2) to centre-of-mass and relative coordinates (u, ∆u).
Proof: This follows as a direct consequence of the proved Theorem 1. The sufficient condition for this inequality to be satisfied with the equality sign is that we first set x = 0 in the expressions on the right-hand side of (8.15) and that we then determine the zeros y = y_k of the resulting equation Ξ(iy) (Ξ(iy))* = 0. In the case of an indefinite Ω(u), additional zeros on the x-axis are possible.
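For the step-function example Ω(u) = θ(1 − u) discussed above, Ξ(z) = sh(z)/z and the zeros z = iπk all lie on the imaginary axis, so |Ξ|² vanishes there and is strictly positive slightly off the axis. A tiny illustration:

```python
import cmath

# Omega(u) = theta(1-u)  =>  Xi(z) = sh(z)/z, zeros at z = i*pi*k.
def xi(z):
    return cmath.sinh(z) / z if z != 0 else 1.0

on_axis = abs(xi(1j * cmath.pi)) ** 2          # |Xi|^2 at a zero on the axis
off_axis = abs(xi(0.1 + 1j * cmath.pi)) ** 2   # |Xi|^2 slightly off the axis
print(on_axis, off_axis)                        # ~0 versus strictly positive
```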

Remark:
Practically, (8.15) is an inequality for which it is difficult to prove in another way that it can be satisfied with the equality sign only for x = 0. If proved in another way, with the specialization (2.26) for Ω(u), it would constitute an independent proof of the Riemann hypothesis.

Conclusion
We proved in this article the Riemann hypothesis, embedded into a more general theorem for a class of functions Ξ(z) with a representation of the form (3.1) for real-valued functions Ω(u) which are positive semi-definite and non-increasing in the interval 0 ≤ u < +∞ and which vanish in infinity more rapidly than any exponential function exp(−λu) with λ > 0. The special Xi function Ξ(z) to the function Ω(u) given in (2.26), which is essentially the xi function ξ(s), equivalent to the Riemann zeta function ζ(s) concerning the hypothesis, belongs to the described class of functions. Modified Bessel functions of imaginary argument, 'normalized' to entire functions, (2/(iz))^ν J_ν(iz) = (2/z)^ν I_ν(z) for ν ≥ 1/2, also belong to this class of functions with a representation of the form (3.1) with Ω(u) satisfying the mentioned conditions; in this last case it is well known, and proved in an independent way, that their zeros lie only on the imaginary axis, corresponding to the critical line in the Riemann hypothesis. Knowing this property of the modified Bessel functions, we looked from the beginning for whole classes of functions, including the Riemann zeta function, which satisfy conditions analogous to those expressed in the Riemann hypothesis. The details of the approach to Bessel functions and also to certain classes of almost-periodic functions we prepare for another work.
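The Bessel case can be illustrated numerically: for ν = 1 the member of the class is Ξ(z) = ∫_0^1 du (1 − u²)^{1/2} ch(uz), which is proportional to (2/z) I_1(z); on the imaginary axis z = iy it is proportional to J_1(y)/y, so its zeros sit at z = i j_{1,k} with j_{1,1} = 3.8317… the first real zero of J_1. A sketch (midpoint quadrature, names ours):

```python
import math

# Xi(iy) for nu = 1: int_0^1 (1-u^2)^(1/2) cos(u y) du, proportional
# to J_1(y)/y; the sign change near y = 3.8317 marks the first zero,
# i.e. the zero z = i * j_{1,1} on the imaginary axis.
def Xi_nu1(y, n=100000):
    h = 1.0 / n
    return sum(math.sqrt(1 - ((k + 0.5) * h) ** 2)
               * math.cos((k + 0.5) * h * y) for k in range(n)) * h

print(Xi_nu1(3.8) > 0, Xi_nu1(3.9) < 0)   # sign change at the first zero of J_1
print(abs(Xi_nu1(3.8317)))                 # ~0 at z = i * j_{1,1}
```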
The numerical search for zeros of the Riemann zeta function ζ(s), s = σ + it, in the critical strip, in particular off the critical line, may now come to an end with the proof of the Riemann hypothesis, since its main purpose was, in our opinion, to find a counter-example to the Riemann hypothesis and thus to disprove it. We did not pay attention in this article to methods of numerical calculation of the zeros with (ultra-)high precision and for very high values of the imaginary part. However, the proof, if correct, may now deliver some calculators from their pain of computing more and more zeros of the Riemann zeta function.
We think that some approaches in this article may also possess importance for other problems. First of all, this is the operational approach of the transition from the real and imaginary part of a function on the real or imaginary axis to an analytic function in the whole complex plane. In principle, this is possible using the Cauchy-Riemann equations, but the operational approach integrates these into two integral instead of differential equations. We think that this is also possible in curved coordinates and is particularly effective when starting from curves of constant real or imaginary part of one of these functions on a curve.
One of the fascinations of prime number theory is the relation of the apparently chaotic distribution function of the prime numbers π(x) on the real axis x ≥ 0 to a fully well-ordered analytic function, the Riemann zeta function ζ(s), at least in its representation in sum form as a special Dirichlet series, thus providing relations between multiplicative and additive representations of arithmetic functions.

Appendix A. Transformation of the Xi function
In this Appendix we transform the function ξ(s), defined in (2.8) by means of the zeta function ζ(s), from the form taken from (2.5) to the form (2.9) using the Poisson summation formula. The Poisson summation formula is the transformation of a sum over a lattice into a sum over the reciprocal lattice. More generally, in the one-dimensional case, the decomposition of a special periodic function F(q) = F(q + a) with period a, defined by the series

F(q) = Σ_{n=−∞}^{+∞} f(q + na)

over functions f(q + na), can be transformed into a sum over the reciprocal lattice providing a Fourier series as follows. For this purpose we expand F(q) in a Fourier series with Fourier coefficients F_m and then make obvious transformations

F(q) = Σ_{m=−∞}^{+∞} F_m exp(i 2πm q/a), (A.2)

where the coefficients F_m of the decomposition of F(q) are given by the Fourier transform f̃(k) of the function f(q), defined in the following way

F_m = (1/a) f̃(2πm/a),   f̃(k) ≡ ∫_{−∞}^{+∞} dq f(q) exp(−ikq).

Using the period b = 2π/a of the reciprocal lattice, the relation on the right-hand side of (A.2) may be written in corresponding alternative forms. In the special case q = 0 one obtains from (A.2) the well-known basic form of the Poisson summation formula

Σ_{n=−∞}^{+∞} f(na) = (1/a) Σ_{m=−∞}^{+∞} f̃(2πm/a). (A.5)

Applied to the Gaussian function f(q) = exp(−πq²), with Fourier transform f̃(k) = exp(−k²/(4π)), this provides a relation which can be written in the following symmetric form (we need it in the following only for q ≥ 0)

√|q| (1 + 2Ψ(q)) = √(1/|q|) (1 + 2Ψ(1/q)),   Ψ(q) ≡ Σ_{n=1}^{+∞} exp(−πn²q²). (A.6)

This is essentially a transformation of the theta function ϑ_3(u, q) in the special case 1 + 2Ψ(q) ≡ ϑ_3(0, e^{−πq²}).
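The symmetric relation (A.6) can be checked numerically at once, since the Gaussian sums converge extremely rapidly (a sketch, names ours):

```python
import math

# Symmetric Poisson/theta relation (A.6):
# sqrt(q) * (1 + 2*Psi(q)) = sqrt(1/q) * (1 + 2*Psi(1/q)),
# with Psi(q) = sum_{n>=1} exp(-pi n^2 q^2).
def psi(q, nmax=100):
    return sum(math.exp(-math.pi * n * n * q * q) for n in range(1, nmax + 1))

q = 0.7
left = math.sqrt(q) * (1 + 2 * psi(q))
right = math.sqrt(1 / q) * (1 + 2 * psi(1 / q))
print(left, right)   # agree to machine precision
```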
We now apply this to a transformation of the function ξ(s). From (2.9) and (2.5) follows (A.7). The second term in braces is convergent for arbitrary q due to the rapid vanishing of the summands of the sum for q → ∞. To the first term in braces we apply the Poisson summation formula (A.5) and obtain from the special result (A.6), with the substitution q → 1/q of the integration variable made in the last line, the transformed expression. Thus from (A.7) we find (A.9). With the substitution of the integration variable

q = e^u,  (q ≥ 0  ⇔  −∞ < u < +∞), (A.10)

with the displacement of the complex variable s to z ≡ s − 1/2, and with the introduction of Ξ(z) instead of ξ(s), this leads to the representation (A.11), where the contribution from the lower integration limit at u = 0 has exactly canceled the constant term 1/2 on the right-hand side of (A.18) and the contribution from the upper limit u → +∞ vanishes. Using (A.16) we find, with the abbreviation Ω(u) according to (A.20), the final form of the representation.

We checked relations (A.17) numerically by computer up to a sufficiently high precision. We also could not find (??) among the known transformations of theta functions. The interesting feature of these sum evaluations is that power functions as well as exponential functions containing the transcendental number π in the exponent are involved in a way which finally leads to a rational number; this should also be attractive for recreational mathematics. In contrast, in the well-known series for the trigonometric functions one obtains for certain rational multiples of π as argument also rational numbers, but there only power functions with rational coefficients are involved, that means rational functions, although an infinite number of them.

In the next Appendix we consider the transition from analytic functions given on the real or imaginary axis to the whole complex plane.
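As a numerical cross-check of a representation Ξ(iy) = ∫_0^{+∞} du Ω(u) cos(uy), the following sketch uses the standard explicit series for the kernel known from Pólya's work on the Xi function; the normalization Ω(u) = 4 Σ_n (2π²n⁴e^{9u/2} − 3πn²e^{5u/2}) e^{−πn²e^{2u}} is our assumption here, not taken from (A.20). With it, Ξ(0) = ξ(1/2) ≈ 0.497 and the first zero of Ξ(iy) appears near y ≈ 14.1347, the imaginary part of the first nontrivial zeta zero:

```python
import math

# Assumed kernel: Omega(u) = 4 * sum_n (2 pi^2 n^4 e^{9u/2}
#                 - 3 pi n^2 e^{5u/2}) * exp(-pi n^2 e^{2u}).
def omega(u, nmax=50):
    a = math.exp(2 * u)
    s = 0.0
    for n in range(1, nmax + 1):
        e = math.pi * n * n * a
        if e > 700:          # remaining terms underflow to zero
            break
        s += (2 * math.pi**2 * n**4 * math.exp(4.5 * u - e)
              - 3 * math.pi * n**2 * math.exp(2.5 * u - e))
    return 4 * s

def Xi_imag_axis(y, umax=4.0, n=20000):
    # Xi(iy) = int_0^umax Omega(u) cos(u y) du  (midpoint rule)
    h = umax / n
    return sum(omega((k + 0.5) * h) * math.cos((k + 0.5) * h * y)
               for k in range(n)) * h

print(Xi_imag_axis(0.0))                                   # ~ 0.497 = xi(1/2)
print(Xi_imag_axis(14.0) > 0, Xi_imag_axis(14.3) < 0)      # zero near 14.1347
```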

Appendix B.
Transition from analytic functions on the real or imaginary axis to the whole complex plane
We write the Cauchy-Riemann equations up to (B.3) in a form which we call operational form, meaning that they may be applied to further functions on the left-hand and correspondingly right-hand side 6 . It is now easy to see that an analytic function w(z) = w(x + iy), (∂/∂z*) w(z) = 0, can be generated from w(x) on the x-axis in operational form by

w(x + iy) = exp(iy ∂/∂x) w(x), (B.8)

( 6 A non-operational form would be if we write, for example, exp(iy ∂/∂x) x = x + iy instead of (B.2), which is correct but cannot be applied to further functions f(x) ≠ const · 1, for example to f(x) = x.)

and is equivalent to (B.10). Analogously, by expansion in powers of x as an intermediate step, we obtain

w(x + iy) = cos(x ∂/∂y) u(0, y) + sin(x ∂/∂y) v(0, y) + i ( cos(x ∂/∂y) v(0, y) − sin(x ∂/∂y) u(0, y) ) = u(x, y) + iv(x, y), (B.16)

which is equivalent to (B.11). Therefore, relations (B.15) and (B.16) represent integral forms of the Cauchy-Riemann equations.
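The operational formula (B.8) can be realized exactly on polynomials, where the Taylor series of the displacement operator terminates; applying exp(iy ∂/∂x) to w(x) then reproduces w(x + iy). A minimal sketch (names ours):

```python
import math

# exp(a d/dx) acting on a polynomial sum_n c_n x^n shifts x -> x + a,
# since exp(a d/dx) x^n = (x+a)^n = sum_k C(n,k) a^k x^(n-k).
def displace(coeffs, a):
    deg = len(coeffs)
    out = [0j] * deg
    for n, c in enumerate(coeffs):
        for k in range(n + 1):
            out[n - k] += c * (a ** k) * math.comb(n, k)
    return out

def poly_eval(coeffs, x):
    return sum(c * x**n for n, c in enumerate(coeffs))

w = [1.0, 0.0, 1.0]          # w(x) = 1 + x^2 on the real axis
wy = displace(w, 1j * 2.0)   # exp(i*2 d/dx) w(x) = w(x + 2i)
print(poly_eval(wy, 0.5))    # equals w(0.5 + 2i) = 1 + (0.5 + 2i)^2
print(1 + (0.5 + 2j) ** 2)
```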
In cases where one of the functions u(x, 0) or v(x, 0) in (B.10), or u(0, y) or v(0, y) in (B.12), is vanishing, these formulae simplify; the case v(0, y) = 0 is applied in Sections 4-6. We did not find such representations up to now in textbooks on complex analysis, but it seems possible that they exist somewhere.