A quantum time-dependent spectrum analysis, or simply quantum spectral analysis (QSA), is presented in this work; it is based on Schrödinger’s equation. In the classical world, it is named frequency in time (FIT), and it is used here as a complement to the traditional frequency-dependent spectral analysis based on Fourier theory. Moreover, FIT is a metric which assesses the impact of the flanks of a signal on its frequency spectrum, which is not taken into account by Fourier theory, let alone in real time. Furthermore, and unlike all tools derived from Fourier theory (i.e., the continuous, discrete, fast, short-time, fractional, and quantum Fourier transforms, as well as the Gabor transform), FIT has the following advantages, among others: 1) compact support with excellent energy output treatment; 2) low computational cost, O(N) for signals and O(N^{2}) for images; 3) it does not have phase uncertainties (i.e., an indeterminate phase for magnitude = 0), as is the case with the Discrete and Fast Fourier Transforms (DFT and FFT, respectively). Finally, we can apply QSA to a quantum signal, that is, to a qubit stream, in order to analyze it spectrally.

The main concepts related to Quantum Information Processing (QIP) may be grouped into the following topics: the quantum bit (qubit, which is the elemental quantum information unit), the Bloch sphere (a geometric environment for qubit representation), Hilbert space (which generalizes the notion of Euclidean space), the Schrödinger equation (a partial differential equation that describes how the quantum state of a physical system changes with time), unitary operators, and quantum circuits. In quantum information theory, a quantum circuit is a model for quantum computation in which a computation is a sequence of quantum gates, which are reversible transformations on a quantum mechanical analog of an n-bit register (this analogous structure is referred to as an n-qubit register). Another group is quantum gates; a quantum logic gate is a basic quantum circuit operating on a small number of qubits (in quantum computing and specifically the quantum circuit model of computation). Finally, quantum algorithms, which run on a realistic model of quantum computation, are the most common use of quantum circuits [

The main idea is to take a classical signal, sample it, quantize it (for example, between 0 and 2^{8} − 1), use a classical-to-quantum interface, give an internal representation of that signal, process that quantum signal (by denoising it, compressing it, etc.), measure the result, use a quantum-to-classical interface and subsequently detect the classical outcome signal. Interestingly, and as we will see later, quantum image processing has aroused more interest than QSP; quoting its creator: “Many new classes of signal processing algorithms have been developed by emulating the behavior of physical systems. There are also many examples in the signal processing literature in which new classes of algorithms have been developed by artificially imposing physical constraints on implementations that are not inherently subject to these constraints” [

In quantum computing, the QFT is a linear transformation on quantum bits and it is the quantum version of the discrete Fourier transform. The QFT is a part of many quantum algorithms: especially Shor’s algorithm for factoring and computing the discrete logarithm; the quantum phase estimation algorithm for estimating the eigenvalues of a unitary operator; and algorithms for the hidden subgroup problem.

The QFT can be performed efficiently on a quantum computer, with a particular decomposition into a product of simpler unitary matrices. Using a simple decomposition, the discrete Fourier transform can be implemented as a quantum circuit consisting of only O(n^{2}) Hadamard gates and controlled phase shift gates, where n is the number of qubits [. In comparison, the best classical algorithms require O(n2^{n}) gates (where n is the number of bits), which is exponentially more than O(n^{2}). However, the quantum Fourier transform acts on a quantum state, whereas the classical Fourier transform acts on a vector. Therefore, not all the tasks that use the classical Fourier transform can take advantage of this exponential speedup. Moreover, the best QFT algorithms known today require only O(n log n) gates to achieve an efficient approximation [

Finally, this work is organized as follows: Fourier theory is outlined in Section 2, where we present the following concepts: the Fourier Transform, the Discrete Fourier Transform, and the Fast Fourier Transform. In Section 3, we present the proposed new spectral methods with their consequences. Section 4 provides conclusions and a proposal for future works.

In this section, we discuss the tools which are needed to understand the full extent of QSA. These tools are: the Fourier Transform, the Discrete Fourier Transform (DFT), and the Fast Fourier Transform (FFT). They were developed based on a main concept, the uncertainty principle, which is fundamental to understanding the theory behind QSA-FIT. Other transforms which also belong to Fourier theory, like the Fractional Fourier Transform (FrFT), the Short-Time Fourier Transform (STFT), and the Gabor Transform (GT), make a poor contribution toward solving the problems of Fourier theory described in the Abstract, that is to say, the need for a time-dependent spectrum analysis; the same holds for the wavelet transform in general and the Haar basis in particular.

The ubiquity of QSA in the context of a much larger, modern, and full spectral analysis should be clear by the end of this section.

On the other hand, this section will allow us to better understand the role of QSA as the origin of several tools currently used in Digital Signal Processing (DSP), Digital Image Processing (DIP), Quantum Signal Processing (QSP) and Quantum Image Processing (QIP). Finally, it will be clear why we say that QSA crowns a set of tools that has been insufficient to date.

The Fourier Transform decomposes a function of time (a signal) into the frequencies that make it up, in the same way as a musical chord can be expressed as the amplitudes (or loudness) of its constituent notes. The Fourier transform of a function of time is itself a complex-valued function of frequency, whose absolute value represents the amount of that frequency present in the original function, and whose complex argument is the phase offset of the basic sinusoid at that frequency.

The Fourier transform is called the frequency domain representation of the original signal. The term Fourier transform refers to both the frequency domain representation and the mathematical operation that associates the frequency domain representation to a function of time. The Fourier transform is not limited to functions of time, but in order to have a common language, the domain of the original function is frequently referred to as the time domain. For many functions of practical interest, we can define an operation that reverses this: the inverse Fourier transformation, also called Fourier synthesis of a frequency domain representation, which combines the contributions of all the different frequencies to recover the original function of time [

Linear operations performed in one domain (time or frequency) have corresponding operations in the other domain, which is sometimes easier to perform. The operation of differentiation in the time domain corresponds to multiplication by the frequency, so that some differential equations are easier to analyze in the frequency domain. Also, convolution in the time domain corresponds to an ordinary multiplication in the frequency domain. Concretely, this means that any linear time-invariant system, such as a filter applied to a signal, can be expressed in a relatively simple way as an operation on frequencies. After performing the desired operations, the transformation of the result can be made backwards, towards the time domain. Harmonic analysis is the systematic study of the relationship between the frequency and time domains, including the kinds of functions or operations that are “simpler” in one or the other, and has deep connections to almost all areas of modern mathematics [

Functions that are localized in the time domain have Fourier transforms (FT) that are spread out across the frequency domain and vice versa, a phenomenon that is known as the Uncertainty Principle. The critical case for this principle is the Gaussian function, of substantial importance in probability theory and statistics as well as in the study of physical phenomena exhibiting normal distribution (e.g., diffusion). The FT of a Gaussian function is another Gaussian function. Joseph Fourier introduced the transform in his study of heat transfer where Gaussian functions appear as solutions of the heat equation [

In mathematics, the discrete Fourier transform (DFT) converts a finite list of equally spaced samples of a function into the list of coefficients of a finite combination of complex sinusoids, ordered according to their frequencies. The frequency domain has the same number of values as the original samples of time domain. DFT can be said to convert the sampled function from its original domain (often time or position along a line) to the frequency domain [

Both the input samples (which are complex numbers, although in practice they are usually real) and the output coefficients are complex. The frequencies of the output sinusoids are integer multiples of a fundamental frequency whose corresponding period is the length of the sampling interval. The combination of sinusoids obtained through the DFT is therefore periodic with that same period. The DFT differs from the discrete-time Fourier transform (DTFT) in that its input and output sequences are both finite; it is therefore said to be the Fourier analysis of finite-domain (or periodic) discrete-time functions [

Since it deals with a finite amount of data, it can be implemented in computers by numerical algorithms or even dedicated hardware. These implementations usually employ efficient fast Fourier transform (FFT) algorithms; so much so that the terms “FFT” and “DFT” are often used interchangeably. Prior to its current usage, the “FFT” acronym may have also been used for the ambiguous term “Finite Fourier Transform” [
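The DFT definition can be implemented naively in O(N^{2}) and checked against an optimized FFT routine; a minimal NumPy sketch (the test signal and its size are illustrative):

```python
import numpy as np

def dft(x):
    """Naive O(N^2) DFT: multiply the twiddle-factor matrix by the input."""
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)  # W[k, m] = e^{-2*pi*i*k*m/N}
    return W @ x

# illustrative test signal: 3 cycles of a cosine over 16 samples
x = np.cos(2 * np.pi * 3 * np.arange(16) / 16)
X = dft(x)
# the naive DFT and the FFT compute exactly the same transform
assert np.allclose(X, np.fft.fft(x))
```

As expected, the spectral peak falls in bin 3, matching the 3 cycles of the input.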

No Compact Support If the DFT is the following product, X = Wx, where X is a complex output vector, W is a matrix of complex twiddle factors, and x is the real input vector, then we can see that each element X_{k} of the output vector results from multiplying the kth row of the matrix by the complete input vector; that is to say, each element X_{k} of the output vector contains every element of the input vector. A direct consequence of this is that the DFT spills the energy across its output; in other words, the DFT has a disastrous treatment of the output energy. Therefore, no compact support is equivalent to:

・ DFT has a bad treatment of energy in the output;

・ DFT is not a time-varying transform, but a frequency-varying transform.
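The lack of compact support can be observed numerically: perturbing a single time sample moves every output bin of the DFT. A small NumPy illustration (the signal length and perturbation are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(32)
X = np.fft.fft(x)

x2 = x.copy()
x2[5] += 1.0          # perturb a single time sample
X2 = np.fft.fft(x2)

# every one of the 32 output bins has moved: the DFT has no compact support
changed = np.abs(X2 - X) > 1e-12
assert changed.all()
```

In fact, each bin moves by exactly |e^{−2πi·5k/32}| = 1, which makes the spread of a local change over the whole spectrum explicit.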

Time-domain vs. frequency-domain measurements As we can see in

Both points of view allow us to make an almost complete analysis of the main characteristics of the signal [

Spectral Analysis When the DFT is used for signal spectral analysis, the { x n } sequence usually represents a finite set of uniformly spaced time-samples of some signal x(t), where t represents time. The conversion from continuous time to samples (discrete-time), changes the underlying Fourier transform of x(t) into a discrete-time Fourier transform (DTFT); which generally entails a type of distortion called Aliasing. The choice of an appropriate sample-rate (see Nyquist rate) is the key to minimizing that distortion.

Similarly, the conversion from a very long (or infinite) sequence to a manageable size entails a type of distortion called Leakage, which is manifested as a loss of detail (also known as Resolution) in the DTFT. The choice of an appropriate length for the sub-sequence is the primary key to minimize that effect. When the available data (and the time to process it) is more than the amount needed to attain the desired frequency resolution, a standard technique is to perform multiple DFTs; for example, to create a spectrogram. If the desired result is a power spectrum and noise or randomness is present in the data, calculating the average of the magnitude components of the multiple DFTs is a useful procedure to reduce the variance of the spectrum; (also called a Periodogram in this context). Two examples of such techniques are the Welch method, and the Bartlett method, the general subject of estimating the power spectrum of a noisy signal is called Spectral Estimation.

The DFT itself can also lead to distortion (or perhaps illusion), because it is just a discrete sampling of the DTFT, which is a function of a continuous frequency domain. Increasing the resolution of the DFT can mitigate the problem. That procedure is illustrated by sampling the DTFT [

・ The procedure is sometimes referred to as zero-padding, which is a particular implementation used in conjunction with the fast Fourier transform (FFT) algorithm. The inefficiency of performing multiplications and additions with zero-valued samples is more than offset by the inherent efficiency of the FFT.

・ As already noted, leakage imposes a limit on the inherent resolution of the DTFT. Therefore, benefits obtained from a fine-grained DFT are limited.
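The zero-padding procedure above can be sketched numerically: padding samples the same DTFT on a finer grid, so the spectral peak of an off-bin sinusoid is located more accurately, while leakage still bounds the true resolution. A NumPy illustration (the signal and padding factor are arbitrary):

```python
import numpy as np

N = 64
t = np.arange(N)
x = np.sin(2 * np.pi * 10.3 * t / N)   # frequency falls between DFT bins

X_plain  = np.fft.fft(x)               # N-point DFT: coarse sampling of the DTFT
X_padded = np.fft.fft(x, n=8 * N)      # zero-padded: 8x finer sampling, same DTFT

# the padded grid locates the peak closer to the true normalized frequency 10.3/N
f_plain  = np.argmax(np.abs(X_plain[:N // 2])) / N
f_padded = np.argmax(np.abs(X_padded[:4 * N])) / (8 * N)
assert abs(f_padded - 10.3 / 64) < abs(f_plain - 10.3 / 64)
```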

The most important disadvantages of DFT are summarized below.

Disadvantages:

・ DFT fails at the edges. This is the reason why in the JPEG algorithm (used in image compression), we use the DCT instead of the DFT [

・ As there is no compact support, the element-by-element correspondence between the two domains (time and frequency) is lost on the way to the frequency domain, resulting in a poor treatment of energy.

・ As a consequence of not having compact support, the DFT has no presence in time; in fact, it moves away from the time domain. For this reason, in the last decades, the scientific community has created some palliative tools with better performance in both domains simultaneously, i.e., time and frequency. Such tools are: STFT, GT, and wavelets.

・ DFT has phase uncertainties (indeterminate phase for magnitude = 0) [

・ As it arises from the product of a matrix by a vector, its computational cost is O(N^{2}) for signals (1D), and O(N^{4}) for images (2D).

All this would seem to indicate that it is an inefficient transform; however, there are several advantages which have justified its use over the last two centuries. See [

Fast Fourier Transform FFT inherits all the disadvantages of the DFT, except the computational complexity. In fact, and unlike the DFT, the computational cost of the FFT is O(N*log_{2}N) for signals (1D), and O((N*log_{2}N)^{2}) for images (2D). This is the reason why it is called the fast Fourier transform.

FFT is an algorithm that computes the Discrete Fourier Transform (DFT) of a sequence, or its inverse. Fourier analysis converts a signal from its original domain (often time or space) to the frequency domain and vice versa. An FFT rapidly computes such transformations by factorizing the DFT matrix into a product of sparse (mostly zero) factors [. As a result, it reduces the computational cost from O(N^{2}), which arises if we simply apply the definition of the DFT, to O(N*log_{2}N), where N is the data size. The computational cost of this technique is never greater than that of the conventional approach; in fact, it is usually significantly less. Further, the computational cost is a fairly smooth function of N, so that linear convolutions of sizes somewhat larger than a power of two can still be handled efficiently.

FFT is widely used for many applications in engineering, science, and mathematics. The basic ideas were popularized in 1965; however, some algorithms had been derived as early as 1805 [

In quantum mechanics, the uncertainty principle [

Certain pairs of physical properties, such as position and momentum, cannot be simultaneously measured with arbitrarily high precision: there is a minimum for the product of the uncertainties of these two measurements. First introduced in 1927 by the German physicist Werner Heisenberg, the uncertainty principle states that the more precisely the position of a particle is determined, the less precisely its momentum can be known, and vice versa. The formal inequality relating the uncertainty of energy ΔE and the uncertainty of time Δt was derived by Earle Hesse Kennard later that year and by Hermann Weyl in 1928:

Δ E Δ t ≥ ℏ /2 (1)

where ħ is the reduced Planck constant, h/2π. The energy associated with such a system is

E = ℏ ω (2)

where ω = 2πf, f being the frequency and ω the angular frequency. Then, any uncertainty about ω is transferred to the energy; that is to say:

Δ E = ℏ Δ ω (3)

Replacing Equation (3) into (1), we will have:

ℏ Δ ω Δ t ≥ ℏ / 2 (4)

Finally, simplifying Equation (4), we will have:

Δ ω Δ t ≥ 1 / 2 (5)

Equation (5) tells us that a simultaneous decimation in time and frequency is impossible for the FFT. Therefore, we must make do with decimation in time or in frequency, but not both at once. Linking the last four transforms individually (STFT, GT, FrFT, and WT), each sample in time with its counterpart in frequency in a biunivocal correspondence has represented a futile effort to date. That is to say, they are transforms without compact support, with the exception of the WT, which sometimes has it [
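The bound of Equation (5) can be checked numerically on the critical case mentioned in Section 2: a Gaussian pulse, whose time-frequency spread product attains (approximately, on a discrete grid) the minimum of 1/2. A NumPy sketch (the grid and pulse width are illustrative):

```python
import numpy as np

# time grid and a unit-width Gaussian pulse, the minimum-uncertainty signal
t = np.linspace(-50, 50, 4096)
dt = t[1] - t[0]
s = np.exp(-t**2 / 2)

# angular-frequency grid and spectrum
w = 2 * np.pi * np.fft.fftfreq(len(t), d=dt)
S = np.fft.fft(s)

def spread(axis, density):
    """Standard deviation of an axis weighted by a (non-normalized) density."""
    p = density / density.sum()
    mean = (axis * p).sum()
    return np.sqrt(((axis - mean) ** 2 * p).sum())

dt_spread = spread(t, np.abs(s) ** 2)   # Δt of |s(t)|^2
dw_spread = spread(w, np.abs(S) ** 2)   # Δω of |S(ω)|^2
product = dt_spread * dw_spread         # ~0.5: the Gaussian sits at the bound
```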

In this section, we will see the three main players of quantum information processing: the elemental unit of quantum information or qubit (i.e., a quantum bit), the Schrödinger’s equation, and the quantum measurement problem.

Since Quantum Mechanics is formulated in projective Hilbert space, we need to appeal to the Bloch sphere, see

In this context, the complete wave function will be:

| ψ 〉 = e^{iγ} ( cos(θ/2) | 0 〉 + e^{iϕ} sin(θ/2) | 1 〉 ) = e^{iγ} ( cos(θ/2) | 0 〉 + ( cos ϕ + i sin ϕ ) sin(θ/2) | 1 〉 ) (6)

where 0 ≤ θ ≤ π, 0 ≤ ϕ < 2π. We can ignore the factor e^{iγ} because it has no observable effects [

| ψ 〉 = cos(θ/2) | 0 〉 + e^{iϕ} sin(θ/2) | 1 〉 (7)

The numbers θ and ϕ define a point on the unit three-dimensional Bloch sphere, as shown in

| ψ 〉 = α | 0 〉 + β | 1 〉 . (8)

Besides, a column vector | ψ 〉 = [ α β ]^{T} is called a ket vector, where (•)^{T} means the transpose of (•); while a row vector 〈 ψ | = [ α* β* ] is called a bra vector. The numbers α* and β* are the complex conjugates of the numbers α and β respectively, although for many purposes it does not hurt to think of them as real numbers. In other words, the state of a qubit is a vector in a two-dimensional complex vector space. The special states | 0 〉 and | 1 〉 have a crucial importance in quantum computing; they are known as Computational Basis States (CBS) and form an orthonormal basis for this vector space, being

Spin down = | ↓ 〉 = | 0 〉 = [ 1 0 ] = qubit basis state = North Pole (9)

and

Spin up = | ↑ 〉 = | 1 〉 = [ 0 1 ] = qubit basis state = South Pole (10)

Finally, if the wave function is on the sphere, | ψ 〉 will be a pure state, with,

〈 ψ | ψ 〉 = [ α* β* ] [ α β ]^{T} = | α |^{2} + | β |^{2} = 1 (11)
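Equations (7) and (11) are easy to verify numerically; the sketch below (the Bloch angles are arbitrary) also checks that a diagonal unitary preserves the norm, anticipating Equation (12):

```python
import numpy as np

# arbitrary Bloch-sphere angles (illustrative values)
theta, phi = 0.7, 1.9
psi = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
# <psi|psi> = |alpha|^2 + |beta|^2 = 1: a pure state on the sphere
assert np.isclose(np.vdot(psi, psi).real, 1.0)

# a z-rotation, one example of a unitary operator: U†U = I
w_t = 0.4   # accumulated phase (assumed value)
U = np.diag([np.exp(-1j * w_t / 2), np.exp(1j * w_t / 2)])
assert np.allclose(U.conj().T @ U, np.eye(2))

psi2 = U @ psi
# unitaries preserve lengths, so the evolved state is still normalized
assert np.isclose(np.vdot(psi2, psi2).real, 1.0)
```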

A quantum state can be transformed into another state by a unitary operator, symbolized as U (U: H → H on a Hilbert space H is called a unitary operator if it satisfies U†U = UU† = I, where (•)† is the adjoint of (•), and I is the identity matrix), which is required to preserve inner products: if we transform | χ 〉 and | ψ 〉 to U | χ 〉 and U | ψ 〉, then 〈 χ | U†U | ψ 〉 = 〈 χ | ψ 〉. In particular, unitary operators preserve lengths:

〈 ψ | U † U | ψ 〉 = 〈 ψ | ψ 〉 = 1 , (12)

That is to say, it is equal to Equation (11). Besides, the unitary operator satisfies the following differential equation known as the Schrödinger equation [

d/dt U(t + Δt, t) = − (i Ĥ / ℏ) U(t + Δt, t) (13)

where Ĥ represents the Hamiltonian matrix of the Schrödinger equation, i = (−1)^{1/2}, and ℏ is the reduced Planck constant, i.e., ℏ = h/2π. Multiplying both sides of Equation (13) by | ψ(t) 〉 and setting

| ψ ( t + Δ t ) 〉 = U ( t + Δ t , t ) | ψ ( t ) 〉 (14)

Since U(t + Δt, t) = U(t + Δt − t) = U(Δt) is a unitary transform (operator and matrix), this yields

d/dt | ψ(t) 〉 = − (i Ĥ / ℏ) | ψ(t) 〉 (15)

The Hamiltonian operator represents the total energy of the system and controls the evolution process. In the most general case, the Hamiltonian is formed by kinetic and potential energy. However, if the particle is stationary, the kinetic energy vanishes, leaving only the potential energy, which is the only term linked to the external forces applied to the particle. Thus, the control of the external forces is at the same time the control of the evolution of the states of the system [

σ ⋅ P = ( P_z , P_x − i P_y ; P_x + i P_y , − P_z ) , (16)

where σ = ( σ_x , σ_y , σ_z ) are the Pauli matrices, that is to say:

σ_x = ( 0 , 1 ; 1 , 0 ) , σ_y = ( 0 , − i ; i , 0 ) , σ_z = ( 1 , 0 ; 0 , − 1 ) , (17)

while spin will be,

S = ℏ m_s σ = ℏ m_s ( σ_x , σ_y , σ_z ) . (18)

Then, the Hamiltonian takes the following form,

H = c S ⋅ P / ℏ = c m_s ℏ ( σ ⋅ P ) / ℏ = c m_s ( P_z , P_x − i P_y ; P_x + i P_y , − P_z ) = ℏ m_s Ω (19)

where c is the speed of light; Ω in this case will be:

Ω = ( c / ℏ ) ( P_z , P_x − i P_y ; P_x + i P_y , − P_z ) (20)

Now, if we consider a spatially isotropic and homogeneous Ω and a polarization of spin regarding the z-axis exclusively, thus,

Ω_z = ( c / ℏ ) ( P_z , 0 ; 0 , − P_z ) = ( c / ℏ ) P_z ( 1 , 0 ; 0 , − 1 ) = ( c / ℏ ) P_z σ_z (21)

with

H = ℏ m_s Ω_z = ℏ m_s ( c / ℏ ) P_z σ_z = ℏ m_s ω σ_z (22)

where ω is the angular frequency.

Finally, solving Equation (15) with the Hamiltonian of Equation (22), we will have the solution to the Schrödinger equation given by the matrix exponential of the Hamiltonian matrix, that is to say:

| ψ(t + Δt) 〉 = e^{− i Ĥ Δt / ℏ} | ψ(t) 〉 (if the Hamiltonian is not time-dependent) (23)

or

| ψ(t + Δt) 〉 = e^{− (i/ℏ) ∫_t^{t+Δt} Ĥ dt} | ψ(t) 〉 (if the Hamiltonian is time-dependent) (24)

The discrete versions of Equations (23) and (24) for a time-independent (or time-dependent) Hamiltonian, where k is the discrete time, will be:

| ψ_{k+Δk} 〉 = e^{− i Ĥ Δk / ℏ} | ψ_k 〉 = e^{− i m_s ω_k σ_z Δk} | ψ_k 〉 (if the Hamiltonian is not time-dependent) (25)

and

| ψ_{k+Δk} 〉 = e^{− (i/ℏ) Σ_{k}^{k+Δk} Ĥ_k} | ψ_k 〉 = e^{− i m_s σ_z Σ_{k}^{k+Δk} ω_k} | ψ_k 〉 (if the Hamiltonian is time-dependent) (26)

| ψ_{k+1} 〉 = e^{− i m_s σ_z Σ_{i=1}^{k+1} ω_i} | ψ_0 〉 (with Δk = 1 and starting from the initial state | ψ_0 〉) (27)
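Equations (25)-(27) can be checked with a small numerical experiment: stepping a state with the per-step propagators and comparing against the accumulated-phase closed form. A NumPy sketch (the values of m_s and ω_k are illustrative):

```python
import numpy as np

# Eq. (25) with Δk = 1: |psi_{k+1}> = exp(-i m_s ω_k σ_z) |psi_k>
m_s = 0.5                       # spin projection (assumed value)
omegas = [0.3, 0.5, 0.2]        # per-step angular frequencies (illustrative)

psi = np.array([1, 1]) / np.sqrt(2)    # initial state |psi_0> on the equator
for w in omegas:
    # exponential of the diagonal Hamiltonian -i m_s w σ_z
    U = np.diag(np.exp(-1j * m_s * w * np.array([1.0, -1.0])))
    psi = U @ psi

# accumulated-phase closed form, as in Eqs. (26)-(27)
total = m_s * sum(omegas)
psi_closed = np.array([np.exp(-1j * total), np.exp(1j * total)]) / np.sqrt(2)
assert np.allclose(psi, psi_closed)
```

The step-by-step evolution and the summed-phase form agree, and the state stays normalized throughout, as unitarity demands.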

On the other hand, replacing Equation (22) into Equation (23), we will have another main equation for this paper,

| ψ(t + Δt) 〉 = e^{− i m_s ω(t) σ_z Δt} | ψ(t) 〉 , (28)

and into Equation (24)

| ψ(t + Δt) 〉 = e^{− i m_s σ_z ∫_t^{t+Δt} ω(t) dt} | ψ(t) 〉 . (29)

Finally, considering an incremental approximation of Equation (15), as well as its discrete version, and making the proper replacements from Equation (22), both versions of the Schrödinger equation will take the following forms, respectively,

| Δψ(t + Δt) 〉 / Δt = − i m_s ω(t) σ_z | ψ(t + Δt) 〉 (30)

and

| Δψ_{k+Δk} 〉 = | ψ_{k+Δk+1} − ψ_{k+Δk−1} 〉 / 2 = − i m_s ω_k σ_z | ψ_{k+Δk} 〉 (31)

These last equations will be fundamental in Section 4.

In quantum mechanics, measurement is a non-trivial and highly counter-intuitive process [

Quantum measurements are described by a set of measurement operators { M̂_m }, where the index m labels the different measurement outcomes, which act on the state space of the system being measured. That is to say, measurement outcomes correspond to values of observables, such as position, energy, and momentum, which are Hermitian operators [

p(m) = 〈 ψ | M̂_m† M̂_m | ψ 〉 (32)

and the post-measurement quantum state is

| ψ 〉_{pm} = M̂_m | ψ 〉 / ( 〈 ψ | M̂_m† M̂_m | ψ 〉 )^{1/2} (33)

where the subscript pm means post-measurement. Besides, the operators M̂_m must satisfy the completeness relation of Equation (34), because it guarantees that the probabilities sum to one; see Equation (35) [

Σ_m M̂_m† M̂_m = I (34)

Σ_m 〈 ψ | M̂_m† M̂_m | ψ 〉 = Σ_m p(m) = 1 (35)

Let us illustrate with a simple example. Assume we have a polarized photon with associated polarization orientations ‘horizontal’ and ‘vertical’. The horizontal polarization direction is denoted by | 0 〉 and the vertical polarization direction is denoted by | 1 〉. Therefore, an arbitrary initial state for our photon can be described by the quantum state | ψ 〉 = α | 0 〉 + β | 1 〉 (recalling Subsection 3.1, Equation (8)), where α and β are complex numbers constrained by the very famous normalization condition | α |^{2} + | β |^{2} = 1, and { | 0 〉 , | 1 〉 } is the computational basis (or CBS) spanning H_2. Then, we construct two measurement operators M̂_0 = | 0 〉〈 0 | and M̂_1 = | 1 〉〈 1 |, with two measurement outcomes a_0, a_1. Thus, the full observable used for measurement in this experiment will be the diagonal matrix M̂ = a_0 | 0 〉〈 0 | + a_1 | 1 〉〈 1 |, i.e., the complete matrix. According to the postulate, the probabilities of obtaining outcome a_0 or outcome a_1 are given by p(a_0) = | α |^{2} and p(a_1) = | β |^{2}. The corresponding post-measurement quantum states are as follows: if outcome = a_0, then | ψ 〉_{pm} = | 0 〉; if outcome = a_1, then | ψ 〉_{pm} = | 1 〉.
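The photon example above can be reproduced numerically. The sketch below (the values of α and β are arbitrary) checks the completeness relation (34), the outcome probabilities (32), and the post-measurement collapse (33):

```python
import numpy as np

# photon state |psi> = alpha|0> + beta|1>, with |alpha|^2 + |beta|^2 = 1
alpha, beta = 0.6, 0.8j
psi = np.array([alpha, beta])

M0 = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
M1 = np.array([[0, 0], [0, 1]], dtype=complex)   # |1><1|
# completeness relation (Eq. 34): M0†M0 + M1†M1 = I
assert np.allclose(M0.conj().T @ M0 + M1.conj().T @ M1, np.eye(2))

# p(m) = <psi| Mm† Mm |psi>  (Eq. 32)
p0 = np.vdot(psi, M0.conj().T @ M0 @ psi).real
p1 = np.vdot(psi, M1.conj().T @ M1 @ psi).real
assert np.isclose(p0 + p1, 1.0)                  # probabilities sum to one (Eq. 35)

# post-measurement state for outcome a_0 (Eq. 33): collapse to |0>
psi_pm0 = (M0 @ psi) / np.sqrt(p0)
```

Here p0 = |α|² = 0.36 and p1 = |β|² = 0.64, and the renormalized post-measurement state is exactly | 0 〉, as the postulate predicts.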

This tool plays a main role in the study of quantum entanglement [

Next, we are going to deduce this operator in its continuous and discrete forms. There are several versions of QSA-FIT [

〈 ψ(t) | Δψ(t)/Δt 〉 = − i m_s ω(t) 〈 ψ(t) | σ_z | ψ(t) 〉 (36)

then,

Δω(t) = m_s ω(t) = i 〈 ψ(t) | Δψ(t)/Δt 〉 / 〈 ψ(t) | σ_z | ψ(t) 〉 . (37)

Now, if we multiply both sides of Equation (31) by 〈 ψ k | , we will have:

〈 ψ_k | ψ_{k+1} − ψ_{k−1} 〉 / 2 = − i m_s ω_k 〈 ψ_k | σ_z | ψ_k 〉 (38)

then,

Δω_k = m_s ω_k = i 〈 ψ_k | ψ_{k+1} − ψ_{k−1} 〉 / ( 2 〈 ψ_k | σ_z | ψ_k 〉 ) = i ( 〈 ψ_k | ψ_{k+1} 〉 − 〈 ψ_k | ψ_{k−1} 〉 ) / ( 2 〈 ψ_k | σ_z | ψ_k 〉 ) (39)

That is to say, we are going to have a Δω at each instant of the signal (continuous or discrete, classical or quantum). On the other hand, a very interesting attribute of this operator is that it is not affected by the quantum measurement problem, because its output is a classical scalar, in other words, it can be measured with complete accuracy. In fact, the operator Δω is a hybrid algorithm with quantum and classical parts, as we can see in

Quantum part:

a_k = 〈 ψ_k | ψ_{k+1} 〉 , b_k = 〈 ψ_k | ψ_{k−1} 〉 , c_k = 〈 ψ_k | σ_z | ψ_k 〉 (40)

Classical part:

Δω_k = m_s ω_k = i ( a_k − b_k ) / ( 2 c_k ) (41)
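The hybrid operator of Equations (40) and (41) can be tested on a state precessing about the z-axis, for which Δω_k should recover m_s ω. A NumPy sketch (the spin value, frequency, and polar angle are assumed for illustration; note that the central difference actually returns sin(m_s ω), which approaches m_s ω for small steps):

```python
import numpy as np

sigma_z = np.diag([1.0, -1.0])
m_s, w0, theta = 0.5, 0.1, 1.0   # spin, angular frequency, polar angle (assumed)

def psi(k):
    """Qubit state precessing about z with angular frequency w0 (cf. Eq. 27)."""
    return np.array([np.cos(theta / 2) * np.exp(-1j * m_s * w0 * k),
                     np.sin(theta / 2) * np.exp(+1j * m_s * w0 * k)])

k = 7
# quantum part (Eq. 40): three inner products
a = np.vdot(psi(k), psi(k + 1))
b = np.vdot(psi(k), psi(k - 1))
c = np.vdot(psi(k), sigma_z @ psi(k))

# classical part (Eq. 41): Δω_k = i (a_k - b_k) / (2 c_k)
dw = (1j * (a - b) / (2 * c)).real

# the central difference recovers m_s*ω up to third order in (m_s ω)
assert np.isclose(dw, np.sin(m_s * w0))
assert abs(dw - m_s * w0) < 1e-4
```

As the text notes, the output Δω_k is a classical scalar, so this quantity is not affected by the measurement problem.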

Finally, for all mentioned cases, that is to say, continuous or discrete, classical or quantum signals, the bandwidth BW will result from the difference between the maximum and the minimum frequency of such signal,

BW = f_max − f_min = ( 1 / 2π ) ( Δω_max − Δω_min ) (42)

In no other way is the application of QSA-FIT more conspicuous than in this case. There are several versions of, and ways to apply, QSA-FIT to a classical signal [

ω(t) = ( η / s(t) ) ds(t)/dt , (43)

where s(t) is the signal and η is an adjustment factor, while the discrete version will be:

ω_k = ( η / s_k ) ( s_{k+1} − s_{k−1} ) / 2 . (44)

The problem with Equations (43) and (44) consists in the indeterminacy of Δω when the signal is null at a given instant. Then, we will use a modified version of the signal called baseline-less (BLL), which consists of,

ω(t) = ( 1 / s_{BLL} ) ds(t)/dt , (45)

with η = 1, where,

s_{BLL} = ( s_max − s_min ) / 2 , (46)

then,

ω(t) = ( 1 / ( ( s_max − s_min ) / 2 ) ) ds(t)/dt , (47)

with,

f_max = ( 1 / 2π ) ( 1 / ( ( s_max − s_min ) / 2 ) ) ( ds(t)/dt )_max = ( ds(t)/dt )_max / ( π ( s_max − s_min ) ) , (48)

and,

f_min = ( 1 / 2π ) ( 1 / ( ( s_max − s_min ) / 2 ) ) ( ds(t)/dt )_min = ( ds(t)/dt )_min / ( π ( s_max − s_min ) ) . (49)

Now, if we consider a signal like

s ( t ) = A cos ( ω t + φ ) + B , (50)

where A is the amplitude, φ is the phase, and B is the baseline, with,

ds(t)/dt = − A ω sin( ω t + φ ) , (51)

then,

s_max = A + B , s_min = − A + B (52)

Now, replacing Equations (51) and (52) into (47), we will have:

ω(t) = ( 1 / ( ( ( A + B ) − ( − A + B ) ) / 2 ) ) ( − A ω sin( ω t + φ ) ) = − ω sin( ω t + φ ) , (53)

in green in

f_max = ω / 2π = 2πf / 2π = f , f_min = − ω / 2π = − 2πf / 2π = − f (54)

So, replacing Equation (54) into (42), we will have:

B W = f max − f min = f − ( − f ) = 2 f . (55)

This result can be seen in the lower part of
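The derivation of Equations (50)-(55) can be confirmed numerically by applying the discrete FIT with the baseline-less normalization to a sampled cosine and recovering BW = 2f. A NumPy sketch (amplitude, baseline, phase, and frequency are illustrative):

```python
import numpy as np

# sampled cosine s(t) = A cos(w t + phi) + B, as in Eq. (50)
A, B, phi, f = 2.0, 1.0, 0.3, 5.0
w = 2 * np.pi * f
t = np.linspace(0, 1, 10001)
dt = t[1] - t[0]
s = A * np.cos(w * t + phi) + B

# baseline-less normalization: s_BLL = (s_max - s_min)/2 = A  (Eq. 46)
s_bll = (s.max() - s.min()) / 2
ds = np.gradient(s, dt)          # central-difference derivative
omega_t = ds / s_bll             # Eq. (47): ω(t) = -ω sin(ωt + φ)

f_max = omega_t.max() / (2 * np.pi)   # ≈ +f  (Eq. 54)
f_min = omega_t.min() / (2 * np.pi)   # ≈ -f  (Eq. 54)
bw = f_max - f_min                    # ≈ 2f  (Eq. 55)
```

With f = 5, the measured bandwidth comes out close to 10, i.e., BW = 2f, independently of the baseline B.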

s ( t ) = A g a t e ( ω t + φ ) + B , (56)

where A is the amplitude, φ is the phase, and B is the baseline; with,

ds(t)/dt = A ω d gate( ω t + φ )/dt , (57)

where the derivative of the gate can have only 3 possible values,

ds(t)/dt = { A ω ∞ , 0 , − A ω ∞ } (58)

then, if,

s_max = A + B , s_min = − A + B (59)

So far, we have obtained similar results to the previous case in relation to s_{max} and s_{min}, however, the true difference is in everything related to the derivative. In this case, the perfect gate takes values ± ∞ . Now, replacing Equations (58) and (59) into (47), we will have:

ω(t) = ( 1 / ( ( ( A + B ) − ( − A + B ) ) / 2 ) ) { A ω ∞ , 0 , − A ω ∞ } = { ω ∞ , 0 , − ω ∞ } (60)

in green in

f_max = ω ∞ / 2π = 2πf ∞ / 2π = f ∞ = ∞ , f_min = − ω ∞ / 2π = − 2πf ∞ / 2π = − f ∞ = − ∞ (61)

Then, replacing Equation (61) into (42), we will have:

B W = f max − f min = ∞ − ( − ∞ ) = 2 ∞ = ∞ (62)

Quantum Information Processing has two fundamental tools permanently used in Quantum Computing and Communications: the Principle of Superposition and Quantum Entanglement [

| Ψ A B 〉 ≠ | ψ A 〉 ⊗ | ψ B 〉 (63)

where “⊗” indicates the Kronecker product (also known as a tensor product), while | ψ_A 〉 and | ψ_B 〉 are vectors providing the states of both subsystems, such as elementary particles [. Consider the bases { | u_A 〉 } for H_A and { | v_B 〉 } for H_B. In H_A ⊗ H_B, the most general state is of the form:

| Ψ_AB 〉 = Σ_{u,v} r_{uv} | u_A 〉 ⊗ | v_B 〉 . (64)

This state is separable if there exist vectors [ r_u^A ], [ r_v^B ] such that r_{uv} = r_u^A r_v^B, yielding | ψ_A 〉 = Σ_u r_u^A | u_A 〉 and | ψ_B 〉 = Σ_v r_v^B | v_B 〉. It is inseparable if, for any pair of vectors [ r_u^A ], [ r_v^B ], there is at least one pair of coordinates r_u^A, r_v^B for which r_{uv} ≠ r_u^A r_v^B. If a state is inseparable, it is called an entangled state.

Moreover, in 1935 Albert Einstein, Boris Podolsky and Nathan Rosen (EPR) suggested a thought experiment by which they tried to demonstrate that the wave function did not provide a complete description of physical reality (which gives rise to the famous EPR paradox); and hence that the Copenhagen interpretation is unsatisfactory. Resolutions of the paradox have important implications for the interpretation of quantum mechanics [

On the other hand, in 1964 John S. Bell introduced his famous theorem [

| Φ±_AB 〉 = ( 1/√2 ) ( | 0_A , 0_B 〉 ± | 1_A , 1_B 〉 ) , | Ψ±_AB 〉 = ( 1/√2 ) ( | 0_A , 1_B 〉 ± | 1_A , 0_B 〉 ) . (65)

They are called Bell’s states, and are also known as EPR pairs. This theorem raises an inequality which, when violated by quantum mechanics, establishes the non-locality present in the entanglement of two subsystems like A and B. Besides, a later reformulation of this inequality, due to Clauser, Horne, Shimony, and Holt (CHSH), leads to a form more conducive to experimental testing [

As we can see in Equation (65), each Bell state has two components. In particular, one of the components of | Φ+_AB 〉 is | 0_A , 0_B 〉 = | 00 〉, while the other one is | 1_A , 1_B 〉 = | 11 〉. Applying Equation (30) to each component individually, we can calculate the spectral analysis thanks to the QSA-FIT operator:

d| 00 〉/dt = ( − i m_{|00〉} ω ) [ σ_z .⊕ σ_z ] | 00 〉 . (66)

We need to use a new operator “.⊕” (which is easy to generalize) on the Pauli matrix σ_z of Equation (17); this new operator is the only substantial difference between Equations (30) and (66), and it accounts for the dimensional difference between the two equations. So that, if

A = ( a_{11} , a_{12} ; a_{21} , a_{22} ) , and B = ( b_{11} , b_{12} ; b_{21} , b_{22} ) ,

therefore,

A .⊕ B = ( a_{11} , a_{12} ; a_{21} , a_{22} ) .⊕ ( b_{11} , b_{12} ; b_{21} , b_{22} ) = ( A + b_{11} , A + b_{12} ; A + b_{21} , A + b_{22} ) = ( a_{11}+b_{11} , a_{12}+b_{11} , a_{11}+b_{12} , a_{12}+b_{12} ; a_{21}+b_{11} , a_{22}+b_{11} , a_{21}+b_{12} , a_{22}+b_{12} ; a_{11}+b_{21} , a_{12}+b_{21} , a_{11}+b_{22} , a_{12}+b_{22} ; a_{21}+b_{21} , a_{22}+b_{21} , a_{21}+b_{22} , a_{22}+b_{22} ) (67)

Now, applying the new operator to the Pauli matrices,

$$\sigma_z \mathbin{.\oplus} \sigma_z = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \mathbin{.\oplus} \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} = \begin{bmatrix} \sigma_z + 1 & \sigma_z + 0 \\ \sigma_z + 0 & \sigma_z - 1 \end{bmatrix} = \begin{bmatrix} 2 & 1 & 1 & 0 \\ 1 & 0 & 0 & -1 \\ 1 & 0 & 0 & -1 \\ 0 & -1 & -1 & -2 \end{bmatrix} \quad (68)$$
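The "$.\oplus$" operation of Equation (67) is straightforward to implement; the following minimal NumPy sketch (the function name `dot_oplus` is ours) reproduces the matrix of Equation (68):

```python
import numpy as np

def dot_oplus(A, B):
    """The '.(+)' operator of Equation (67): a block matrix whose
    (i, j) block is A + b_ij."""
    # kron(ones, A) repeats A in every block, while kron(B, ones)
    # fills each block with the corresponding scalar b_ij.
    return np.kron(np.ones(B.shape), A) + np.kron(B, np.ones(A.shape))

sigma_z = np.array([[1, 0], [0, -1]])
M = dot_oplus(sigma_z, sigma_z)

# Matches Equation (68)
expected = np.array([[ 2,  1,  1,  0],
                     [ 1,  0,  0, -1],
                     [ 1,  0,  0, -1],
                     [ 0, -1, -1, -2]])
assert np.array_equal(M, expected)

# And <00| sigma_z .(+) sigma_z |00> = 2, the denominator of Eq. (70)
k00 = np.array([1, 0, 0, 0])
assert k00 @ M @ k00 == 2
```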

Then, if we multiply both sides of Equation (66) by $\langle 00|$,

$$\langle 00 \,|\, d00/dt \rangle = \left(-i\, m_{|00\rangle}\, \omega\right) \langle 00 |\, \sigma_z \mathbin{.\oplus} \sigma_z \,| 00 \rangle. \quad (69)$$

Then, if $m_{|00\rangle} = +1$ for photons,

$$\Delta\omega_{\max} = \Delta\omega_{|00\rangle} = m_{|00\rangle}\,\omega = \omega = \frac{i\,\langle 00 \,|\, d00/dt \rangle}{\langle 00 |\, \sigma_z \mathbin{.\oplus} \sigma_z \,| 00 \rangle}. \quad (70)$$

That is, Equations (37) and (70) coincide in form and, independently of the rightmost term of Equation (70), it is clear that the spectral analysis for its counterpart with $m_{|11\rangle} = -1$ will be:

$$\Delta\omega_{\min} = \Delta\omega_{|11\rangle} = m_{|11\rangle}\,\omega = -\omega. \quad (71)$$

Then, the bandwidth of the original entangled spins will be:

$$BW_{\text{original}} = \frac{1}{2\pi}\left(\Delta\omega_{\max} - \Delta\omega_{\min}\right) = \frac{1}{2\pi}\left(\omega - (-\omega)\right) = \frac{2\omega}{2\pi} = \frac{\omega}{\pi}. \quad (72)$$

That is to say, the bandwidth of the link between the original spins is finite.

Another important concept regarding QSA-FIT arises from Equation (37). That equation shows the trade-off between $\Delta t$ and $\Delta\omega$, through which a change in one produces a change in the other; that is to say, this functional dependence is interchangeable. This strong trade-off ensures the projection of QSA-FIT onto elements as important to quantum physics as quantum entanglement [

$$\Delta\omega\,\Delta t = \frac{i\,\langle \psi(t) \,|\, \Delta\psi(t) \rangle}{\langle \psi(t) |\, \sigma_z \,| \psi(t) \rangle} \quad (73)$$

Now, if we divide the right-hand side of the equality by 2 and take its modulus,

$$\Delta\omega\,\Delta t = \frac{1}{2}\left| \frac{i\,\langle \psi(t) \,|\, \Delta\psi(t) \rangle}{\langle \psi(t) |\, \sigma_z \,| \psi(t) \rangle} \right| \quad (74)$$

Therefore, the trade-off becomes,

$$\Delta\omega\,\Delta t = \frac{1}{2}\left| \frac{i\,\langle \psi(t) \,|\, \Delta\psi(t) \rangle}{\langle \psi(t) |\, \sigma_z \,| \psi(t) \rangle} \right| = \frac{1}{2}\,|i|\,\left| \frac{\langle \psi(t) \,|\, \Delta\psi(t) \rangle}{\langle \psi(t) |\, \sigma_z \,| \psi(t) \rangle} \right| = \frac{1}{2}\left\{ (i)^{*}(i) \right\}^{1/2} = \frac{1}{2}\left\{ (-i)(i) \right\}^{1/2} \geq \frac{1}{2} \quad (75)$$

Although Equation (75) is similar to Equation (5) of the Fourier uncertainty principle from Subsection 2.4, the concept here is completely different: while in the FFT Equation (5) tells us that simultaneous decimation in time and frequency is impossible, in QSA-FIT this trade-off means that the shorter the change in time in the state of a signal, the higher the spectral tone that represents that change.

Let us now present QSA as a procedure. Given a stream (or time sequence) of quantum states $\{\,|\psi_0\rangle, |\psi_1\rangle, |\psi_2\rangle, \cdots, |\psi_{N-1}\rangle\,\}$, we will do:

1) If the time sequence is cyclic, then we will use the modified time sequence,

$$\{\, |\psi_{N-1}\rangle, |\psi_0\rangle, |\psi_1\rangle, |\psi_2\rangle, \cdots, |\psi_{N-1}\rangle, |\psi_0\rangle \,\}$$

else, we will use the modified time sequence based on a $|0\rangle$-padding criterion,

$$\{\, |0\rangle, |\psi_0\rangle, |\psi_1\rangle, |\psi_2\rangle, \cdots, |\psi_{N-1}\rangle, |0\rangle \,\}$$

end-if

2) Applying the QSA-FIT operator of Equation (30) to the modified time sequence, we will obtain

$$\{\, \Delta\omega_0, \Delta\omega_1, \Delta\omega_2, \cdots, \Delta\omega_{N-1} \,\}$$

3) Finally, carrying out classical measurements (with all the required precision), we will obtain,

$$\{\, \Delta\omega_0, \Delta\omega_1, \Delta\omega_2, \cdots, \Delta\omega_{N-1} \,\}.$$

Clearly, the sketch of
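The steps above can be sketched in a few lines of Python. This is our own rendering, not code from the original work: the per-sample frequency follows the form of Equation (73), the function name is an assumption, and a guard is added because the denominator $\langle\psi|\sigma_z|\psi\rangle$ vanishes for balanced superpositions:

```python
import numpy as np

SIGMA_Z = np.array([[1, 0], [0, -1]], dtype=complex)

def qsa_fit(states, cyclic=False):
    """Sketch of the QSA procedure: one Delta_omega per quantum state.
    `states` is a sequence of 2-component qubit state vectors."""
    states = [np.asarray(s, dtype=complex) for s in states]
    # Step 1: pad the sequence (cyclic wrap-around or |0>-padding)
    if cyclic:
        seq = [states[-1]] + states + [states[0]]
    else:
        zero = np.array([1, 0], dtype=complex)
        seq = [zero] + states + [zero]
    # Step 2: per-sample Delta_omega following the form of Eq. (73)
    deltas = []
    for k in range(1, len(seq) - 1):
        psi, dpsi = seq[k], seq[k] - seq[k - 1]
        num = 1j * np.vdot(psi, dpsi)
        denom = np.vdot(psi, SIGMA_Z @ psi)
        # Guard: the denominator vanishes for balanced superpositions
        deltas.append(abs(num / denom) if abs(denom) > 1e-12 else np.inf)
    # Step 3 (classical measurement) would read these values out
    return deltas

# Example: a slowly rotating real state on the Bloch sphere
thetas = np.linspace(0.0, 0.4, 8)
sig = [np.array([np.cos(t), np.sin(t)]) for t in thetas]
dw = qsa_fit(sig)
assert len(dw) == len(sig) and all(d >= 0 for d in dw)
```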

In this section, we present two simulations that expose the full potential of the new tool. If we wanted to do the same through the Quantum Fourier Transform (QFT), we would face the serious inconvenience that it loses the direct, one-to-one relationship with time, since the QFT (as mentioned above) does not have an important attribute of functional analysis called compact support. This dysfunctionality is inherited from its classical counterpart, the FFT, and has dire consequences when trying to spectrally analyze a signal formed by quantum states arriving as a quantum stream.

Therefore, we have prepared two simulations of very different characteristics. The first one has a circular temporal transition between samples (quantum states), as we can see on the left of the corresponding figure; the resulting differences are on the order of 10^{−15} hertz, which is absolutely reasonable considering the type of signal.

The second simulation consists of a very different type of signal from the first. In this case, we have chosen a sequence with a completely random temporal transition in the orientation of successive spins inside the Bloch sphere; i.e., each quantum state in the quantum signal makes a sudden jump in direction with respect to its predecessor and its successor, without the slightest commitment to a typical functional relation. In fact, both α (in red) and β (in blue) follow a random sequence of Gaussian distribution with null mean value. See the left-hand side of
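A random quantum signal of this kind can be generated as follows. This is a hedged sketch of ours: the text only specifies Gaussian α and β with null mean, so the normalization of each state (required for a valid qubit) and the seed are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256

# Gaussian alpha and beta with null mean, as described in the text
alpha = rng.normal(0.0, 1.0, N)
beta = rng.normal(0.0, 1.0, N)

# Normalize each state so |alpha|^2 + |beta|^2 = 1 (our assumption:
# the paper does not detail how the simulated states are normalized)
norm = np.sqrt(alpha**2 + beta**2)
alpha, beta = alpha / norm, beta / norm

# One qubit state per row: |psi_k> = alpha_k |0> + beta_k |1>
states = np.stack([alpha, beta], axis=1).astype(complex)
assert np.allclose(np.sum(np.abs(states)**2, axis=1), 1.0)
```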

This work began with an extensive tour of traditional spectral techniques based on Fourier theory, which lack compact support and are completely disconnected from the link between time and frequency (this analysis included the wavelet transform, which sometimes has compact support), as well as the contribution of each flank to the final spectral components of a signal, as we can see in Section 4.2. Besides, these attributes extend to image and video [

Specifically, and as we have seen, the FFT does not have compact support, so we say that the FFT is a non-local process, whereas FIT has compact support, so we say that FIT is a local process, with all that this implies when we apply the tool to the study of quantum entanglement. It is worth mentioning that FIT is an important tool for assessing the importance of the flanks (or edges, in the case of images) in a compression process, weighting in real time and sample by sample (or pixel by pixel) the importance of temporal spectral components in the final result [

| Characteristics | FFT | FIT |
|---|---|---|
| Separability | Yes | Yes |
| Compact support | No | Yes |
| Instantaneous spectral attributes | No | Yes |
| 1D computational cost | O(N*log_{2}(N)) | O(N) |
| 2D computational cost | O(N^{2}*log_{2}(N)^{2}) | O(N^{2}) |
| Energy treatment | Disastrous | Excellent |
| Decimation | In time or frequency | Not required |
| Parallelization | No | Yes |

On the other hand, and considering that when the wave function collapses we pass from QSA to FIT, it is worth noting that FIT has clear applications for a better understanding of Information Theory and Quantum Information Theory, in particular Quantum Signal and Image Processing, Quantum Communications and, fundamentally, quantum entanglement. In fact, a finite bandwidth for entanglement is not a trivial or accessory subject at all. If we take Equation (72) into account, the finite bandwidth follows from a procedure based on the individual components of the Bell basis state $|\Phi_{AB}^{+}\rangle$, and this fact is consistent with the possible values (and especially the signs) that the spin $m_s$ can take, i.e., positive and negative for $m_{|00\rangle}$ and $m_{|11\rangle}$, respectively. The pending task is to delve deeper into the link between this new tool, QSA, and entanglement.

M. Mastriani thanks all the technical staff of several laboratories of the National Commission of Atomic Energy for the help they gave him in the preparation of the experiments. It is impossible to name them all here, simply, thank you all.

Mastriani, M. (2018) Quantum-Classical Algorithm for an Instantaneous Spectral Analysis of Signals: A Complement to Fourier Theory. Journal of Quantum Information Science, 8, 52-77. https://doi.org/10.4236/jqis.2018.82005