1. Introduction
1.1. Motivation
Designing unimodular waveforms with an impulse-like autocorrelation is central to the general area of waveform design, and it is particularly relevant in several applications in radar and communications. In radar, such waveforms play a role in effective target recognition, e.g., [1-8]; in communications they are used to address synchronization issues in cellular (phone) access technologies, especially code division multiple access (CDMA), e.g., [9,10]. The radar and communications uses combine in recent advanced multifunction RF systems (AMRFS). In radar there are two main reasons that the waveforms should be unimodular, that is, have constant amplitude. First, a transmitter can operate at peak power if the signal has constant peak amplitude, so the system does not have to accommodate larger-than-expected amplitudes. Second, amplitude variations during transmission due to additive noise can, in principle, be eliminated. The zero autocorrelation property ensures minimum interference between signals sharing the same channel.
Constructing unimodular waveforms with zero autocorrelation can be related to fundamental questions in harmonic analysis as follows. Let $\mathbb{R}$ be the real numbers, $\mathbb{Z}$ the integers, $\mathbb{C}$ the complex numbers, and set $\mathbb{T} = \mathbb{R}/\mathbb{Z}$. The aperiodic autocorrelation $A_X$ of a waveform $X : \mathbb{Z} \to \mathbb{C}$ is defined as

$$A_X(k) = \lim_{K \to \infty} \frac{1}{2K+1} \sum_{m=-K}^{K} X(m+k)\,\overline{X(m)}, \quad k \in \mathbb{Z}. \qquad (1)$$
A general problem is to characterize the family of positive bounded Radon measures $F$ whose inverse Fourier transforms are the autocorrelations of bounded waveforms $X$. A special case is when $F \equiv 1$ on $\mathbb{T}$ and $X$ is unimodular on $\mathbb{Z}$. This is the same as when the autocorrelation of $X$ vanishes except at $0$, where it takes the value $1$. In this case, $X$ is said to have perfect autocorrelation. An extensive discussion on the construction of different classes of deterministic waveforms with perfect autocorrelation can be found in [11]. Instead of aperiodic waveforms defined on $\mathbb{Z}$, in some applications it might be useful to construct periodic waveforms with similar vanishing properties of the autocorrelation function. Let $n \ge 1$ be an integer and $\mathbb{Z}/n\mathbb{Z}$ be the finite group $\{0, 1, \ldots, n-1\}$ with addition modulo $n$. The periodic autocorrelation $A_x$ of a waveform $x : \mathbb{Z}/n\mathbb{Z} \to \mathbb{C}$ is defined as

$$A_x(k) = \frac{1}{n} \sum_{m=0}^{n-1} x(m+k)\,\overline{x(m)}, \quad k \in \mathbb{Z}/n\mathbb{Z}. \qquad (2)$$
It is said that $x : \mathbb{Z}/n\mathbb{Z} \to \mathbb{C}$ is a constant amplitude zero autocorrelation (CAZAC) waveform if each $|x(m)| = 1$ and

$$A_x(k) = 0 \quad \text{for } k = 1, \ldots, n-1.$$
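For readers who want to check these definitions numerically, the following sketch (not part of the paper) computes the periodic autocorrelation (2) of a unimodular sequence; the quadratic-phase (Zadoff–Chu type) test sequence is a hypothetical example chosen only to exercise the CAZAC property.

```python
import numpy as np

def periodic_autocorrelation(x):
    """Periodic autocorrelation A_x(k) = (1/n) sum_m x(m+k) conj(x(m)), as in (2)."""
    n = len(x)
    return np.array([np.mean(np.roll(x, -k) * np.conj(x)) for k in range(n)])

# Hypothetical test sequence (not from the paper): a Zadoff-Chu type sequence of odd length.
n = 63
m = np.arange(n)
x = np.exp(-1j * np.pi * m * (m + 1) / n)

A = periodic_autocorrelation(x)
print(np.allclose(np.abs(x), 1.0))     # constant amplitude
print(abs(A[0]))                       # equals 1 at the origin
print(np.max(np.abs(A[1:])))           # ~ 0 away from the origin
```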
The literature on CAZACs is extensive. A good reference on this topic is [3], among many others. Literature on the general area of waveform design includes [12-14]. A comparison between periodic and aperiodic autocorrelation can be found in [15].
Here the focus is on the construction of stochastic aperiodic waveforms. Henceforth, references to waveforms shall mean aperiodic waveforms unless stated otherwise. These waveforms are stochastic in nature and are constructed from certain random variables. Due to the stochastic nature of the construction, the expected value of the corresponding autocorrelation function is analyzed. It is desired that, everywhere away from zero, the expectation of the autocorrelation can be made arbitrarily small. Such waveforms will be said to have almost perfect autocorrelation and will be called zero autocorrelation stochastic waveforms. First, discrete waveforms $X : \mathbb{Z} \to \mathbb{C}$ are constructed such that $X$ has almost perfect autocorrelation and $|X(k)| = 1$ for all $k \in \mathbb{Z}$. This approach is extended to the construction of continuous waveforms $x : \mathbb{R} \to \mathbb{C}$ with similar spike-like behavior of the expected autocorrelation and $|x(t)| = 1$ for all $t \in \mathbb{R}$. Thus, these waveforms are unimodular. The stochastic and non-repetitive nature of these waveforms means that they cannot be easily intercepted or detected by an adversary. Previous work on the use of stochastic waveforms in radar can be found in [16-18], where the waveforms are only real-valued and not unimodular. In comparison, the waveforms constructed here are complex-valued and unimodular. In addition, the properties of frames constructed from these stochastic waveforms are discussed. This is motivated by the fact that frames have become a standard tool in signal processing. Previously, a mathematical characterization of CAZACs in terms of finite unit-normed tight frames (FUNTFs) was given in [2].
1.2. Notation and Mathematical Background
Let $X$ be a random variable with probability density function $f_X$. Assuming $X$ to be absolutely continuous, the expectation of $X$, denoted by $E[X]$, is

$$E[X] = \int_{-\infty}^{\infty} x\, f_X(x)\, dx.$$

The Gaussian random variable has probability density function given by

$$f_X(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}.$$

The mean or expectation of this random variable is $\mu$ and the variance, $\mathrm{Var}(X)$, is $\sigma^2$. In this case it is also said that $X$ follows a normal distribution and is written as $X \sim N(\mu, \sigma^2)$. The characteristic function of $X$ at $t$ is denoted by $\phi_X(t)$. For further properties of the expectation and characteristic function of a random variable the reader is referred to [19].
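As a quick numerical illustration of this notation (a sketch, not from the paper), the following compares a Monte Carlo estimate of the characteristic function of a mean-zero Gaussian with the closed form $e^{-\sigma^2 t^2/2}$, which is the fact used in Section 2.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.5
samples = rng.normal(0.0, sigma, size=200_000)      # X ~ N(0, sigma^2)

for t in [0.5, 1.0, 2.0]:
    empirical = np.mean(np.exp(1j * t * samples))    # Monte Carlo estimate of E[e^{itX}]
    closed_form = np.exp(-sigma**2 * t**2 / 2)       # characteristic function of N(0, sigma^2)
    print(t, empirical.real, closed_form)
```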
Let $\mathbb{H}$ be a Hilbert space and let $\{x_i\}_{i \in I}$, where $I$ is some index set, be a collection of vectors in $\mathbb{H}$. Then $\{x_i\}_{i \in I}$ is said to be a frame for $\mathbb{H}$ if there exist constants $A > 0$ and $B < \infty$ such that for any $x \in \mathbb{H}$,

$$A\|x\|^2 \le \sum_{i \in I} |\langle x, x_i\rangle|^2 \le B\|x\|^2.$$

The constants A and B are called the frame bounds. Thus a frame can be thought of as a redundant basis. In fact, for a finite dimensional vector space, a frame is the same as a spanning set. If $A = B$ the frame is said to be tight. Orthonormal bases are special cases of tight frames and for these, $A = B = 1$.
If $\{x_i\}_{i \in I}$ is a frame for $\mathbb{H}$ then the map $F : \mathbb{H} \to \ell^2(I)$ given by $Fx = \{\langle x, x_i\rangle\}_{i \in I}$ is called the analysis operator. The synthesis operator is the adjoint map $F^* : \ell^2(I) \to \mathbb{H}$ given by

$$F^*\big(\{c_i\}_{i \in I}\big) = \sum_{i \in I} c_i x_i.$$

The frame operator $S = F^*F$ is given by $Sx = \sum_{i \in I} \langle x, x_i\rangle x_i$. For a tight frame, the frame operator is just a constant multiple of the identity, i.e., $S = A\,\mathrm{Id}$, where $\mathrm{Id}$ is the identity map. Every $x \in \mathbb{H}$ can be represented as

$$x = \sum_{i \in I} \langle x, S^{-1}x_i\rangle\, x_i.$$

Here $\{S^{-1}x_i\}_{i \in I}$ is also a frame and is called the dual frame. For a tight frame, $S^{-1}x_i$ is just $\frac{1}{A}x_i$. Tight frames are thus highly desirable since they offer a computationally simple reconstruction formula that does not involve inverting the frame operator. The minimum and maximum eigenvalues of $S$ are the optimal lower and upper frame bounds, respectively [20]. Thus, for a tight frame all the eigenvalues of the frame operator are equal to each other. For the general theory on frames one can refer to [20,21].
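The following short sketch (not from the paper) illustrates these objects in a finite-dimensional setting: it builds the analysis operator of a small frame in $\mathbb{C}^d$, reads off the optimal frame bounds from the eigenvalues of the frame operator, and verifies the dual-frame reconstruction formula.

```python
import numpy as np

rng = np.random.default_rng(1)
d, M = 3, 7                                      # dimension and number of frame vectors
vecs = rng.normal(size=(M, d)) + 1j * rng.normal(size=(M, d))  # rows x_i span C^d a.s.

F = np.conj(vecs)                                # analysis operator: (Fx)_i = <x, x_i>
S = F.conj().T @ F                               # frame operator S = F*F (d x d, Hermitian)

eig = np.linalg.eigvalsh(S)
print("optimal frame bounds A, B:", eig.min(), eig.max())

x = rng.normal(size=d) + 1j * rng.normal(size=d)
dual = np.linalg.solve(S, vecs.T).T              # dual frame vectors S^{-1} x_i
x_rec = sum(np.vdot(dual[i], x) * vecs[i] for i in range(M))  # sum <x, S^{-1}x_i> x_i
print("reconstruction matches:", np.allclose(x, x_rec))
```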
1.3. Outline
The construction of discrete unimodular stochastic waveforms $X : \mathbb{Z} \to \mathbb{C}$ with almost perfect autocorrelation is done in Section 2. This is first done with the Gaussian random variable and then generalized to other random variables. The variance of the autocorrelation is also estimated. The section also addresses the construction of stochastic waveforms in higher dimensions, i.e., the construction of unit-normed waveforms taking values in $\mathbb{C}^d$ that have almost perfect autocorrelation, considering the usual norm in $\mathbb{C}^d$. In Section 3 the construction of unimodular continuous waveforms with almost perfect autocorrelation is done using Brownian motion.
As mentioned in Section 1.2, frames are now a standard tool in signal processing due to their effectiveness in robust signal transmission and reconstruction. In Section 4, frames in $\mathbb{C}^d$ are constructed from the discrete waveforms of Section 2 and the nature of these frames is analyzed. In particular, the maximum and minimum eigenvalues of the frame operator are estimated. This helps one understand how close these frames are to being tight. Moreover, it follows from the eigenvalue estimates that the matrix of the analysis operator, F, for such frames can be used as a sensing matrix in compressed sensing.
2. Construction of Discrete Stochastic Waveforms
In this section discrete unimodular waveforms $X : \mathbb{Z} \to \mathbb{C}$ are constructed from random variables such that the expectation of the autocorrelation can be made arbitrarily small everywhere except at the origin. First, such a construction is done using the Gaussian random variable. Next, a general characterization of all random variables that can be used for the purpose is given.
2.1. Construction from Gaussian Random Variables
Let $\{X_n\}_{n \in \mathbb{Z}}$ be independent identically distributed (i.i.d.) random variables following a Gaussian or normal distribution with mean $0$ and variance $\sigma^2$, i.e., $X_n \sim N(0, \sigma^2)$. Define $X : \mathbb{Z} \to \mathbb{C}$ by
(3)
where $i$ is $\sqrt{-1}$. Thus, for each $n \in \mathbb{Z}$, $|X(n)| = 1$ and $X$ is unimodular. The autocorrelation of $X$ at $k \in \mathbb{Z}$ is
![](https://www.scirp.org/html/13-5300249\300a68f2-3488-422b-9234-46cad4031f97.jpg)
where the limit is in the sense of probability. Theorem 2.1 shows that the waveform given by (3) has autocorrelation whose expectation can be made arbitrarily small for all integers $k \neq 0$.
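The exact formula (3) is not reproduced above, so the following simulation sketch assumes the natural reading in which the i.i.d. Gaussian phases are exponentiated directly, $X(n) = e^{iX_n}$ with $X_n \sim N(0, \sigma^2)$ (any rescaling of the phases only changes the effective $\sigma$). Under that assumption, $E[X(n+k)\overline{X(n)}] = e^{-\sigma^2}$ for $k \neq 0$, which the finite-window estimate below should match.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 1.0
N = 200_000                                   # finite window standing in for the limit in (1)
phases = rng.normal(0.0, sigma, size=N)
X = np.exp(1j * phases)                       # assumed construction: X(n) = e^{i X_n}, unimodular

def empirical_autocorrelation(X, k):
    """Finite-window estimate of A_X(k) = lim (1/N) sum_n X(n+k) conj(X(n))."""
    return np.mean(X[k:] * np.conj(X[:-k])) if k > 0 else np.mean(np.abs(X) ** 2)

print("A_X(0) ~", empirical_autocorrelation(X, 0).real)
for k in [1, 5, 20]:
    print(k, abs(empirical_autocorrelation(X, k)), np.exp(-sigma**2))
```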
Theorem 2.1. Given $\epsilon > 0$, the waveform $X : \mathbb{Z} \to \mathbb{C}$ defined in (3) has autocorrelation $A_X$ such that
![](https://www.scirp.org/html/13-5300249\a143d69a-fdf2-4f68-88f9-826ce157876f.jpg)
Proof. 1) When ![](https://www.scirp.org/html/13-5300249\a00437af-bff5-46ad-a46c-a027a5c7834d.jpg)
![](https://www.scirp.org/html/13-5300249\2762fe05-db7a-45ba-aff1-af24f610ecd1.jpg)
and so ![](https://www.scirp.org/html/13-5300249\9766c269-e437-46e4-a1bf-8f51c58dcfd2.jpg)
2) Let
One would like to calculate
![](https://www.scirp.org/html/13-5300249\e4b225d0-a8a9-4eed-b451-31133a403a64.jpg)
Let
Then
Let
Then for each ![](https://www.scirp.org/html/13-5300249\a0cf020c-df8a-4b90-9db8-1825add0c802.jpg)
and
. Thus, by the Dominated Convergence Theorem [19], which justifies the interchange of limit and integration below, one obtains
![](https://www.scirp.org/html/13-5300249\183d36c0-92b4-49e1-bd5c-bb5c3200270a.jpg)
where the last line uses the fact that the $X_n$s are i.i.d. random variables. Here $\phi_{X_1}$ is the characteristic function of $X_1$, which is the same as that of any other $X_n$ due to their identical distribution. The characteristic function at $t$ of a Gaussian random variable with mean $0$ and variance $\sigma^2$ is $e^{-\sigma^2 t^2/2}$. Thus
![](https://www.scirp.org/html/13-5300249\8d4c57ec-6f7d-4f21-bd98-addc17649c4f.jpg)
3) When $k < 0$, a similar calculation for $E[A_X(k)]$ gives
![](https://www.scirp.org/html/13-5300249\5227a239-fede-46db-bf28-5f7e2dbc0e7e.jpg)
Together, this shows that given
and any ![](https://www.scirp.org/html/13-5300249\f222464c-44b1-4954-890a-025129cf7d11.jpg)
![](https://www.scirp.org/html/13-5300249\67f9eda6-3aec-4708-8dab-e36725218d6c.jpg)
which indicates that the expectation of the autocorrelation at any integer
can be made arbitrarily small depending on the choice of
. □
As shown in Theorem 2.1, the expectation of the autocorrelation can be made arbitrarily small, but this is not useful unless one can estimate the variance of the autocorrelation. Denoting the variance of $A_X(k)$ by $\mathrm{Var}(A_X(k))$, one has
![](https://www.scirp.org/html/13-5300249\92487279-8964-4989-a180-47aacc019f13.jpg)
First consider ![](https://www.scirp.org/html/13-5300249\7d739658-2656-4048-9949-946404b2e58f.jpg)
![](https://www.scirp.org/html/13-5300249\0863f564-ef4d-4aea-a896-36bf46abc8d3.jpg)
By applying the Lebesgue Dominated Convergence Theorem one can bring the expectation inside the double sum to get
![](https://www.scirp.org/html/13-5300249\4bcdd7a3-1935-4066-b33b-c5f8db81a65d.jpg)
The sum
(4)
may have cancellations between terms involving n and terms involving m. Suppose that for a fixed n and m there are
indices that cancel in each of the four sums in (4). Due to symmetry, the same number i.e.,
of terms will cancel in each sum. Depending on n and m,
lies between 0 and k, i.e.,
For the sake of making the notation less cumbersome,
will from now on be written as
. When
If
or
then
Each sum in (4) has k terms and
of these get cancelled leaving
terms. One can re-index the variables in (4) and write it as
![](https://www.scirp.org/html/13-5300249\2887b33f-d56b-48a1-b211-96b66565c737.jpg)
where the sign depends on whether
is less than or greater than
Thus
.
Due to the independence of the
s, this means
![](https://www.scirp.org/html/13-5300249\9e44b9eb-22f5-4c95-94ff-7d5ce14a0d62.jpg)
The minimum is attained for
and the maximum at
Thus
![](https://www.scirp.org/html/13-5300249\2235f109-89ea-41cd-abbe-001ba6106eed.jpg)
and
![](https://www.scirp.org/html/13-5300249\ba7dde95-25e5-4d12-a21e-8d02b7a9b9ca.jpg)
This gives
![](https://www.scirp.org/html/13-5300249\641d3991-4835-40f3-8052-ec3f849164f4.jpg)
A similar calculation can be done for
Thus for ![](https://www.scirp.org/html/13-5300249\d9dc1368-4de7-454d-baea-681d7bd7bfa1.jpg)
![](https://www.scirp.org/html/13-5300249\0e44afeb-677f-498d-889c-5ecc74a43ce2.jpg)
2.2. Generalizing the Construction to Other Random Variables
So far the construction of discrete unimodular zero autocorrelation stochastic waveforms has been based on Gaussian random variables. This construction can be generalized to many other random variables. The unimodularity of the waveforms is not affected by using a different random variable. The following theorem characterizes the class of random variables that can be used to get the desired autocorrelation.
Theorem 2.2. Let $\{X_n\}_{n \in \mathbb{Z}}$ be a sequence of i.i.d. random variables with characteristic function $\phi$. Suppose that the probability density function of the $X_n$s is even and that $\phi(t)$ goes to $0$ as $t$ goes to infinity. Then, given $\epsilon > 0$, the waveform $X : \mathbb{Z} \to \mathbb{C}$ given by
![](https://www.scirp.org/html/13-5300249\8bf3386f-647a-4143-b60e-9c606f8289a2.jpg)
has almost perfect autocorrelation.
Proof. Since the density function of each $X_n$ is even, its characteristic function is real valued [19]. Following the calculation in the proof of Theorem 2.1, the expected autocorrelation of $X$ for $k \neq 0$ is
![](https://www.scirp.org/html/13-5300249\cd07f6e4-726b-48f6-833d-39cb3841192d.jpg)
and this goes to zero with
by the hypothesis. □
Example 2.3. Suppose the
s follow a bilateral distribution that has density
with
and characteristic function
. Then for
,
![](https://www.scirp.org/html/13-5300249\ec214985-cff9-4a44-9376-e604400799e6.jpg)
and this can be made arbitrarily small with
.
In the same way as was done in the Gaussian case, for ![](https://www.scirp.org/html/13-5300249\e1476ba9-6d29-479e-8a41-871ddfad6dee.jpg)
![](https://www.scirp.org/html/13-5300249\c72e91f8-8847-4052-963a-309a6f910d67.jpg)
and
![](https://www.scirp.org/html/13-5300249\9f0901d7-e893-4092-b8a0-9ab6d34a41cd.jpg)
Thus
![](https://www.scirp.org/html/13-5300249\b6c6be23-73ae-4163-8eae-e0ad8d90d5f7.jpg)
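The exact parameters of Example 2.3 are only partly legible above, so the following sketch is an assumption-laden illustration rather than the paper's computation: it draws i.i.d. phases from the bilateral (Laplace) density $(\lambda/2)e^{-\lambda|x|}$, exponentiates them as in the Gaussian construction, and compares the off-origin empirical autocorrelation with $\phi(1)^2 = \big(\lambda^2/(\lambda^2+1)\big)^2$, the value implied by the i.i.d. phase model.

```python
import numpy as np

rng = np.random.default_rng(3)
lam = 0.5                                       # rate of the bilateral density (lam/2) e^{-lam|x|}
N = 200_000
phases = rng.laplace(loc=0.0, scale=1.0 / lam, size=N)   # numpy's scale parameter is 1/lam
X = np.exp(1j * phases)                         # unimodular waveform built from Laplace phases

def empirical_autocorrelation(X, k):
    return np.mean(X[k:] * np.conj(X[:-k]))

phi = lambda t: lam**2 / (lam**2 + t**2)        # characteristic function of the bilateral density
for k in [1, 5, 20]:
    print(k, abs(empirical_autocorrelation(X, k)), phi(1.0) ** 2)
```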
Example 2.4. Suppose that the
s follow the Cauchy distribution with density function
Note that, disregarding the constant
this is the characteristic function of the random variable considered in Example 2.3. The characteristic function of the
s is now
the same, up to a constant, as the density function in Example 2.3. For ![](https://www.scirp.org/html/13-5300249\792ae557-d932-4b37-8c9e-cc4849a4ac5a.jpg)
![](https://www.scirp.org/html/13-5300249\c8d98be8-8a44-4b99-b255-799d5ec9f862.jpg)
which can be made arbitrarily small with
Also,
![](https://www.scirp.org/html/13-5300249\0f0587e0-c3bd-4224-8fdb-361165bb1cd1.jpg)
2.3. Higher Dimensional Case
Here one is interested in constructing waveforms taking values in $\mathbb{C}^d$. It is desired that each vector of the waveform has unit norm and that the expectation of its autocorrelation can be made arbitrarily small. One way to construct such a waveform is based on the construction of the one dimensional example given in Section 2.1. This is motivated by the higher dimensional construction in the deterministic case [2]. As before, $\{X_n\}$ is a sequence of i.i.d. Gaussian random variables with mean zero and variance $\sigma^2$
. Next, one defines
. The waveform
is then defined as
(5)
In this case, the autocorrelation is given by
(6)
where $\langle \cdot, \cdot \rangle$ is the usual inner product in $\mathbb{C}^d$
. The length or norm of any
is thus given by
![](https://www.scirp.org/html/13-5300249\0d38d79a-b31f-4a51-be29-1f57dc368ae6.jpg)
From (5),
![](https://www.scirp.org/html/13-5300249\e10357b8-6be9-45ef-9b28-6f77d9df71dd.jpg)
Thus the
s are unit-normed. The following Theorem 2.5 shows that the expected autocorrelation of v can be made arbitrarily small everywhere except at the origin.
Theorem 2.5. Given $\epsilon > 0$, the waveform $v$ defined in (5) has autocorrelation such that
![](https://www.scirp.org/html/13-5300249\4b8fdaf6-7e5a-4de3-8f7c-8e4f22ab7f57.jpg)
Proof. As defined in (6),
![](https://www.scirp.org/html/13-5300249\3092883d-0b71-4428-ab71-2739085e304b.jpg)
When ![](https://www.scirp.org/html/13-5300249\3ad8953c-4838-4aa1-b6b9-825073586350.jpg)
![](https://www.scirp.org/html/13-5300249\0e4037f4-f7c6-4a36-ab1c-18dd128c6727.jpg)
Thus,
![](https://www.scirp.org/html/13-5300249\a020cca9-c014-4817-aef8-b58dfc0e6a52.jpg)
For
due to (5),
![](https://www.scirp.org/html/13-5300249\00740377-2c99-4595-8e73-b8e6b217e8b1.jpg)
Consider ![](https://www.scirp.org/html/13-5300249\ea4d83e6-26bf-4e45-b3db-a06dea7a34cd.jpg)
![](https://www.scirp.org/html/13-5300249\aa5f965f-b7bc-4bd0-a8c8-097d4db134c3.jpg)
Similarly, for
, one gets
□
Thus the waveform
as defined in this section is unit-normed and has autocorrelation that can be made arbitrarily small.
Remark 2.6. As in the one dimensional construction, it is easy to see that here too the construction can be done with random variables other than the Gaussian. In fact, all random variables that can be used in the one dimensional case, i.e., ones satisfying the properties of Theorem 2.2, can also be used for the higher dimensional construction.
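The exact form of (5) is not reproduced above; as a rough illustration only, the following sketch assumes one plausible reading in which each vector collects d consecutive samples of the one-dimensional waveform, normalized by $1/\sqrt{d}$ so that it is unit-normed, and checks the norm and the size of the expected autocorrelation (6) numerically. Under that assumed form, the off-origin expectation is again $e^{-\sigma^2}$.

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, d, N = 2.0, 4, 100_000
phases = rng.normal(0.0, sigma, size=N + d)
X = np.exp(1j * phases)                                    # one-dimensional unimodular waveform

# Hypothetical vector-valued waveform: d consecutive samples, normalized to unit norm.
V = np.stack([X[n:n + d] for n in range(N)]) / np.sqrt(d)  # shape (N, d)
print(np.allclose(np.linalg.norm(V, axis=1), 1.0))         # each v(n) is unit-normed

def empirical_autocorrelation(V, k):
    """Finite-window estimate of A_v(k) = lim (1/N) sum_n <v(n+k), v(n)>."""
    return np.mean(np.sum(V[k:] * np.conj(V[:-k]), axis=1))

for k in [1, 5, 20]:
    print(k, abs(empirical_autocorrelation(V, k)), np.exp(-sigma**2))
```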
2.4. Remark on the Periodic Case
It can be shown that the periodic case follows the same nature as the aperiodic case. The sequence
is defined in the same way as in Section 2.1, i.e.,
![](https://www.scirp.org/html/13-5300249\c3d6e3c9-f3d9-4777-961e-0deb6e53048d.jpg)
where
Following the definition given in (2), when ![](https://www.scirp.org/html/13-5300249\5ac8278c-f661-4e4e-bbde-222f76dadab7.jpg)
![](https://www.scirp.org/html/13-5300249\82204c2e-8606-4222-9af5-d8341b05ed48.jpg)
When
the expectation of the autocorrelation is
![](https://www.scirp.org/html/13-5300249\df577c6c-3b20-4ab9-ab68-109ce0f9d189.jpg)
For ![](https://www.scirp.org/html/13-5300249\8fdcc45d-9ed7-4e7d-a42e-703a02ab999f.jpg)
![](https://www.scirp.org/html/13-5300249\2aa991f0-dbdd-443c-b536-80c3454c54bd.jpg)
where one uses the fact that the $X_n$s are i.i.d. A similar calculation for negative values of $k$ suggests that the expected autocorrelation can be made arbitrarily small, depending on $\sigma$, for all non-zero values of $k$. Also, as in the aperiodic case, this result can be obtained for random variables other than the Gaussian.
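A simulation sketch of the periodic case (assuming, as in Section 2.1, that the waveform exponentiates i.i.d. Gaussian phases): it averages the periodic autocorrelation (2) over many independent realizations and compares the off-origin values with $e^{-\sigma^2}$.

```python
import numpy as np

rng = np.random.default_rng(5)
sigma, n, trials = 1.5, 64, 2000

def periodic_autocorrelation(x):
    n = len(x)
    return np.array([np.mean(np.roll(x, -k) * np.conj(x)) for k in range(n)])

# Average (2) over independent realizations of x(m) = e^{i X_m}, X_m ~ N(0, sigma^2) (assumed form).
acc = np.zeros(n, dtype=complex)
for _ in range(trials):
    x = np.exp(1j * rng.normal(0.0, sigma, size=n))
    acc += periodic_autocorrelation(x)
acc /= trials

print(abs(acc[0]))                                   # equals 1 at the origin
print(np.max(np.abs(acc[1:])), np.exp(-sigma**2))    # off-origin average vs e^{-sigma^2}
```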
3. Construction of Continuous Stochastic Waveforms
In this section continuous waveforms with almost perfect autocorrelation are constructed from a one dimensional Brownian motion.
For a continuous waveform
, the autocorrelation
can be defined as
(7)
Let $\{B(t)\}_{t \ge 0}$ be a one dimensional Brownian motion. Then $B$ satisfies
• $B(0) = 0$ almost surely;
• for $0 \le s < t$, the increment $B(t) - B(s)$ is Gaussian with mean $0$ and variance $t - s$;
• for $0 \le t_1 < t_2 < \cdots < t_m$, the increments $B(t_2) - B(t_1), \ldots, B(t_m) - B(t_{m-1})$
are independent random variables.
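A quick simulation sketch (not from the paper) of the Brownian motion properties listed above, using i.i.d. Gaussian increments on a uniform grid:

```python
import numpy as np

rng = np.random.default_rng(6)
dt, n_steps, paths = 0.01, 1000, 5000

# B(0) = 0 and independent N(0, dt) increments on the grid.
increments = rng.normal(0.0, np.sqrt(dt), size=(paths, n_steps))
B = np.concatenate([np.zeros((paths, 1)), np.cumsum(increments, axis=1)], axis=1)

s, t = 2.0, 7.5                                   # grid times with s < t
i, j = int(s / dt), int(t / dt)
incr = B[:, j] - B[:, i]
print(np.var(incr), t - s)                        # variance of B(t) - B(s) is ~ t - s

# Disjoint increments are (empirically) uncorrelated, consistent with independence.
other = B[:, i] - B[:, 0]
print(np.corrcoef(incr, other)[0, 1])             # ~ 0
```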
Theorem 3.1. Let
be the one dimensional Brownian motion and
be given. Define
by
![](https://www.scirp.org/html/13-5300249\f10b3718-99c5-4c44-a9ec-c3c5796cce43.jpg)
and
Then the autocorrelation of
satisfies
![](https://www.scirp.org/html/13-5300249\d07647b7-0663-4df2-8e08-b0e01d2c4516.jpg)
Proof. We would like to evaluate
![](https://www.scirp.org/html/13-5300249\53bd09ac-04c5-42e7-9469-7e20c21efcd8.jpg)
Let
and let ![](https://www.scirp.org/html/13-5300249\1515f535-4a29-407b-ab65-41f54569c033.jpg)
![](https://www.scirp.org/html/13-5300249\8f5ed9e6-72a4-4d9e-b82d-ce2e518fe7d8.jpg)
Thus each
is integrable and further
Let
;
. Then
Therefore, by the Dominated Convergence Theorem, and properties of Brownian motion and characteristic functions, one gets
![](https://www.scirp.org/html/13-5300249\372c8054-dea2-49c6-afee-98ee2629842c.jpg)
which can be made arbitrarily small based on
Similarly,
4. Connection to Frames
Consider the mapping
given by
(8)
where
as defined in Section 2.1.
Let
and consider the set
of
unit vectors in
. The matrix
![](https://www.scirp.org/html/13-5300249\49609ad0-d99a-4216-9ed0-e21afa0f0f6e.jpg)
is the matrix of the analysis operator corresponding to
The frame operator of
is
i.e.,
.
The entries of
are given by
and for ![](https://www.scirp.org/html/13-5300249\ade74572-14d4-4831-8992-2ea639082b8c.jpg)
![](https://www.scirp.org/html/13-5300249\c2334a0a-3b75-43c8-990e-6b86a4027911.jpg)
Note that since
is self-adjoint,
It is desired that V emulates a tight frame, i.e.,
is close to a constant times the identity, in this case,
times the identity. Alternatively, it is desirable that the eigenvalues of
are all close to each other and close to
. In this case, due to the stochastic nature of the frame operator, one studies the expectation of the eigenvalues of
.
4.1. Frames in $\mathbb{C}^2$
This section discusses the construction of sets of vectors in $\mathbb{C}^2$ as given by (8). The frame properties of such sets are analyzed. In fact, it is shown that the expectations of the eigenvalues of the frame operator are close to each other, the closeness increasing with the size of the set. Bounds on the probability of deviation of the eigenvalues from their expected values are also derived. The related inequalities arise from an application of Theorem 4.1 [22] below.
Theorem 4.1. (Azuma's Inequality) Suppose that $\{Y_k : k = 0, 1, 2, \ldots\}$ is a martingale and

$$|Y_k - Y_{k-1}| \le c_k$$

almost surely. Then for all positive integers $N$ and all positive reals $r$,

$$P\big(|Y_N - Y_0| \ge r\big) \le 2\exp\!\left(\frac{-r^2}{2\sum_{k=1}^{N} c_k^2}\right).$$
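A numerical illustration (not from the paper) of Azuma's inequality for a simple bounded-increment martingale, the partial sums of i.i.d. $\pm 1$ signs, where each $c_k = 1$:

```python
import numpy as np

rng = np.random.default_rng(7)
N, trials, r = 200, 100_000, 30.0

# Martingale Y_k = sum of k i.i.d. +-1 signs; |Y_k - Y_{k-1}| <= c_k = 1.
signs = rng.choice([-1, 1], size=(trials, N))
Y_N = signs.sum(axis=1)

empirical = np.mean(np.abs(Y_N) >= r)              # P(|Y_N - Y_0| >= r), Y_0 = 0
azuma_bound = 2 * np.exp(-r**2 / (2 * N))          # 2 exp(-r^2 / (2 sum c_k^2))
print(empirical, azuma_bound)                      # the empirical tail sits below the bound
```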
Consider $M$ vectors in $\mathbb{C}^2$, i.e., $d = 2$ in (8). Then
and
(9)
Considering the set
the frame operator of V is
![](https://www.scirp.org/html/13-5300249\7a42276f-f0ca-4f84-9b6e-8e41da35a0ff.jpg)
(10)
Theorem 4.2. 1) Consider the set
where the vectors
are given by (9). The minimum eigenvalue,
and the maximum eigenvalue,
of the frame operator of V satisfy
(11)
where ![](https://www.scirp.org/html/13-5300249\412ef1fd-5a2d-481d-860e-ea0560c06ed0.jpg)
2) The deviation of the minimum and maximum eigenvalue of
from their expected value is given, for all positive reals
by
![](https://www.scirp.org/html/13-5300249\08e1a0d4-5e1d-4cae-82e4-16aacd1cd582.jpg)
![](https://www.scirp.org/html/13-5300249\c9f8a6a3-5d24-4da7-92e6-db231559d1ee.jpg)
Proof. 1) The frame operator of
is given in (10). The eigenvalues of
are
and
where
![](https://www.scirp.org/html/13-5300249\ffc9dc65-eef4-4687-a60a-b8bf9f11d02c.jpg)
Let
![](https://www.scirp.org/html/13-5300249\cbb9b3a5-e5ad-40e3-8d28-b6cfe1350a3d.jpg)
so that
![](https://www.scirp.org/html/13-5300249\52a6f896-d22c-4a2b-8395-a87c25d57464.jpg)
Note that for
and
are independent and so
Also, since the
s are i.i.d. and the characteristic function of the
s is symmetric,
![](https://www.scirp.org/html/13-5300249\ad13564a-ece2-4684-b2cf-25a0cba96aa4.jpg)
![](https://www.scirp.org/html/13-5300249\1481a4c6-4c41-4731-b4e3-d90c29ed4f67.jpg)
and therefore
![](https://www.scirp.org/html/13-5300249\49f8ce63-67df-46ce-b61e-7c8385c55908.jpg)
Thus
![](https://www.scirp.org/html/13-5300249\7b5f7c9b-ff80-457f-818d-1376a2b04eb3.jpg)
The above estimate on
implies that
(12)
Since
and
, (12) implies
![](https://www.scirp.org/html/13-5300249\f5c6bb28-618a-464a-864a-03090b63f601.jpg)
Noting that
and
one finally gets, after setting ![](https://www.scirp.org/html/13-5300249\f28de82c-959d-4da7-b3bd-cc9717b4e6bb.jpg)
![](https://www.scirp.org/html/13-5300249\cbeca47d-c88b-4b54-a2e1-ea66e5a7d837.jpg)
2) To prove 2) we use the Doob martingale and Azuma’s inequality [22]. For
let
Here the Doob martingale is the sequence
where
![](https://www.scirp.org/html/13-5300249\06d18ab6-6b83-42db-a8e3-b7d1e5975fc3.jpg)
and
![](https://www.scirp.org/html/13-5300249\c081b672-b049-4458-b99a-a21f5b56691e.jpg)
Note that
and
Also,
![](https://www.scirp.org/html/13-5300249\d983f702-1b92-46bd-b0f7-bdfbc507029d.jpg)
So by Azuma’s Inequality (see Theorem 4.1)
![](https://www.scirp.org/html/13-5300249\234f0f62-4e5c-4c60-93fa-9e4dddb41909.jpg)
Since
this means
![](https://www.scirp.org/html/13-5300249\176b3d17-202b-4f26-b15c-60d2c72a3117.jpg)
and
.
Going back to the actual frame operator
, whose eigenvalues are
and
the following estimates hold.
![](https://www.scirp.org/html/13-5300249\987e68fe-e860-4c79-b3c5-eec0213dd6e9.jpg)
and
![](https://www.scirp.org/html/13-5300249\0dcda170-5fa4-4ea7-9797-f1ecbe359fca.jpg)
Corollary 4.3. The eigenvalues of the frame operator considered in Theorem 4.2 satisfy, for all positive reals r,
![](https://www.scirp.org/html/13-5300249\d6b906c0-e68f-455d-b3c2-d45cbfe58849.jpg)
![](https://www.scirp.org/html/13-5300249\e1aced0e-0c04-4013-a04d-2a6592672be6.jpg)
where ![](https://www.scirp.org/html/13-5300249\32423650-402f-4e78-93f5-0a6ab326e78f.jpg)
Proof. Due to part 1) of Theorem 4.2
![](https://www.scirp.org/html/13-5300249\035fcb8b-b1bf-4872-82cc-36301efdb967.jpg)
This implies, as a consequence of part 2) of Theorem 4.2, that
![](https://www.scirp.org/html/13-5300249\347f8103-8d9f-460e-a123-fd5a8c1ef653.jpg)
In a similar way, from part 1) of Theorem 4.2,
![](https://www.scirp.org/html/13-5300249\78128e18-225b-4826-bb3b-e0ed6c97801b.jpg)
which implies, as a consequence of part 2) of Theorem 4.2, that
![](https://www.scirp.org/html/13-5300249\bf2ceace-c3df-4596-998d-966ec39c7184.jpg)
Remark 4.4. In Theorem 4.2, as M tends to infinity, the value of
in (11) can be made arbitrarily small based on the choice of
This in turn implies that the two eigenvalues can be made arbitrarily close to each other, with
On the other hand, for a fixed M, as
tends to zero, (11) becomes
![](https://www.scirp.org/html/13-5300249\9fc4e82e-021d-4d80-b2b3-a3f060be67f6.jpg)
4.2. Frames in $\mathbb{C}^d$
For general d and M, in order to use existing results on the concentration of eigenvalues of random matrices [23, 24], a slightly different construction of the frame needs to be considered. Let
be i.i.d. random variables following a Gaussian distribution with mean zero and variance
It can be shown that
![](https://www.scirp.org/html/13-5300249\4f21a1cf-cc22-4b8c-b82f-f7c3a026f604.jpg)
and the variance
![](https://www.scirp.org/html/13-5300249\0bc2f5ec-033b-4003-9dfa-655ccb219442.jpg)
One can define the following two dimensional sequence. For ![](https://www.scirp.org/html/13-5300249\4566b9ce-fe16-47a3-bbd9-caa6b764ca87.jpg)
![](https://www.scirp.org/html/13-5300249\4eef051a-b683-4eda-b8c0-dc25bc22f613.jpg)
Consider the mapping
given by
(13)
As before, let
and consider the set of
unit vectors
in
. The frame operator of this set is
.
Let
(14)
so that
The matrix A has entries with mean zero and variance
According to results in [23], if
as
, then the smallest and largest eigenvalues of
converge almost surely to
and
, respectively.
Theorem 4.5. Let
be the singular values of the matrix A given by (14). Then the following hold.
1) Given
there is a large enough d such that
(15)
2)
(16)
where
and
are universal positive constants.
Proof. Let
be the mapping that associates to a matrix
its largest singular value. Equip
with the Frobenius norm
![](https://www.scirp.org/html/13-5300249\51e8b7bc-9c1d-4760-abb3-6bc2d0b634a8.jpg)
Then the mapping
is convex and 1-Lipschitz in the sense that
![](https://www.scirp.org/html/13-5300249\d135a1c7-46ae-49eb-9c51-f2a8e27e8e14.jpg)
for all pairs
of d by M matrices [24].
We think of A as a random vector in
The real and imaginary parts of the entries of
are supported in
Let P be a product measure on
. Then as a consequence of the concentration inequality (Corollary 4.10, [24]) we have
![](https://www.scirp.org/html/13-5300249\23d13bf8-7ca1-4d6c-95a4-9ab62baa2572.jpg)
where
is the median of
. It is known that the minimum and maximum singular values of A converge almost surely to
and
, respectively, as d, M tend to infinity and
. As a consequence, for each
and M sufficiently large, one can show that the medians belong to the fixed interval
which gives
![](https://www.scirp.org/html/13-5300249\cfb7f777-d703-48d0-8975-5f6b17945622.jpg)
For the smallest singular value we cannot use the concentration inequality as used for
since the smallest singular value is not convex. However, following results in [25] (Theorem 3.1) that have been used in [26] in a similar situation as here, one can say that whenever
, where
is greater than a small constant,
where
and
are positive universal constants. □
Remark 4.6. Note that the squares of the singular values of A are the eigenvalues of
and so the estimates given in (15) and (16) give insight into the corresponding deviation of the eigenvalues of the frame operator
.
Remark 4.7. (Connection to compressed sensing) The theory of compressed sensing [27-29] states that it is possible to recover a sparse signal from a small number of measurements. A signal
is k-sparse in a basis
if x is a weighted superposition of at most k elements of
. Compressed sensing broadly refers to the inverse problem of reconstructing such a signal x from linear measurements
with
, ideally with
. In the general setting, one has
, where
is a
sensing matrix having the measurement vectors
as its columns, x is a length-M signal and y is a length-d measurement.
The standard compressed sensing technique guarantees exact recovery of the original signal with very high probability if the sensing matrix satisfies the Restricted Isometry Property (RIP). This means that for a fixed k, there exists a small number
, such that
![](https://www.scirp.org/html/13-5300249\de154f28-c324-4e7a-a9ab-895537fe6632.jpg)
for any k-sparse signal x. By imitating the work done in [26] (Lemmas 4.1 and 4.2), it can be shown, due to Theorem 4.5, that matrices A of the type given in (14) satisfy
![](https://www.scirp.org/html/13-5300249\3d8ff131-f223-4e67-9c51-1902f830910e.jpg)
the RIP condition and can therefore be used as measurement matrices in compressed sensing. These matrices are different from the traditional random matrices used in compressed sensing in that their entries are complex-valued and unimodular instead of being real-valued and not unimodular.
Figure 1. Behavior of the condition number of the frame operator with increasing size of the frame; ε = 0.0001, d = 3, σ = 1.
Example 4.8. This example illustrates the ideas in this subsection. First consider M = 5 and d = 3 so that there are 5 vectors in $\mathbb{C}^3$.
Taking from a normal distribution with mean 0 and variance
a realization of the matrix
is
.
Then taking
is
![](https://www.scirp.org/html/13-5300249\51b846bd-cdbf-4340-92c1-15b4cf8637d2.jpg)
The condition number, the ratio of the maximum to the minimum eigenvalue, of
As the number of vectors M is increased, the condition number gets closer to 1. Figure 1 shows the behavior of the condition number with the increase in the number of vectors.
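The exact frame vectors of (8) and (13) are not reproduced above, so the following sketch only mirrors the trend reported in Figure 1 under an assumed reading: each unit vector in $\mathbb{C}^3$ has i.i.d. unimodular entries $e^{i\theta}/\sqrt{d}$ with Gaussian phases of large standard deviation, the large spread standing in for the small parameter (ε = 0.0001) that makes the expected cross terms negligible. The condition number of the frame operator then tends toward 1 as the number of vectors M grows.

```python
import numpy as np

rng = np.random.default_rng(8)
d, phase_std = 3, 10.0                           # hypothetical stand-in for the paper's parameters

def condition_number(M):
    """Condition number of the frame operator S = V V* for M assumed unit vectors in C^d."""
    phases = rng.normal(0.0, phase_std, size=(d, M))
    V = np.exp(1j * phases) / np.sqrt(d)         # columns are unit-normed, unimodular-entry vectors
    S = V @ V.conj().T                           # d x d frame operator
    eig = np.linalg.eigvalsh(S)
    return eig.max() / eig.min()

for M in [5, 50, 500, 5000]:
    print(M, condition_number(M))                # approaches 1 as M grows, cf. Figure 1
```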
5. Conclusion
The construction of discrete unimodular stochastic waveforms with arbitrarily small expected autocorrelation has been proposed. This is motivated by the usefulness of such waveforms in the areas of radar and communications. The family of random variables that can be used for this purpose has been characterized. Such a construction has been done in one dimension and generalized to higher dimensions. Further, such waveforms have been used to construct frames in $\mathbb{C}^d$
and the frame properties of such frames have been studied. Using Brownian motion, this idea is also extended to the construction of continuous unimodular stochastic waveforms whose autocorrelation can be made arbitrarily small in expectation.
6. Acknowledgements
The author wishes to acknowledge support from AFOSR Grant No. FA9550-10-1-0441 for conducting this research. The author is also grateful to Frank Gao and Ross Richardson for their generous help with probability theory.