Legendre Wavelet Neural Networks for Power Amplifier Linearization

In this paper, a novel technique for power amplifier (PA) linearization is presented. Legendre wavelet neural networks (LWNN) are first utilized to model the PA and its inverse structure by applying practical transmission signals, and the gradient descent algorithm is applied to estimate the coefficients of the LWNN. Secondly, this technique is used to identify and optimize the coefficient parameters of the proposed pre-distorter (PD), i.e., the inversion model of the PA. The proposed method is highly efficient, and the pre-distorter shows stability and effectiveness because of the rich properties of the LWNN. A significant improvement in linearity is achieved based on the measured PA characteristics, and the output power spectra with and without pre-distortion are compared.


Introduction
Recently, a lot of effort has been made to improve the linearity of the PA [1]-[3]. Among all linearization techniques, digital baseband pre-distortion is one of the most cost-effective. Two main difficulties must be overcome. Firstly, the behaviour of the PA has to be approximated by an appropriate method. Secondly, the implementation of the pre-distorter requires a method to invert this model. To cope with this issue, polynomial models, especially orthogonal polynomial models, are widely used to predict and design the performance of the PA and its inversion model because of their simplicity and ease of implementation [3]. From a system identification point of view, finding appropriate parameters for these polynomial models is the common problem of an unknown reference system that has to be identified from observations of its input samples $x[q]$ and output samples $y[q]$. Wavelet neural networks, which combine neural networks with wavelet bases, have been proposed for nonlinear system identification, and this approach has become more popular lately [4]-[6]. In the present article, a novel technique for modeling the PA and the PD is presented by utilizing the LWNN [6] and the gradient descent algorithm to analyze simple and commonly employed models (the Wiener model and the Hammerstein model) consisting of a linear filter and a static nonlinearity [7]-[11]. In addition to having the good properties of neural networks, the LWNN can converge quickly and give high precision with a reduced network size because of the rich properties of the Legendre wavelet, such as orthogonality, piecewise adaptive polynomial approximation, compact support, and vanishing moments [4]. For clarity of the LWNN technique, this paper focuses on the static nonlinearity, i.e., memoryless techniques. An efficient PD is designed by using the LWNN and the gradient descent algorithm.
The approach presented in this paper has several advantages: the LWNN is characterized by a small network size, polynomial activation functions, and fast learning speed; the Legendre wavelet bases are expressed in closed form; the approximation is adaptive and piecewise at each decomposition level; and the method overcomes stagnation in nonlinear system identification.
The organization of the paper is as follows. In Section 2, we present the LWNN and illustrate its benefit in PA modeling compared with conventional polynomial models. In Section 3, we describe the inversion structure of the PA model and formulate a pre-distortion linearization algorithm with the LWNN and gradient descent. The linearization uses the output response of the power amplifier to generate inverse data that is combined with the input signal. Numerical examples are presented, and a marked improvement in linearity is achieved. Finally, the conclusion is drawn in Section 4.

PA Model with LWNN
In this section, we first introduce the Legendre wavelet and the structure of the LWNN and describe the properties that make it an efficient tool for digital pre-distortion linearization. In a second step, applying practical transmission signals (2013 China Post-Graduate Mathematic Contest in Modeling), different polynomial bases, including the Legendre wavelet, are used to model the PA and compared with respect to the normalized mean squared error (NMSE), with the LWNN trained by the gradient descent algorithm.

Legendre Wavelet
Let $L_k(x)$ denote the Legendre polynomial of degree $k$, defined by the recurrence

$$L_0(x)=1,\qquad L_1(x)=x,\qquad (k+1)L_{k+1}(x)=(2k+1)\,x\,L_k(x)-k\,L_{k-1}(x).$$

At resolution level $n$, the interval $[0,1)$ is divided into the $2^n$ subintervals $I_{nl}=[l/2^n,(l+1)/2^n)$, $l=0,1,\dots,2^n-1$, and the Legendre wavelet is the rescaled polynomial

$$\psi_{k,nl}(x)=\begin{cases}2^{n/2}\sqrt{2k+1}\,L_k\!\left(2^{n+1}x-2l-1\right), & x\in I_{nl},\\[2pt] 0, & \text{otherwise},\end{cases}$$

which spans a space of piecewise polynomial functions and forms an orthonormal basis of $L^2[0,1)$. A function $f$ is approximated by the truncated expansion

$$f(x)\approx f_n(x)=\sum_{l=0}^{2^n-1}\sum_{k=0}^{p-1}s_{k,nl}\,\psi_{k,nl}(x), \tag{1}$$

and the approximation error satisfies [4]

$$\left\|f-f_n\right\|_\infty \le \frac{2\cdot 2^{-np}}{4^{p}\,p!}\,\sup_{x\in[0,1)}\bigl|f^{(p)}(x)\bigr|,$$

which demonstrates that the approximation error converges exponentially with both the resolution level $n$ and the order $p$ of the Legendre wavelet base.
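As a concrete illustration, the basis can be evaluated numerically. The sketch below (the function name and its parameters are illustrative choices, not from the paper) evaluates $\psi_{k,nl}$ with NumPy's Legendre module:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_wavelet(k, n, l, x):
    """Evaluate psi_{k,nl}(x): the degree-k Legendre polynomial rescaled to
    the subinterval I_nl = [l/2^n, (l+1)/2^n) and L2-normalized on [0, 1)."""
    x = np.asarray(x, dtype=float)
    t = 2.0**(n + 1) * x - 2 * l - 1           # map I_nl onto [-1, 1)
    inside = (t >= -1.0) & (t < 1.0)
    coeffs = np.zeros(k + 1)
    coeffs[k] = 1.0                            # select L_k in the Legendre basis
    vals = 2.0**(n / 2) * np.sqrt(2 * k + 1) * legval(t, coeffs)
    return np.where(inside, vals, 0.0)
```

On its subinterval the wavelet is just a shifted, scaled $L_k$; outside it is identically zero, which is the compact-support property noted above.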

Legendre Wavelet Neural Networks
Based on the good learning ability of neural networks, combining the Legendre wavelet with neural networks yields the LWNN [6], which is constructed on wavelet transform theory and is an alternative to feedforward neural networks for identifying nonlinear systems. Generally, the LWNN has a three-layered network structure consisting of input, Legendre wavelet, and output layers, depicted in Figure 1 at resolution level $n$. Input layer: the input data $x_l$ denote the signal on the subinterval $x\in I_{nl}$ and are transmitted directly into the Legendre wavelet layer, where $l=0,1,2,\dots,2^n-1$. Legendre wavelet layer: the Legendre wavelet base is adopted as the activation function of the wavelet nodes connected with the input data, where $k$ is the order of the Legendre polynomial and $z_l$ is the node output. Output layer: the output of the LWNN is a linear combination of Legendre wavelets, as represented in (1); the output $z$ is determined by both the tunable weights $s_{k,nl}$ and the Legendre wavelet functions $\psi_{k,nl}(x)$. Once created, the LWNN is capable of approximating a continuous nonlinear mapping to any resolution. The simple structure and polynomial activation functions of the LWNN give a much higher level of generalization and a shorter computing time than a three-layered feedforward neural network. As with MLP neural networks, a number of algorithms can be used to train the LWNN, such as BP, quick propagation, and the Levenberg-Marquardt algorithm; a good training algorithm shortens the training time while achieving better accuracy. In the present study, the standard gradient descent algorithm is used as the learning criterion for the LWNN.
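Since the output layer is linear in the weights $s_{k,nl}$, the gradient of the squared error is available in closed form. The following sketch trains such a network with plain gradient descent; the function names and the learning-rate/epoch settings are illustrative choices, not values from the paper:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def design_matrix(x, n, p):
    """Hidden-layer outputs psi_{k,nl}(x) for l = 0..2^n - 1, k = 0..p - 1,
    stacked into a (len(x), 2^n * p) matrix."""
    cols = []
    for l in range(2**n):
        t = 2.0**(n + 1) * x - 2 * l - 1              # map I_nl onto [-1, 1)
        inside = (t >= -1.0) & (t < 1.0)
        for k in range(p):
            c = np.zeros(k + 1)
            c[k] = 1.0
            psi = 2.0**(n / 2) * np.sqrt(2 * k + 1) * legval(t, c)
            cols.append(np.where(inside, psi, 0.0))
    return np.column_stack(cols)

def train_lwnn(x, y, n=1, p=4, lr=0.1, epochs=2000):
    """Gradient descent on the mean squared error over the output weights s."""
    Psi = design_matrix(x, n, p)
    s = np.zeros(Psi.shape[1])
    for _ in range(epochs):
        err = Psi @ s - y
        s -= lr * (2.0 / len(y)) * (Psi.T @ err)      # gradient of mean(err**2)
    return s
```

Because the basis is orthonormal, the Hessian of this quadratic loss is well conditioned, which is one reason the plain gradient iteration converges quickly.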

PA Model
More generally, let us assume that the PA is memoryless and described by the operator $N\{\cdot\}$, so that $y[q]=N\{x[q]\}$ in the discrete-time case, where $x[q]$ and $y[q]$ are the discrete-time input and output signals, respectively, and $q=1,2,\dots,Q$. The optimal PD is then described by an operator $P\{\cdot\}$ satisfying $N\{P\{x[q]\}\}=g\,x[q]$ for some real-valued constant gain $g>0$. In this paper, we utilize the LWNN to approximate the two operators by applying practical transmission signals. In the literature [1], it is illustrated how the signal distortion introduced by a memoryless PA can be approximated by two continuous functions, closely related to the AM/AM and AM/PM conversions, and we obtain approximations of these conversions by using the LWNN. We extract the PA parameters $\{s_{k,nl}\}$ and $\{d_{k',n'l'}\}$ by using the gradient descent algorithm rather than the least squares algorithm, which can suffer from numerical instability. Since the AM/AM and AM/PM characteristics are approximated in the same way, we explicitly discuss only the approximation of the AM/AM characteristic.
In this work, to give a quantitative measure of the approximation accuracy, we use the normalized mean squared error (NMSE),

$$\mathrm{NMSE}=10\log_{10}\frac{\sum_{q=1}^{Q}\bigl|y[q]-z[q]\bigr|^{2}}{\sum_{q=1}^{Q}\bigl|y[q]\bigr|^{2}},$$

as the metric for choosing the optimum dimensions of the PA model, where $y[q]$ is the measured PA response, $z[q]$ is the PA model response, and $Q$ is the signal length. With decomposition level $n=1$ and piecewise polynomial order $p=4$, the input test signal is used to characterize the PA; the results are shown in Figure 2.
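Written out in code, this metric is a one-liner; the sketch below assumes the common dB convention $\mathrm{NMSE}=10\log_{10}\bigl(\sum_q|y[q]-z[q]|^2/\sum_q|y[q]|^2\bigr)$:

```python
import numpy as np

def nmse_db(y, z):
    """NMSE (in dB) between measured PA response y[q] and model response z[q]."""
    y = np.asarray(y, dtype=complex)
    z = np.asarray(z, dtype=complex)
    return 10.0 * np.log10(np.sum(np.abs(y - z)**2) / np.sum(np.abs(y)**2))
```

A 10% amplitude error on every sample, for instance, corresponds to -20 dB.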
Figure 2 illustrates that the LWNN technique for pre-distortion is highly efficient because of its low-order piecewise approximation, and it requires a smaller number of iterations than other neural networks.

Pre-Distortion Using LWNN
Now we describe the inversion of the PA model. To reduce the computational load, an efficient method for obtaining the inversion model is to interchange the input $x[q]$ and the output $y[q]$: the direction of the signal flow in the branch is reversed, and the rest remains unchanged. Figure 3 shows the inversion of the PA model for different orders.
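The interchange can be illustrated on a toy memoryless nonlinearity. In this sketch, the tanh characteristic is an assumed stand-in for the measured PA, and a plain polynomial least-squares fit stands in for the LWNN fit:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 2000)               # PA input amplitude samples
y = np.tanh(1.5 * x) / np.tanh(1.5)           # toy memoryless AM/AM compression

# The forward model fits (x, y); the inverse model reuses the same fitting
# routine with the roles of input and output interchanged, i.e. it fits (y, x).
inv_coeffs = np.polyfit(y, x, deg=9)
x_hat = np.polyval(inv_coeffs, y)             # recovered input from the output
```

Nothing else about the fitting procedure changes, which is exactly why the swap is computationally cheap.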
The remaining task necessary to linearize the output of the PA by digital pre-distortion is to find an adequate system, i.e., the PD, which pre-processes the transmit signal in such a way that the overall response of the cascade PD-PA is linear. Most pre-distortion architectures for PA linearization are based on indirect learning, which is more flexible and robust than the direct learning architecture [7]; it is depicted in Figure 4.
Utilizing this architecture, a complete digital pre-distortion linearization technique is developed based on the LWNN by using the gradient descent algorithm. The coefficients are initialized from the inversion of the PA model presented in this section; these expansion coefficients are then used as the reference system for adaptive system identification. After adaptation, the estimated parameters approximate the post-inverse of the reference system. Figure 5 shows the simulation results of the pre-distortion linearization with the LWNN.
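The indirect-learning loop can be sketched end to end on a toy amplifier: a post-inverse is identified from the scaled PA output back to the PA input, then copied in front of the PA as the pre-distorter. The tanh amplifier, the polynomial stand-in for the LWNN, and the least-squares step (in place of gradient descent) are all illustrative assumptions:

```python
import numpy as np

def pa(v):
    """Toy memoryless AM/AM compression standing in for the measured PA."""
    return np.tanh(1.5 * v) / np.tanh(1.5)

g = 1.0                                        # desired linear gain of the cascade
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 4000)
y = pa(x)

# Training branch: identify the post-inverse mapping y/g back to x.
c = np.polyfit(y / g, x, deg=9)

# Copy the post-inverse as the pre-distorter and form the cascade PD -> PA.
z = pa(np.polyval(c, x))                       # should approximate g * x
```

When the post-inverse is accurate, the cascade output $z$ tracks the ideal linear response $g\,x$, which is the defining property of the indirect-learning architecture.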
To evaluate the effectiveness of the proposed linearization technique in suppressing spectral regrowth, we compare the power spectral density (PSD) of the PA output without linearization and with memoryless pre-distortion. The identification performance of the digital pre-distortion scheme is evaluated on measured I/O data. The estimated PSDs are shown in Figure 6: the green line shows the linearized spectrum at the PA output, and the dashed line depicts the PA response without pre-distortion. From the obtained results, it can be concluded that the LWNN method is stable and effective.
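The spectral-regrowth effect itself can be reproduced qualitatively with synthetic data. The two-tone signal, cubic nonlinearity, and Hann-windowed periodogram below are illustrative stand-ins for the measured signals and estimated PSDs, not the paper's data:

```python
import numpy as np

fs = 1000.0                                    # sample rate (Hz), illustrative
t = np.arange(4096) / fs
x = 0.4 * (np.sin(2 * np.pi * 90 * t) + np.sin(2 * np.pi * 110 * t))

def pa(v):
    """Toy odd-order nonlinearity producing spectral regrowth."""
    return v - 0.15 * v**3

def psd_db(v):
    """Hann-windowed periodogram, normalized to its peak (dB)."""
    w = np.hanning(len(v))
    spec = np.abs(np.fft.rfft(v * w))**2
    return 10.0 * np.log10(spec / spec.max())

f = np.fft.rfftfreq(len(t), 1.0 / fs)
i_imd = np.argmin(np.abs(f - 130.0))           # third-order product at 2*110 - 90 Hz
regrowth = psd_db(pa(x))[i_imd] - psd_db(x)[i_imd]
```

The cubic term creates intermodulation products outside the two-tone band (at 70 Hz and 130 Hz here), which is the regrowth a good PD must suppress.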

Conclusion
In this paper, the PA model and its inversion are first established to describe the behavior of the transmitter using the LWNN and are compared to a standard pre-distortion method through simulation with a practical transmission signal. Secondly, the LWNN and the gradient descent algorithm are applied to pre-distortion linearization. The new technique proposed in this paper enjoys inherent implementation and cost advantages over other linearization techniques, as previously discussed. In future work, we shall describe memory structures and realize the full potential of the LWNN method.
One of the two functions denotes the AM/AM conversion of the PA: it describes the (nonlinear) relation between the amplitude (or power) of the input baseband signal $x_{BB}(t)$ and the amplitude (or power) of the output baseband signal. The other denotes the AM/PM conversion, which characterizes how the phase offset of the output baseband signal depends on the input amplitude.

Figure 2. Estimated NMSE of the PA model using the LWNN.

Figure 3. Estimated NMSE of the inversion of the PA.

Figure 5. PA model and inversion using the LWNN.

Figure 6. Comparison of the output spectrum using the LWNN.