Comparison and Design of Decoder in B3G Mobile Communication System

Turbo codes have been shown to achieve performance close to the Shannon limit, and they have been adopted by various commercial communication systems. Both the TDD and FDD modes of the universal mobile telecommunications system (UMTS) employ turbo codes as the error-correction coding scheme. Turbo codes outperform convolutional codes at large block sizes, but because of their decoding delay they are often used only for non-real-time services. In this paper, we discuss the encoder and decoder structure of the turbo code in a B3G mobile communication system. In addition, various decoding techniques for non-real-time services, such as the Log-MAP, Max-Log-MAP, and SOVA algorithms, are derived and compared. Performance results for the decoder and the algorithms in different configurations are also shown.


Introduction
A turbo code can be thought of as a refinement of the concatenated encoding structure together with an iterative algorithm for decoding the associated code sequence [1]. The codes are constructed by applying two or more component codes to different interleaved versions of the same information sequence [2] [3]. As noted in [4], for any single traditional code, the final step at the decoder yields hard-decision decoded bits (or, more generally, decoded symbols). In order for a concatenated scheme such as a turbo code to work properly, the decoding algorithm should not limit itself to passing hard decisions among the decoders [5]. To best exploit the information learned by each decoder, the decoding algorithm must effect an exchange of soft rather than hard decisions [6]. For a system with two component codes, the concept behind turbo decoding is to pass soft decisions from the output of one decoder to the input of the other, and to iterate this process several times to produce better decisions [7].

Principle of Iterative Decoding
For the first decoding iteration of the soft-input/soft-output decoder in Figure 1, one generally assumes the binary data to be equally likely, yielding an initial a priori LLR value of L(d) = 0 [8]. Consider the two-dimensional code (product code) depicted in Figure 2. The configuration can be described as a data array made up of k_1 rows and k_2 columns. Each of the k_1 rows contains a code vector made up of k_2 data bits and n_2 − k_2 parity bits. Similarly, each of the k_2 columns contains a code vector made up of k_1 data bits and n_1 − k_1 parity bits. The various portions of the structure are labeled d for data, p_h for horizontal parity (along the rows), and p_v for vertical parity (along the columns). Additionally, there are blocks labeled L_eh and L_ev, which house the extrinsic LLR values learned from the horizontal and vertical decoding steps, respectively. Notice that this product code is a simple example of a concatenated code: its structure encompasses two separate encoding steps, horizontal and vertical. The iterative decoding algorithm for this product code proceeds as follows:
1) Set the a priori information L(d) = 0.
2) Decode horizontally and obtain the horizontal extrinsic information: L_eh(d) = L(d̂) − L_c(x) − L(d).
3) Set L(d) = L_eh(d) for the vertical decoding.
4) Decode vertically and obtain the vertical extrinsic information: L_ev(d) = L(d̂) − L_c(x) − L(d).
5) Set L(d) = L_ev(d) for the next horizontal decoding.
6) If there have been enough iterations to yield a reliable decision, go to step 7; otherwise, go to step 2.
7) The soft output is L(d̂) = L_c(x) + L_eh(d) + L_ev(d).
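As a concrete illustration of the extrinsic-information exchange in the seven steps above, the sketch below decodes a toy product code whose rows and columns are single-parity-check (SPC) codes. The SPC components, the function names `spc_extrinsic` and `decode_product`, and all numeric values are assumptions of this sketch, chosen only because the soft-in/soft-out SPC decoder is a one-line formula; the paper does not specify the component codes.

```python
# Iterative decoding of a toy product code with single-parity-check rows
# and columns (a hypothetical example, not the paper's B3G configuration).
import numpy as np

def spc_extrinsic(L):
    """Extrinsic LLR for each bit of one SPC codeword (tanh rule).
    Assumes no input LLR is exactly zero (true for the demo below)."""
    t = np.tanh(np.clip(L, -20, 20) / 2.0)
    prod = np.prod(t)
    # extrinsic for bit i uses all OTHER bits: total product divided by t_i
    return 2.0 * np.arctanh(np.clip(prod / t, -0.999999, 0.999999))

def decode_product(Lc, n_iter=4):
    """Lc: channel LLRs for an (n1 x n2) codeword array (data + parity)."""
    Leh = np.zeros_like(Lc)   # horizontal extrinsic information
    Lev = np.zeros_like(Lc)   # vertical extrinsic information
    for _ in range(n_iter):
        # steps 2/3: decode each row, a priori = vertical extrinsic
        for r in range(Lc.shape[0]):
            Leh[r, :] = spc_extrinsic(Lc[r, :] + Lev[r, :])
        # steps 4/5: decode each column, a priori = horizontal extrinsic
        for c in range(Lc.shape[1]):
            Lev[:, c] = spc_extrinsic(Lc[:, c] + Leh[:, c])
    # step 7: soft output = channel LLR + both extrinsic terms
    return Lc + Leh + Lev

# all-zero 3x3 codeword sent as BPSK (+1); one badly corrupted position
Lc = np.full((3, 3), 4.0)
Lc[1, 1] = -1.0
bits = (decode_product(Lc) < 0).astype(int)  # hard decisions
```

After a few iterations the two extrinsic terms reinforce the corrupted position, so the hard decisions recover the all-zero codeword even though its channel LLR alone points the wrong way.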

B3G Mobile System Decoder Design
The algorithms for the decoder can be divided into two categories: (1) Log-MAP (log-domain Maximum A Posteriori) and (2) SOVA (Soft Output Viterbi Algorithm).

Log-MAP Algorithm
Before explaining the MAP decoding algorithm, we introduce some notation:
s^S(e): the starting state of edge e.
s^E(e): the ending state of edge e.
d_k(e): the information word on edge e, containing k_0 bits; u_i stands for the individual information bits.
x_k(e): the codeword on edge e, containing n_0 bits. We assume here that the received signal is y_k = x_k + n (transmitted symbols plus noise).
p(y_k | x_k(e)): the probability of receiving y_k if x_k was transmitted.
A_k(·), B_k(·): the forward and backward path metrics at time k.

The branch metric of edge e at time k is M_k(e) = p(y_k | x_k(e)) · Pr{d_k(e)}. The forward and backward recursions are

A_k(s) = Σ_{e: s^E(e) = s} A_{k−1}(s^S(e)) · M_k(e),   k = 1, …, N,

B_k(s) = Σ_{e: s^S(e) = s} B_{k+1}(s^E(e)) · M_{k+1}(e),   k = N − 1, …, 0.
Suppose the decoder starts and ends in known states: A_0(s) and B_N(s) are then set to 1 for the known state and to 0 for all other states. If the final state of the trellis is unknown, B_N(s) is set to the same constant for every state. The joint probability of edge e at time k is

σ_k(e) = A_{k−1}(s^S(e)) · M_k(e) · B_k(s^E(e)),

and the a posteriori LLR of an information bit is obtained by summing σ_k(e) over the edges on which that bit is 0, summing over the edges on which it is 1, and taking the logarithm of the ratio.
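A minimal numerical sketch of the forward/backward recursions and the joint probability, run in the probability domain on a hypothetical 2-state accumulator trellis (x_k = s_{k−1} XOR u_k, next state = x_k) with a Gaussian branch metric. The trellis, the name `map_decode`, and the parameter values are illustrative assumptions of this sketch, not the B3G configuration.

```python
# Probability-domain MAP (BCJR-style) decoding on a toy 2-state trellis.
import math

def map_decode(y, sigma=1.0):
    """y: received BPSK values (bit 0 -> +1, bit 1 -> -1) plus noise."""
    N = len(y)
    def gamma(k, x):                    # branch metric M_k(e) = p(y_k | x_k(e))
        s = 1.0 - 2.0 * x               # BPSK mapping of the code bit
        return math.exp(-((y[k] - s) ** 2) / (2 * sigma ** 2))

    A = [[0.0, 0.0] for _ in range(N + 1)]
    A[0][0] = 1.0                       # known starting state 0
    B = [[0.0, 0.0] for _ in range(N + 1)]
    B[N] = [0.5, 0.5]                   # final state unknown: uniform
    for k in range(N):                  # forward: A_k(s) = sum A_{k-1} * M_k
        for s0 in (0, 1):
            for u in (0, 1):
                x = s0 ^ u              # code bit; also the next state
                A[k + 1][x] += A[k][s0] * gamma(k, x)
    for k in range(N - 1, -1, -1):      # backward: B_k(s) = sum M_{k+1} * B_{k+1}
        for s0 in (0, 1):
            for u in (0, 1):
                x = s0 ^ u
                B[k][s0] += gamma(k, x) * B[k + 1][x]
    llrs = []
    for k in range(N):                  # joint prob: A_{k-1} * M_k * B_k, per bit
        p = [0.0, 0.0]
        for s0 in (0, 1):
            for u in (0, 1):
                x = s0 ^ u
                p[u] += A[k][s0] * gamma(k, x) * B[k + 1][x]
        llrs.append(math.log(p[0] / p[1]))  # L(u_k|y) = ln P(u=0|y)/P(u=1|y)
    return llrs
```

For the received sequence [+1, −1, +1] this trellis implies the information bits 0, 1, 1, and the three LLR signs come out accordingly.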

Soft Output Viterbi Algorithm (SOVA)
There are two modifications compared to the classical Viterbi algorithm. One is that the path metric is modified to account for the extrinsic information; this is similar to the metric calculation in the Log-MAP algorithm. The other is that the algorithm is modified to calculate the soft output. For each state in the trellis, the metric M(s) is calculated for both merging paths, the path with the higher metric is selected as the survivor, and a pointer to the previous state along the surviving path is stored for that state. To later compute L(u_k|y), the metric difference Δ between the discarded and surviving paths is also stored.
In addition, a binary update sequence of δ + 1 bits is stored, indicating the last δ + 1 bits that generated the discarded path. After the ML path is found, the update sequences and metric differences are used to calculate L(u_k|y). For each bit u_k on the ML path, we consider the paths merging with the ML path at times i = k, …, k + δ. For each merging path in that set, we trace back to find which value of the bit u_k generated that path; if that value differs from the ML decision and the stored metric difference Δ_i is smaller than the current minimum Δ_min, we set Δ_min = Δ_i. The magnitude of the soft output L(u_k|y) is then given by Δ_min.
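The merge-and-update procedure can be sketched on the same hypothetical 2-state accumulator trellis (x_k = s_{k−1} XOR u_k); here the window δ effectively spans the whole block, and `sova_decode`, the correlation metric, and the explicit rebuilding of each discarded path are this sketch's own choices rather than the paper's implementation.

```python
# SOVA sketch on a toy 2-state trellis; y = received BPSK values (bit 0 -> +1).
def sova_decode(y):
    N = len(y)
    NEG = float("-inf")
    M = [0.0, NEG]                        # path metrics; known start state 0
    surv, delta = [], []                  # survivor pointers, merge differences
    for k in range(N):
        newM, sv, dl = [NEG, NEG], [0, 0], [0.0, 0.0]
        for s1 in (0, 1):                 # end state equals the code bit x_k
            cands = sorted(((M[s0] + y[k] * (1 - 2 * s1), s0) for s0 in (0, 1)),
                           reverse=True)
            newM[s1], sv[s1] = cands[0]
            dl[s1] = cands[0][0] - cands[1][0]   # Delta at this merge
        M = newM
        surv.append(sv)
        delta.append(dl)
    states = [0] * (N + 1)
    states[N] = 0 if M[0] >= M[1] else 1  # trace back the ML path
    for k in range(N - 1, -1, -1):
        states[k] = surv[k][states[k + 1]]
    bits = [states[k] ^ states[k + 1] for k in range(N)]  # u_k = s_{k-1} ^ x_k
    rel = [float("inf")] * N              # inf = no competitor seen yet
    for i in range(N):                    # reliability update at each ML merge
        d = delta[i][states[i + 1]]
        alt = [0] * (i + 2)               # rebuild the discarded path here
        alt[i + 1] = states[i + 1]
        alt[i] = 1 - surv[i][states[i + 1]]
        for k in range(i - 1, -1, -1):
            alt[k] = surv[k][alt[k + 1]]
        for k in range(i + 1):            # bits where the two paths disagree
            if (alt[k] ^ alt[k + 1]) != bits[k]:
                rel[k] = min(rel[k], d)
    return [(1 - 2 * b) * r for b, r in zip(bits, rel)]   # signed soft outputs
```

For the received sequence [+1, −1, +1] the ML bits are 0, 1, 1, and each reliability ends up as the minimum metric difference over the merges whose discarded path flips that bit.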

Comparison of the Decoding Algorithms
In SOVA, the ML path is found first. The recursion used is identical to the one used for calculating α in the Log-MAP algorithm. Along the ML path, a hard decision on the bit u_k is made, and |L(u_k|y)| is the minimum metric difference between the ML path and a path that merges with the ML path and is generated with the opposite bit value u_k. In the Log-MAP calculation of L(u_k|y), one path is the ML path and the other is the most likely path that gives the opposite u_k. In SOVA, the difference is instead calculated between the ML path and the most likely path that merges with the ML path and gives the opposite u_k; that merging path may not be the most likely path giving the opposite u_k overall. Compared to the Log-MAP output, the SOVA output is not biased, but it is noisier. SOVA and Log-MAP produce the same hard decisions, and the magnitude of the soft decisions of SOVA will be either identical to or higher than that of Log-MAP. If the most likely path giving the opposite hard decision for u_k has survived and merges with the ML path, the two algorithms give identical soft outputs for that bit; if that path does not survive, the path on which the opposite u_k is based is less likely than the path that should have been used.
The forward recursion in Log-MAP and SOVA is identical, but the trace-back depth in SOVA is less than or equal to the backward recursion depth. Log-MAP is the slowest of the three algorithms, but it has the best performance among them. In our TD-CDMA simulation, we implemented both the Log-MAP and the SOVA decoder to obtain either the best performance (with Log-MAP) or the fastest speed (with SOVA). Figure 3 gives the performance comparison between Log-MAP and SOVA; it shows that Log-MAP is 1.2 dB better than SOVA at a BER of 10^-2, with a code block size of 260 bits and 7 iterations.
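Much of the speed/performance gap comes down to one operation: Log-MAP repeatedly evaluates the exact Jacobian logarithm max*(a, b) = ln(e^a + e^b), while Max-Log-MAP (whose behavior SOVA approaches) keeps only max(a, b). A small sketch of the two and of where the dropped correction term matters; the function names and sample values are illustrative:

```python
# Log-MAP's exact max* operation versus the Max-Log-MAP approximation.
import math

def max_star(a, b):
    """Exact Jacobian logarithm: ln(e^a + e^b), computed stably."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """Max-Log-MAP approximation: the correction term is dropped."""
    return max(a, b)

# the approximation error is largest when the two metrics are close
err_close = max_star(1.0, 1.0) - max_log(1.0, 1.0)   # equals ln 2
err_far = max_star(8.0, 0.0) - max_log(8.0, 0.0)     # nearly zero
```

Because competing path metrics are often close at low SNR, dropping the correction term costs measurable BER performance while saving the exponential/logarithm (or its lookup-table substitute) on every trellis step.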
The code block size, or interleaver size, also affects the decoding performance; the BER performance comparison for various code block sizes is shown in Figure 4. The longer the code block, the better the performance, but a large code block requires more computing time, and the computational complexity increases exponentially. Since the turbo decoder is iterative, the decoding performance is also affected by the number of iterations. Figure 5 shows that the BER performance improves as the number of turbo iterations increases; however, the required computation also increases. No significant performance improvement is observed after the sixth iteration.

Conclusions
We have illustrated the turbo decoder principle and the derivation of the Log-MAP and SOVA algorithms. The Log-MAP algorithm is shown to achieve the best performance with a good complexity tradeoff. The SOVA algorithm has lower computational complexity, with about 1.2 dB performance degradation compared with Log-MAP at a BER of 10^-2. We also demonstrate that the performance of the turbo code improves with the interleaver size and the number of iterations in the turbo decoder. Finally, it is also shown that the performance is affected by the scaling of the input soft-bit power, but the effect is negligible when the scaling factor is larger than 0.8.
The channel pre-detection LLR value, L_c(x), is measured by forming the logarithm of the ratio of the likelihoods ℓ1 and ℓ2, seen in Figure 1. The output L(d̂) of the Figure 2 decoder is made up of the LLR from the detector, L_c(x), and the extrinsic LLR output, representing knowledge gleaned from the decoding process. As illustrated in Figure 2, for iterative decoding the extrinsic likelihood is fed back to the decoder input, to serve as a refinement of the a priori value for the next iteration.
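For BPSK (±1) over an AWGN channel with noise variance σ², forming the logarithm of the ratio of the two Gaussian likelihoods reduces to the closed form L_c(x) = 2x/σ². A quick numerical check of that reduction; the sample values and the name `llr_from_ratio` are this sketch's own:

```python
# Channel LLR for BPSK in AWGN: the likelihood ratio reduces to 2x/sigma^2.
import math

def llr_from_ratio(x, sigma):
    """Form the log of the ratio of the two Gaussian likelihoods directly."""
    l1 = math.exp(-((x - 1.0) ** 2) / (2 * sigma ** 2))   # likelihood of +1
    l2 = math.exp(-((x + 1.0) ** 2) / (2 * sigma ** 2))   # likelihood of -1
    return math.log(l1 / l2)

x, sigma = 0.3, 0.8
assert abs(llr_from_ratio(x, sigma) - 2 * x / sigma ** 2) < 1e-9
```

Expanding the two squared terms in the exponents cancels everything except the cross term, which is why the closed form is linear in the received value x.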

Figure 2. Two-dimensional product code.

Figure 3. BER performance of the Log-MAP algorithm vs. the SOVA algorithm.
Figure 4. BER performance comparison for various code block sizes.
Figure 5. BER performance for various numbers of turbo iterations.