
The water-filling algorithm enables an energy-efficient OFDM-based transmitter by maximizing the capacity of a frequency selective fading channel. However, this optimal strategy requires perfect channel state information at the transmitter, which is not realistic in wireless applications. In this paper, we propose opportunistic error correction to maximize the data rate of OFDM systems without this limitation. The key point of this approach is to reduce the dynamic range of the channel by discarding the part of the channel in deep fading. Instead of decoding all the information from all the sub-channels, we only recover the data via the strong sub-channels. Just like the water-filling principle, we increase the data rate over the stronger sub-channels by sacrificing the weaker ones. In such a case, the total data rate over a frequency selective fading channel can be increased. Correspondingly, the noise floor can be raised while still achieving a certain data rate, compared to the traditional coding scheme. This leads to an energy-efficient receiver. However, it is not clear whether this method has advantages over the joint coding scheme in a narrow-band wireless system (i.e. a channel with a low dynamic range), which will be investigated in this paper.

Wireless communication takes place over multipath fading channels [1-3]. Typically, the signal reaches the receiver via multiple paths with different delays and gains, which induces Inter-Symbol Interference (ISI). To mitigate the ISI effect with a relatively simple equalizer in the wireless receiver, Orthogonal Frequency Division Multiplexing (OFDM) has become a fruitful approach to communicating over such channels [2,4,5]. The key idea of OFDM is to divide the whole transmission band into a number of parallel ISI-free sub-channels, which can be equalized by a single-tap equalizer using scalar division [6,7]. The information is transmitted over those sub-channels. Each OFDM sub-channel has a gain that is a linear combination of the dispersive channel taps. When a sub-channel has nulls (deep fades), reliable detection of the symbols carried by that faded sub-channel becomes difficult.
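The single-tap equalization that makes OFDM attractive can be illustrated with a short numerical sketch (NumPy, with hypothetical channel taps and symbol values, noise omitted): the cyclic convolution introduced by the channel becomes an element-wise multiplication in the frequency domain, so each sub-channel is equalized by one scalar division.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64                                   # number of sub-channels (assumed)
h = rng.normal(size=4) + 1j * rng.normal(size=4)
h /= np.linalg.norm(h)                   # normalize channel energy to 1

# One QPSK symbol per sub-channel.
X = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)
x = np.fft.ifft(X)                       # time-domain OFDM symbol

# The cyclic prefix turns the linear convolution with the channel taps
# into a cyclic convolution.
h_pad = np.zeros(N, dtype=complex)
h_pad[:len(h)] = h
y = np.array([sum(h_pad[t] * x[(n - t) % N] for t in range(N)) for n in range(N)])

# In the frequency domain the channel is diagonal: R_i = H_i * X_i.
H = np.fft.fft(h_pad)                    # sub-channel gains
R = np.fft.fft(y)

# Single-tap equalizer: one scalar division per sub-channel.
X_hat = R / H

assert np.allclose(X_hat, X)
```

Sub-channels where H_i is close to zero (deep fades) make this division amplify the noise, which is exactly the problem addressed in this paper.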

With perfect Channel State Information (CSI) at the transmitter, the maximum data rate of a frequency selective fading channel can be achieved by the water-filling power allocation algorithm [...].

Without CSI at the transmitter, the transmitted power is equally allocated to each sub-channel. To achieve reliable communication, error correction codes are usually employed in OFDM systems [8-10]. Over a finite block length, coding jointly yields a smaller error probability than can be achieved by coding separately over the sub-channels at the same rate [...]. For the separate coding scheme, the noise-floor requirement is even stricter if all received packets have to be decodable.

In a single-user scenario, the noise mainly comes from the hardware, e.g. the RF front-end and the Analog-to-Digital Converter (ADC) in the receiver. Given a practical wireless system, the noise floor is therefore largely fixed. In that case, the maximum data rate of the wireless channel depends on the dynamic range of the channel: a higher dynamic range means a lower data rate. Without CSI at the transmitter, we have two approaches to increasing the data rate over a channel with a high dynamic range.

• One is to reduce the noise floor in the RF front-end and the ADCs. That leads to high power consumption in the receiver. For the RF front-end, its power consumption increases by 3 dB if the noise floor decreases by 3 dB [...].

• The other one is to reduce the dynamic range of the channel by discarding the part of the channel in deep fading. Instead of decoding all the information from all the sub-channels, we only recover the data via the strong sub-channels. Just like the water-filling principle, we increase the data rate over the stronger sub-channels by sacrificing the weaker ones. In such a case, the total data rate over a frequency selective fading channel can be increased. Correspondingly, the noise floor can be raised while still achieving a certain data rate, compared to the traditional coding scheme. That leads to an energy-efficient receiver.

Without CSI at the transmitter, the joint coding scheme does not allow us to give up any part of the channel, since it treats every sub-channel as equally important. Therefore, we transmit each packet over a single sub-channel. We take a frequency selective fading channel as an example. Each packet is encoded by an error correction code, and it can be decoded successfully when its Signal-to-Noise Ratio (SNR) is equal to or larger than a threshold \gamma_1. We assume that the maximum noise floor is N_1 if we want all the packets to be decoded. In such a case, the total data rate is equal to R_1. However, from this figure, we can see that the weakest sub-channel costs a large part of the dynamic range. By discarding this sub-channel, the dynamic range of the channel is reduced to around 8 dB. To compensate for this discarded sub-channel, we use a relatively higher code rate to encode each packet, which can then be decoded if its SNR is at least \gamma_2. With this scheme, the total data rate is equal to R_2. In this example, if R_2 > R_1, the total data rate is increased. Given the same noise floor, this happens if \gamma_2 - \gamma_1 is smaller than the reduction of the dynamic range (i.e. 11 dB in this example). Otherwise, there is no gain from discarding the weak sub-channels. Obviously, the reduction of the dynamic range is larger than \gamma_2 - \gamma_1 in this example. Given the same data rate (i.e. R_2 = R_1), discarding this sub-channel allows us to increase the noise floor in this example. Equivalently, the power consumption in the receiver is decreased.
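The trade-off in this example can be checked with a few lines of arithmetic. The energies and SNR thresholds below are hypothetical stand-ins, not the paper's values; the point is only the comparison rule: discarding the weakest sub-channel helps when the dynamic-range reduction exceeds the extra SNR demanded by the higher code rate.

```python
import numpy as np

# Hypothetical sub-channel energies (linear scale); the weakest one
# dominates the dynamic range, as in the example above.
E = np.array([1.0, 0.9, 0.5, 0.4, 0.05])

gamma1 = 2.0    # SNR threshold of the lower-rate code (assumed)
gamma2 = 4.0    # SNR threshold of the higher-rate code (assumed)

# Keeping every sub-channel: the weakest one sets the maximum noise floor.
nf_all = E.min() / gamma1

# Discarding the weakest sub-channel: the second-weakest sets the noise
# floor, but the higher code rate demands the larger threshold gamma2.
nf_drop = np.sort(E)[1] / gamma2

dr_reduction_db = 10 * np.log10(np.sort(E)[1] / E.min())
rate_penalty_db = 10 * np.log10(gamma2 / gamma1)

# Discarding pays off exactly when the dynamic-range reduction exceeds
# the extra SNR demanded by the higher code rate.
assert (nf_drop > nf_all) == (dr_reduction_db > rate_penalty_db)
```

With these invented numbers the reduction (about 9 dB) outweighs the 3 dB rate penalty, so the allowable noise floor rises, which is the energy saving the paper is after.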

Without CSI at the transmitter, the consequence of discarding the weak sub-channels is the loss of the packets that are transmitted over those sub-channels. Two solutions can help us to compensate for it. One is to retransmit the lost packets. If the channel changes fast, this approach becomes inefficient and may cost more than we gain from sacrificing the weak sub-channels. Also, a feedback channel is required, which is expensive in a wireless system. The other approach is to use erasure codes. In such a case, we treat the lost packets as erasures. With the assistance of a suitable erasure code, we can achieve reliable communication with an energy-efficient receiver by discarding the part of the channel in deep fading. Hence, we propose an energy-efficient error correction scheme based on erasure codes. To apply it to an OFDM-based wireless system, we divide a block of source bits into a set of packets. Treating each packet as a unit, they are encoded by an erasure code. Each erasure-encoded packet is protected by an error correction code that makes the noisy wireless channel behave like an erasure channel. Afterwards, each packet is transmitted over a sub-channel; thus, multiple packets are transmitted simultaneously, using frequency division multiplexing. With the CSI at the receiver, the receiver discards the packets that are transmitted over the sub-channels in deep fading and only decodes the packets with high energy. Erasure codes allow us to reconstruct the original file using only the surviving packets. Therefore, this scheme is called opportunistic error correction.

As mentioned earlier, the joint coding scheme works better than separate coding over frequency selective fading channels, but it is not immediately clear whether opportunistic error correction can endure a higher level of noise floor than joint coding. In [...], opportunistic error correction has shown a gain of around 8.5 dB over Channel Model A [...].

The paper is organized as follows. Opportunistic error correction is first described: we explain why this new method is suitable for OFDM systems and how it works. In Section IV-A, we describe the system model by showing how we apply this novel scheme in OFDM systems. After that, we compare its performance with FEC layers from WLAN systems over a TGn channel [...].

OFDM enables a relatively easy implementation of wireless receivers over frequency selective fading channels [...]. Basically, there are two modes to transmit packets over the sub-channels:

• Mode I is to transmit a packet over a single sub-channel. In this case, the coding is done separately over all the sub-channels.

• Mode II is to transmit a packet over all the sub-channels. With this method, the coding is performed jointly over all the sub-channels.

Both transmission modes have advantages and disadvantages. Using Mode I, the receiver can predict whether a received packet is decodable, since each sub-channel is modeled as a flat-fading channel. The packets transmitted over sub-channels with low energy can be discarded without going through the whole receiving chain; correspondingly, the processing power can be reduced. This is a desirable feature for a battery-powered receiver, and it cannot be achieved by using Mode II. But Mode I can only tolerate a lower Noise Floor (NF) than Mode II to achieve the same quality of communication. As stated earlier, a lower NF means higher power consumption in the wireless receiver, which is not favorable for a battery-powered receiver.

To have a receiver with both energy-efficient features (i.e. the low processing power of Mode I and the high noise floor tolerance of Mode II), we propose opportunistic error correction, which combines the separate coding scheme and the joint coding scheme. Opportunistic error correction is a cross coding scheme: via erasure codes, source bits are encoded jointly over all the sub-channels; then, each erasure-encoded packet is encoded individually over a single sub-channel by an error correction code. This is different from the traditional coding schemes (i.e. the separate coding scheme and the joint coding scheme).

Opportunistic error correction is specially designed for OFDM systems. It is based on erasure codes, and any erasure code can be applied in it. In this paper, we use fountain codes [...].

At the receiver side, the channel is first estimated. With the channel knowledge, the receiver decides which packets are to be decoded. The selected fountain-encoded packets then go through the error correction decoding; packets only survive if they succeed in the error correction decoder. The fountain decoder can reconstruct the original file by collecting enough packets. The number of fountain-encoded packets N_r required at the receiver is slightly larger than the number of source packets K [...]:

N_r = (1 + \varepsilon) K,

where \varepsilon is the percentage of extra packets and is called the overhead. For high throughput, \varepsilon is expected to be as small as possible. However, fountain codes (e.g. Luby-Transform (LT) codes [...]) have a non-negligible overhead at short block lengths.
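As an illustration of this overhead behaviour, the sketch below implements a random linear fountain code over GF(2), a simple relative of LT codes (the dense degree distribution and the packet size are arbitrary choices, not the paper's). Encoded packets are XORs of random subsets of the K source packets, and the decoder performs Gaussian elimination once it has collected enough linearly independent packets.

```python
import numpy as np

rng = np.random.default_rng(1)

K = 50                    # number of source packets
PKT = 16                  # bits per packet (toy size)
src = rng.integers(0, 2, (K, PKT), dtype=np.uint8)

def encode():
    """One fountain-encoded packet: XOR of a random subset of source packets."""
    mask = rng.integers(0, 2, K, dtype=np.uint8)
    if not mask.any():
        mask[rng.integers(K)] = 1
    return mask, ((mask[:, None] * src).sum(axis=0) % 2).astype(np.uint8)

def gf2_solve(A, B):
    """Solve A X = B over GF(2); raises ValueError if A has rank < K."""
    A, B = A.copy(), B.copy()
    n, k = A.shape
    row = 0
    for col in range(k):
        piv = next((r for r in range(row, n) if A[r, col]), None)
        if piv is None:
            raise ValueError("rank deficient")
        A[[row, piv]] = A[[piv, row]]
        B[[row, piv]] = B[[piv, row]]
        for r in range(n):
            if r != row and A[r, col]:
                A[r] ^= A[row]
                B[r] ^= B[row]
        row += 1
    return B[:k]

# Collect packets until the decoder succeeds (erased packets never arrive).
rows, data, decoded = [], [], None
while decoded is None:
    mask, pkt = encode()
    rows.append(mask)
    data.append(pkt)
    if len(rows) >= K:
        try:
            decoded = gf2_solve(np.array(rows, dtype=np.uint8),
                                np.array(data, dtype=np.uint8))
        except ValueError:
            pass

assert np.array_equal(decoded, src)
overhead = len(rows) / K - 1    # fraction of extra packets beyond K
```

For dense random combinations, only a couple of extra packets beyond K are typically needed; practical LT codes trade a somewhat larger overhead for much cheaper belief-propagation decoding.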

The performance of opportunistic error correction depends on its parameters (i.e. the rate of the erasure code, the rate of the error correction code and the number of discarded sub-channels). Given a set of parameters, whether it performs better than the traditional coding schemes depends on the dynamic range of the channel, which will be analyzed in the next section.

Consider a single-user OFDM system with N equally spaced orthogonal sub-channels, as shown in the corresponding figure.

The channel output can be expressed as:

y(n) = \sum_{t=0}^{L-1} h_t x(n - t), (1)

where L is the number of channel taps, h_t denotes the t-th channel tap and x(n) is the transmitted symbol. The symbols x(n) are i.i.d. uniformly distributed random variables with zero mean and a variance of 1. The elements of the channel vector

h = [h_0, h_1, \ldots, h_{L-1}]

are mutually independent. From the central limit theorem, the fading of each sub-channel (a linear combination of the channel taps) can be modeled as a Gaussian-distributed random variable with zero mean and a variance equal to the total channel energy. In this paper, we normalize the channel energy to 1 (i.e. \sum_t E[|h_t|^2] = 1). So, each sub-channel fading is Gaussian with zero mean and unit variance.

The received symbol is defined by:

r(n) = \sum_{t=0}^{L-1} h_t x(n - t) + w(n), (2)

where w(n) is the channel noise in the time domain. We assume that w(n) is additive white Gaussian noise with zero mean and a variance of \sigma^2. Due to the additional cyclic prefix in each OFDM symbol, the linear convolution in Equation (2) can be considered as a cyclic convolution [...]. In the frequency domain, the received symbol over sub-channel i can then be written as:

R_i = H_i X_i + W_i, (3)

where H_i is the fading over sub-channel i, defined by:

H_i = \sum_{t=0}^{L-1} h_t e^{-j 2 \pi i t / N}, (4)

and W_i is the noise in the frequency domain, expressed as:

W_i = \frac{1}{\sqrt{N}} \sum_{n=0}^{N-1} w(n) e^{-j 2 \pi i n / N}. (5)

According to the central limit theorem, W_i is a Gaussian-distributed random variable with zero mean and a variance of \sigma^2. Thus, each sub-channel has the same noise floor, but its SNR is different:

SNR_i = \frac{E_i}{\sigma^2}, (6)

where E_i is the energy of sub-channel i, defined by:

E_i = |H_i|^2, (7)

and \sigma^2 is the noise floor, defined by:

\sigma^2 = E[|w(n)|^2]. (8)
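These per-sub-channel quantities can be sketched numerically (NumPy, with a hypothetical tap count and noise floor): the FFT of the tap vector gives the sub-channel fadings, their squared magnitudes give the energies, and with the channel energy normalized to 1 the energies average to 1 by Parseval's theorem.

```python
import numpy as np

rng = np.random.default_rng(2)

N = 64                        # number of sub-channels (assumed)
L = 8                         # number of channel taps (assumed)
h = rng.normal(size=L) + 1j * rng.normal(size=L)
h /= np.linalg.norm(h)        # channel energy normalized to 1

sigma2 = 0.01                 # noise floor (assumed)

H = np.fft.fft(h, N)          # fading of each sub-channel
E = np.abs(H) ** 2            # sub-channel energies
snr = E / sigma2              # same noise floor, different SNR per sub-channel

# With the channel energy normalized to 1, the sub-channel energies
# average to 1 (Parseval's theorem).
assert np.isclose(E.mean(), 1.0)

# Dynamic range of this channel realization: max-to-min energy ratio in dB.
dynamic_range_db = 10 * np.log10(E.max() / E.min())
assert dynamic_range_db > 0
```

The same noise floor sigma2 applies to every sub-channel, so the SNR spread across sub-channels is set entirely by the energy spread, i.e. by the dynamic range.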

Error correction codes can be applied to mitigate the effect of deep fades. Different coding schemes require different levels of NF (i.e. \sigma^2) to decode successfully. Assume that the source packets are encoded by one of the following coding schemes and then transmitted over the system as shown in the corresponding figure.

• Coding I is to encode them by a Low-Density Parity-Check (LDPC) code [...]. Coding I is a separate coding scheme, as each packet is transmitted over a single sub-channel.

• Coding II is to encode them by the same LDPC code as Coding I. But Coding II is a joint coding scheme, as each packet is transmitted over all the sub-channels.

• Coding III is to encode them by opportunistic error correction based on LT codes. We define the rate of the LT code as R_f. Each fountain-encoded packet is protected by an LDPC code with a rate of R_c and transmitted over a single sub-channel. To have the same rate as Coding I and II, the number of discarded sub-channels l can be expressed as:

l = N \left(1 - (1 + \varepsilon) R_f\right), (9)

where N is the number of sub-channels, \varepsilon is the overhead of the LT code, and R_f R_c equals the code rate of Coding I and II.

We assume that the LDPC code used in Coding I and II needs an SNR of at least \gamma_1 to achieve successful decoding (i.e. error-free reception) over the AWGN channel. For Coding III, we assume that each fountain-encoded packet can be received correctly if its SNR satisfies

SNR \geq \gamma_2. (10)

Because the LDPC code used in Coding III has a higher rate than the one used in Coding I and II, \gamma_2 > \gamma_1.

For convenience in the analysis, we sort the sub-channels by their energy:

E_{(1)} \leq E_{(2)} \leq \cdots \leq E_{(N)}. (11)

The dynamic range of a wireless channel is defined as:

\Delta_E = 10 \log_{10} \frac{E_{(N)}}{E_{(1)}} \ \text{dB}. (12)

1) Coding I: To have all the packets decodable, the maximum NF for Coding I should be:

\sigma_{I}^2 = \frac{E_{(1)}}{\gamma_1}. (13)

2) Coding II: The maximum NF for Coding II is not as straightforward as for Coding I. As the joint coding scheme exploits the fact that the strong sub-channels can help the weak ones, we use \gamma_1 to classify the weak and strong sub-channels: sub-channels with SNR_i < \gamma_1 are weak and sub-channels with SNR_i \geq \gamma_1 are strong. Besides, we assume that Coding II can decode the received packets correctly (i.e. error-free) if the number of weak sub-channels is no more than m. So, the maximum NF for Coding II is:

\sigma_{II}^2 = \frac{E_{(m+1)}}{\gamma_1}. (14)

As E_{(m+1)} \geq E_{(1)}, we have \sigma_{II}^2 \geq \sigma_{I}^2. In other words, Coding II (i.e. the joint coding scheme) does not perform worse than Coding I (i.e. the separate coding scheme). If E_{(m+1)} > E_{(1)}, we have \sigma_{II}^2 > \sigma_{I}^2. In the case of a flat-fading channel or a low dynamic range, where E_{(m+1)} = E_{(1)}, we have \sigma_{II}^2 = \sigma_{I}^2.

3) Coding III: With this scheme, each fountain-encoded packet can be received correctly if its SNR is not smaller than \gamma_2. Because the l weakest sub-channels can be discarded, the maximum NF for Coding III is expressed as:

\sigma_{III}^2 = \frac{E_{(l+1)}}{\gamma_2}. (15)

The key idea of Coding III is to exchange the code rate of the error correction codes against the number of sub-channels to be discarded. If the price paid by using a relatively higher rate of error correction codes can be compensated by the reduced dynamic range, opportunistic error correction (i.e. Coding III) does not perform worse than the traditional coding schemes (i.e. Coding I and II). Equivalently, \sigma_{III}^2 \geq \sigma_{II}^2 if

\frac{E_{(l+1)}}{E_{(m+1)}} \geq \frac{\gamma_2}{\gamma_1}. (16)

Obviously, \sigma_{III}^2 < \sigma_{II}^2 if this condition does not hold. That might be the case for channels with a low dynamic range; in such a case, there is no reason to apply opportunistic error correction in wireless applications. In the next section, we will investigate this with simulation results.
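The comparison of the three schemes can be made concrete with a small sketch, under the assumptions used above: the maximum noise floor of Coding I is set by the weakest sub-channel energy and \gamma_1, Coding II by the (m+1)-th weakest energy and \gamma_1, and Coding III by the (l+1)-th weakest energy and \gamma_2. All numbers below are invented for illustration.

```python
import numpy as np

# Hypothetical sorted sub-channel energies, weakest first (invented values).
E = np.array([0.02, 0.1, 0.3, 0.6, 0.9, 1.0, 1.3, 1.8])

gamma1 = 2.0   # SNR needed by the lower-rate LDPC code (assumed)
gamma2 = 4.0   # SNR needed by the higher-rate LDPC code (assumed)
m = 1          # number of weak sub-channels Coding II can ride over (assumed)
l = 3          # number of sub-channels Coding III may discard (assumed)

nf_I = E[0] / gamma1      # every packet must survive the weakest sub-channel
nf_II = E[m] / gamma1     # joint coding tolerates m weak sub-channels
nf_III = E[l] / gamma2    # the l weakest sub-channels are simply discarded

# Coding III wins when the energy gained by discarding more sub-channels
# outweighs the higher SNR threshold gamma2 of its higher-rate code.
assert nf_II >= nf_I
assert (nf_III >= nf_II) == (E[l] / E[m] >= gamma2 / gamma1)
```

With these values Coding III tolerates the highest noise floor; flattening the energy profile (a low dynamic range) shrinks E[l]/E[m] toward 1 and the advantage disappears, matching the discussion above.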

In this section, we analyze the performance of opportunistic error correction in the simulation [...].

The opportunistic error correction layer is based on the fountain codes explained in the previous section. This proposed cross layer can be applied in any OFDM-based wireless system. In this paper, the IEEE 802.11a system is taken as an example of an OFDM system.

The transmitter chain is depicted in the corresponding figure.

At the receiver side, we assume that synchronization and channel estimation are perfect in the simulation. If the SNR of a sub-channel is equal to or above the threshold, the received fountain-encoded packet goes through the LDPC decoding; otherwise, it is discarded. This means that the receiver is allowed to discard low-energy sub-channels (i.e. packets) to lower the processing power consumption. After the LDPC decoding, the CRC checksum is used to discard erroneous packets. As only packets with a high SNR are processed by the receiver, this does not happen often. When the receiver has collected enough fountain-encoded packets, it starts to recover the source data.
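The receiver-side packet flow described above can be sketched as follows. The SNR threshold, the packet contents and the CRC-32 used here are stand-ins (the paper uses a 7-bit CRC and LDPC decoding); the sketch only shows the filtering order: cheap SNR test first, CRC check after decoding, and stopping once enough packets survive.

```python
import zlib

GAMMA2_DB = 6.0        # SNR threshold for decodable packets (assumed value)
NEEDED = 4             # fountain packets required to rebuild the source (toy)

def crc_ok(payload: bytes, crc: int) -> bool:
    # Stand-in integrity check; the paper uses a 7-bit CRC, we use CRC-32.
    return zlib.crc32(payload) == crc

good = b"data"
# Each received item: (sub-channel SNR in dB, decoded payload, received CRC).
received = [
    (12.0, good, zlib.crc32(good)),
    (2.0, b"noisy", 0),             # weak sub-channel: dropped before decoding
    (9.0, b"corrupt", 123),         # decoded, but fails the CRC check
    (15.0, good, zlib.crc32(good)),
    (8.0, good, zlib.crc32(good)),
    (11.0, good, zlib.crc32(good)),
]

survivors = []
for snr_db, payload, crc in received:
    if snr_db < GAMMA2_DB:
        continue                    # skip the whole decoding chain: saves power
    if not crc_ok(payload, crc):
        continue                    # erroneous after (stand-in) FEC decoding
    survivors.append(payload)
    if len(survivors) >= NEEDED:
        break                       # enough fountain-encoded packets collected

assert len(survivors) == NEEDED
```

The early SNR test is where the processing-power saving comes from: packets on faded sub-channels never touch the decoder at all.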

In this section, we compare three FEC schemes in simulation as follows:

• FEC I: LDPC codes, at the same code rate and with interleaving, from the IEEE 802.11n standard [...].

• FEC II: fountain codes with the (175,255) LDPC code [...] plus 7-bit CRC, using the transmission Mode I.

• FEC III: fountain codes with the (175,255) LDPC code plus 7-bit CRC, using the transmission Mode II.

The three FEC schemes are simulated as a function of the dynamic range and/or the bandwidth BW by transmitting 1000 bursts of data (i.e. around 100 million bits) over the TGn channel. Each burst consists of 583 source packets with a length of 168 bits. With the same code rate, the source packets are encoded by FEC I, II and III, respectively. Afterwards, they are mapped into QAM-16 symbols before the OFDM modulation.

For the case of FEC II and III, each burst is encoded by an LT code (designed using the parameters in [...]), which allows the loss of a fraction of the transmitted packets. In FEC II, we transmit one packet per sub-channel; in this case, up to 10 sub-channels can be discarded (i.e. 21% of the 48 data sub-channels). In FEC III, we transmit each fountain-encoded packet over all the data sub-channels. Similar to FEC II, we are allowed to have a 21% packet loss in FEC III.

In total, we compare them under six situations: the flat-fading channel (i.e. the AWGN channel) and five TGn channels with increasing dynamic range.

FEC II starts to show its advantage over the joint coding schemes (i.e. FEC I and III) when the dynamic range is higher than 10 dB.

• Compared to FEC I at a low BER, FEC II has an SNR gain of around 1 dB, around 6 dB, around 10.5 dB and around 13.5 dB at the four increasing values of the dynamic range, respectively.

• Compared to FEC III at the error-free quality, FEC II has an SNR gain of 1 dB, 3 dB, 7 dB and 11 dB at the four increasing values of the dynamic range, respectively. The performance of FEC III degrades (i.e. an SNR loss of 4 dB) as the dynamic range increases by 10 dB. That is less than the case of FEC I (i.e. an SNR loss of 6 dB for every 10 dB increase of the dynamic range).

Therefore, we can conclude that fountain codes make error correction coding schemes more robust to variations of the dynamic range.

As mentioned before, the key point of opportunistic error correction (i.e. FEC II) is to exchange the code rate of the used error correction codes against the number of discarded sub-channels. The simulation results show that there is no benefit to this tradeoff when the dynamic range of the channel is within 10 dB. The profit starts beyond 10 dB and increases with the dynamic range.

In this part, we compare them over the TGn channel with different bandwidths: 5 MHz, 10 MHz and 20 MHz.

In general, FEC II and III perform better than FEC I at BW = 5 MHz, 10 MHz and 20 MHz. The reason is as follows. Due to the variation of the channel, a burst of data encounters several channel realizations with different dynamic ranges. For the case of FEC II and III, if some of the fountain-encoded packets are lost more often than expected in a channel with a high dynamic range, the fountain codes can still recover the original data when the other fountain-encoded packets are lost less often than expected in a channel with a low dynamic range. However, this does not apply to FEC I.

The C++ simulation results in the above section have shown the performance of opportunistic error correction in comparison with the joint coding schemes (i.e. FEC I and III) over the TGn channel with different dynamic ranges and bandwidths, respectively. C++ simulation, with its highly accurate double-precision numerical environment, is on the one hand a perfect tool for investigating the algorithms. On the other hand, many imperfections of the real world are neglected (e.g. perfect synchronization and channel estimation are assumed in Section IV, which does not happen in the real world). So, simulation may show a too optimistic receiver performance. In this section, we evaluate the performance in practice to investigate whether opportunistic error correction is more robust to the real world's imperfections.

The practical measurements are done in the experimental communication test bed designed and built by the Signals and Systems Group [...].

The data is generated offline in C++. The generation consists of the random source bit selection, the FEC encoding and the digital modulation, as described in Section IV-A. The generated data is stored in a file. Server software in the transmit PC uploads the file to the Adlink PCI-7300A board, which transmits the data to the DAC (AD9761) via the FPGA board. After the DAC, the baseband analog signal is up-converted to 2.3 GHz by a quadrature modulator (AD8346) and transmitted using a conical skirt monopole antenna.

The reverse process takes place in the receiver. The received RF signal is first down-converted by a quadrature demodulator (AD8347), then filtered by an 8th-order low-pass Butterworth analog filter to remove aliasing. The baseband analog signal is quantized by the ADC (AD9238) and stored in the receive PC via the Adlink PCI board.

The received data is processed offline in C++. The receiver has to synchronize with the transmitter and estimate the channel using the preambles and the pilots, which are defined in [...].

Measurements are carried out in the corridor of the Signals and Systems Group, located on the 9th floor of the Hogekamp building at the University of Twente, the Netherlands. The measurement setup is shown in the corresponding figure.

In the simulation described in Section IV, these FEC schemes can be compared by using the same source bits: different channel bits can go through the same random frequency selective channel. However, this does not apply in the real environment. The wireless channel is time-variant even when the transmitter and the receiver are stationary (e.g. a moving elevator with closed doors can affect the channel). Hence, we should compare them by using the same channel bits.

Because not every stream of random bits is a codeword of a given coding scheme, it is not possible to derive the corresponding source bits from an arbitrary sequence of random bits, especially for the case of FEC II and FEC III. Fortunately, the decoding of FEC I is based on the parity-check matrix: any stream of random bits has a unique sequence of source bits with its corresponding syndrome matrix. The receiver can decode the received data based on both the parity-check matrix and the syndrome matrix. So, FEC I can use the same channel bits as FEC II. In such a case, they can be compared under the same channel conditions (i.e. channel fading, channel noise and the distortion caused by the hardware). Therefore, we only compare the joint coding scheme from the IEEE 802.11n standard (i.e. FEC I) with opportunistic error correction (i.e. FEC II) in the real world.
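This syndrome trick can be illustrated with a toy (7,4) Hamming code in place of the paper's LDPC code: an arbitrary bit stream c is generally not a codeword, but once the receiver knows its syndrome s = Hc, decoding works exactly as for a true codeword, here correcting a single flipped bit.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column j is the binary
# representation of j+1, so a single-bit error yields its own (1-based)
# position as the syndrome.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]], dtype=np.uint8)

def syndrome(v):
    return np.array(H @ v % 2)

rng = np.random.default_rng(3)

# An arbitrary bit stream -- generally NOT a Hamming codeword.
c = rng.integers(0, 2, 7, dtype=np.uint8)
s_ref = syndrome(c)                 # stored syndrome, shared with the receiver

# Channel flips one bit.
r = c.copy()
r[4] ^= 1

# Decode relative to the stored syndrome: the differential syndrome points
# at the flipped bit, exactly as it would for a true codeword.
diff = np.bitwise_xor(syndrome(r), s_ref)
pos = int("".join(map(str, diff)), 2) - 1   # 0-based index of the error
corrected = r.copy()
if pos >= 0:
    corrected[pos] ^= 1

assert np.array_equal(corrected, c)
```

In the measurements the same idea lets the FEC I decoder operate on channel bits that were actually produced by the FEC II encoder, so both schemes see the identical channel realization.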

In the measurements, FEC I and II are compared at the same code rate. More than 600 blocks of source packets are transmitted over the air. Each block consists of 97,944 bits. The source bits are encoded by FEC II; the encoded bits are shared by FEC I as just explained. Afterwards, they are mapped into QPSK symbols before the OFDM modulation.

Each measurement corresponds to a fixed position of the transmitter and the receiver. It is possible that some measurements fail in decoding; due to the lack of a feedback channel in the test bed, no retransmission can occur. In this paper, we assume that a measurement fails if the received data per measurement has a BER above the target when using FEC I. For the case of FEC II, we assume that the measurement fails if the packet loss is more than the expected 21%.

In total, 89 measurements have been done, with 7 blocks of data transmitted in each measurement. The estimated dynamic range of the channel over those 89 measurements falls into four groups: around 50% of the measurements are in the lowest range, around 39% in the second range, around 10% in the third range and around 1% in the highest range.

FEC II succeeds in all the measurements, but that does not happen for FEC I.

Both FEC I and II succeed in 77 measurements, where the SNR of the received signal ranges from 12 dB to 25 dB. In order to investigate whether FEC II can endure a higher level of noise floor (i.e. a lower SNR) than FEC I, we add extra white noise to the received signal in software. It is difficult to have the same SNR range in all measurements, so we evaluate the practical performance by analyzing the statistical characteristics of the measurements.

Here, we define SNR_I as the minimum SNR for FEC I to achieve the target BER or lower and SNR_II as the minimum SNR for FEC II to have the error-free quality for each measurement. The difference between SNR_I and SNR_II is expressed as:

\Delta SNR = SNR_I - SNR_II. (17)

If \Delta SNR > 0 (i.e. SNR_I > SNR_II), FEC I needs a higher SNR (i.e. a lower level of noise floor) to achieve its target BER than FEC II needs for the error-free quality. \Delta SNR < 0 describes the opposite case.

• In the case of the lowest dynamic-range group, around 80% of SNR_I is in the range of [10,12] dB and around 85% of SNR_II is in the range of [9,10] dB, as shown in the corresponding figure.

• In the case of the second group, both SNR_I and SNR_II have a wider range than in the first case, as we can see in the corresponding figure: SNR_I is in the range of [11,16] dB, while around 87% of SNR_II lies in the same range.

• In the case of the third group, SNR_I and SNR_II have different ranges for successful measurements: SNR_I lies in the range of [16,18] dB while SNR_II is in the range of [12,15] dB. That means that FEC I always needs a higher SNR to achieve its target BER than FEC II needs for the error-free quality. On average, SNR_I is around 17.4 dB, SNR_II is around 13.6 dB and \Delta SNR is around 3.8 dB. In addition, at SNR_II, FEC I still shows a relatively high average BER. For the measurements in this group, we can say that FEC II has an SNR gain of around 3.8 dB to have no bit errors with respect to FEC I at its target BER.

As mentioned earlier, FEC I fails in the measurement with the highest dynamic range, but FEC II survives. Even with extra white noise added, FEC II still has the error-free quality at SNR = 14 dB. In general, FEC II performs better than FEC I in practice. For successful measurements, their minimum SNR difference becomes larger as the dynamic range increases, which is also shown in the simulation.

Opportunistic error correction based on erasure codes is especially beneficial for OFDM systems to obtain an energy-efficient receiver. The key idea is to lower the dynamic range of the channel by discarding the part of the channel in deep fading. By transmitting one packet over a single sub-channel, erasure codes can reconstruct the original file using only the packets transmitted over the sub-channels with high energy. Correspondingly, the wireless channel can have a wire-like quality with a high mean and a low dynamic range, which allows an increase of the noise floor. Consequently, the power consumption of wireless receivers can be reduced.

Opportunistic error correction consists of erasure codes and error correction codes. In this paper, we choose LT codes to encode the source packets; then, each fountain-encoded packet is protected by the (175,255) LDPC code plus a 7-bit CRC. To investigate the performance difference between the joint coding scheme (i.e. the LDPC code from the IEEE 802.11n standard) and this cross coding scheme, we compare them in simulation over the TGn channel with different dynamic ranges, under the condition of the same code rate. Opportunistic error correction performs better than the joint coding scheme in the simulation if the dynamic range is larger than 10 dB, and their performance difference becomes larger as the dynamic range increases. Besides, the performance of the joint coding scheme mainly depends on the dynamic range, while opportunistic error correction, once beyond that point, does not show any performance loss as the dynamic range increases. Furthermore, we compare them in the experimental communication test bed. The measurement results show that opportunistic error correction works better than the joint coding scheme over the whole measured range of dynamic ranges. In other words, this cross coding scheme is more robust to the imperfections of practical systems.

The authors acknowledge the Dutch Ministry of Economic Affairs for the financial support under the IOP Generic Communication SenterNovem Program.