
This paper presents a new method of lossless image compression. An image is characterized by homogeneous parts. The high-weight bit planes, which consist of long sequences of 0s and 1s, are encoded with RLE, whereas the other bit planes are encoded with arithmetic coding (AC), using either a static or an adaptive model. By combining AC (static or adaptive) with RLE, a high degree of adaptation and compression efficiency is achieved. The proposed method is compared to both the static and adaptive models. Experimental results on a set of 12 gray-level images demonstrate that the proposed scheme achieves higher mean compression ratios than conventional arithmetic encoders.

The need for data compression grows day by day because of the fast development of data-intensive applications that place heavy demands on information storage and on the high data rates required by bandwidth-limited transmission systems such as multimedia [

Compression is the coding of data to minimize its representation by removing the redundancy present in it, keeping only enough information for the decompression phase to reconstruct the original data. The compression of images is motivated by the economic and logistic need to conserve space in storage media and save bandwidth in communication.

AC is the most powerful technique for statistical lossless encoding and has attracted much attention in recent years. It provides more flexibility and better efficiency than the celebrated Huffman coding does [


In addition, the RLE concept of recording the number of repeating bits is applied for its simplicity and efficiency.

In this paper, we propose a new method of lossless image compression that combines AC with RLE. The proposed method is compared to both the static and adaptive models.
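The bit-plane splitting on which the method routes planes to RLE or AC can be sketched as follows. This is an illustrative sketch with a function name of our choosing, not the authors' implementation:

```python
def bit_planes(pixels, depth=8):
    """Split a flat list of pixel values into `depth` binary planes.

    Plane k holds bit k of every pixel; plane depth-1 is the most
    significant (highest-weight) plane, which tends to contain long
    homogeneous runs suited to RLE.
    """
    return [[(p >> k) & 1 for p in pixels] for k in range(depth)]
```

For example, the 3-bit pixels [5, 2] (binary 101 and 010) decompose into plane 0 = [1, 0], plane 1 = [0, 1], and plane 2 = [1, 0].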

The rest of this paper is organized as follows. Section 2 briefly reviews AC. Section 3 presents the RLE technique. Section 4 describes the proposed method for lossless image compression. Section 5 discusses the experiments and results. Finally, Section 6 concludes the paper.

AC [9-12] is a statistical coder and is very efficient for data compression. It has been widely used in many standards, including JPEG2000, JBIG2, and H.264/AVC.

The aim of AC is to define a method that provides code words of ideal length. As with every other entropy coder, it requires knowledge of the probability of appearance of the individual symbols.

AC is the most efficient method to code symbols according to the probability of their occurrence. The average code length is very close to the possible minimum given by information theory.

The AC assigns an interval to each symbol whose size reflects the probability for the appearance of this symbol. The code word of a symbol is an arbitrary rational number belonging to the corresponding interval.

The entire data set is represented by a single rational number, which always lies within the nested interval of each encoded symbol. As data are added, the number of significant digits rises continuously.

AC operates on the input stream: it reads symbol by symbol and appends more to the code each time a symbol is input and processed. To understand this method, it is useful to picture the resulting code as a number in the range [0, 1), that is, the real numbers from 0 up to but not including 1. The first step is to compute, or at least estimate, the frequency of occurrence of each symbol.
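The interval-narrowing process described above can be sketched with a minimal static-model encoder using exact rational arithmetic. This is our illustration, not the paper's implementation; a practical coder would use scaled integer arithmetic with bit renormalization instead of exact fractions:

```python
from fractions import Fraction

def arithmetic_encode(symbols, probs):
    """Encode a symbol sequence under a static probability model.

    Returns the final interval [low, high); any number in it is a
    valid code for the whole message.
    """
    # Cumulative probability at the start of each symbol's sub-interval.
    cum, total = {}, Fraction(0)
    for s, p in probs.items():
        cum[s] = total
        total += Fraction(p)

    low, width = Fraction(0), Fraction(1)
    for s in symbols:
        # Narrow the current interval to this symbol's sub-interval.
        low += width * cum[s]
        width *= Fraction(probs[s])
    return low, low + width
```

For instance, with probs = {"a": 1/2, "b": 1/2}, encoding "ab" narrows [0, 1) first to [0, 1/2) and then to [1/4, 1/2); the more probable a sequence, the wider its final interval and the fewer digits needed to name a point inside it.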

Most bitmap file formats such as TIFF, BMP and PCX use RLE as a data compression algorithm. It was created to compress any type of data regardless of the information it contains.

While this method is of little interest for compressing ordinary text, it can be surprisingly effective for compressing binary or low-bit-depth images that contain little information or have a disproportion of white to black pixels.

In addition, it performs very well where the energy of the image is concentrated. The RLE technique here simply describes the image line by line, or column by column, encoding only the runs of constant color.

RLE is one of the oldest, simplest, and most widely used methods. Its whole secret is to identify redundancies and encode the information in a more compact form.

Its principle is to count the number of identical patterns in the data range under study, reducing the physical size of a repeating string. This repeated string, called a run, is typically encoded in two parts: the first, the number of repetitions, is called the counter; the second is the value of the run.

A new packet is generated whenever the character changes or the number of characters exceeds the maximum value of the counter. Thus, the more the patterns are repeated, the better the results are.

RLE is used in image compression either as a standalone tool or as an element of a larger processing chain. Compressing an image using RLE is based on the observation that if we select a pixel in the image at random, there is a good chance that the neighbors of the pixel will have the same intensity value. This is particularly true for binary images. For grayscale and color images, RLE is normally used following the transform and quantization step.

The idea behind RLE is that if a data item d occurs n consecutive times in the input stream, compression can be achieved by replacing the n occurrences with the single pair nd. Here n denotes the number of consecutive occurrences, or run length, of the data item d; hence the name run-length encoding.
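This pair encoding, including the packet split when a run exceeds the maximum counter value, can be sketched as follows. The sketch is ours, for illustration only, and assumes a single-byte counter (maximum 255):

```python
def rle_encode(data, max_count=255):
    """Replace each run of identical items with a (count, value) pair.

    A run longer than max_count is split into several packets, as when
    the counter is stored in a single byte.
    """
    pairs = []
    for x in data:
        if pairs and pairs[-1][1] == x and pairs[-1][0] < max_count:
            pairs[-1][0] += 1  # extend the current run
        else:
            pairs.append([1, x])  # value changed or counter full: new packet
    return [(n, v) for n, v in pairs]

def rle_decode(pairs):
    """Expand (count, value) pairs back into the original stream."""
    return [v for n, v in pairs for _ in range(n)]
```

For example, the bit sequence [0, 0, 0, 1, 1, 0] encodes as [(3, 0), (2, 1), (1, 0)]; a run of 300 zeros splits into the two packets (255, 0) and (45, 0).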

The