Optimum Probability Distribution for Minimum Redundancy of Source Coding

In the present communication, we have obtained the optimum probability distribution with which the messages should be delivered so that the average redundancy of the source is minimized. Here, we have taken the case of various generalized mean codeword lengths. Moreover, the upper bounds to these codeword lengths have been found for the case of Huffman encoding.


Introduction
Any message that brings a specification in a problem involving a certain degree of uncertainty is called information, and it was Shannon [1] who named this measure of information entropy. In coding theory, the operational role of entropy comes from the source coding theorem, which states that if H is the entropy of the source letters for a discrete memoryless source, then the sequence of source outputs cannot be represented by a binary sequence using fewer than H binary digits per source digit on the average, but it can be represented by a binary sequence using as close to H binary digits per source digit on the average as desired. To be clearer, let us consider a discrete source S that emits symbols $x_1, x_2, \ldots, x_n$ with probability distribution $p = (p_1, p_2, \ldots, p_n)$.

The aim of source coding is to encode the source using an alphabet of size $D$, that is, to map each symbol $x_i$ to a codeword $c_i$ of length $l_i$ expressed using the $D$ letters of the alphabet. It is known that if the set of lengths $l_i$ satisfies Kraft's [2] inequality
$$\sum_{i=1}^{n} D^{-l_i} \le 1,$$
then there exists a uniquely decodable code with these lengths, which means that any sequence of codewords $c_{i_1} c_{i_2} \cdots c_{i_N}$ can be decoded unambiguously into the sequence of symbols $x_{i_1} x_{i_2} \cdots x_{i_N}$. In this respect, Shannon [1] proved the first noiseless coding theorem for uniquely decipherable codes in the form of the following inequality:
$$L \ge H(P),$$
where $H(P) = -\sum_{i=1}^{n} p_i \log_D p_i$ is Shannon's entropy and $L = \sum_{i=1}^{n} p_i l_i$ is the mean codeword length.
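As a quick numerical illustration of Kraft's inequality and Shannon's bound, consider the following minimal sketch; the distribution, alphabet size, and code lengths below are arbitrarily chosen for illustration and are not taken from the paper:

```python
import math

D = 2                          # code alphabet size (binary)
p = [0.5, 0.25, 0.125, 0.125]  # illustrative source distribution
l = [1, 2, 3, 3]               # codeword lengths of a uniquely decodable code

# Kraft's inequality: sum_i D^(-l_i) <= 1 guarantees a uniquely decodable code
kraft = sum(D ** -li for li in l)
assert kraft <= 1 + 1e-12

# Shannon entropy H(P) and mean codeword length L (both in D-ary units)
H = -sum(pi * math.log(pi, D) for pi in p)
L = sum(pi * li for pi, li in zip(p, l))

print(f"Kraft sum = {kraft:.3f}, H(P) = {H:.3f}, L = {L:.3f}")
assert L >= H - 1e-12          # noiseless coding theorem: L >= H(P)
```

Here the bound is attained with equality because $p_i = D^{-l_i}$ for every symbol.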
Later, Campbell [3] and Kapur [4] proved source coding theorems for their own exponentiated mean codeword lengths in the form of the following inequalities, respectively:
$$L(t) \ge H_\alpha(P) \quad \text{and} \quad L_\alpha^\beta \ge H_\alpha^\beta(P),$$
where
$$L(t) = \frac{1}{t} \log_D \left( \sum_{i=1}^{n} p_i D^{t l_i} \right), \quad t > 0,$$
is Campbell's exponentiated mean codeword length and
$$H_\alpha(P) = \frac{1}{1-\alpha} \log_D \left( \sum_{i=1}^{n} p_i^\alpha \right), \quad \alpha = \frac{1}{1+t},$$
is Renyi's entropy of order $\alpha$. Recently, Parkash and Kakkar [6] introduced two new mean codeword lengths, $L(\alpha, \beta)$ and $L(\beta)$. Further, the authors provided two source coding theorems which show that for all uniquely decipherable codes, the mean codeword lengths $L(\alpha, \beta)$ and $L(\beta)$ satisfy the relations
$$L(\alpha, \beta) \ge H_\alpha^\beta(P) \quad \text{and} \quad L(\beta) \ge H_\beta(P),$$
respectively, where $H_\alpha^\beta(P)$ is Kapur's [4] two parameter additive measure of entropy and $H_\beta(P)$ is the measure of entropy developed by Parkash and Kakkar [6].

This is to emphasize that in the entire literature of source coding theorems, one can observe that the mean codeword length is lower bounded by the entropy of the source: it can never be less than the entropy of the source, but it can be made arbitrarily close to it. This phenomenon provides the idea of absolute redundancy, which is the number of bits used to transmit a message minus the number of bits of actual information in the message, that is, the mean codeword length minus the entropy of the source. The objective of the present communication is to minimize this redundancy in order to increase the efficiency of the source encoding. For this purpose, we have made use of the concept of escort distribution as follows: if $p = (p_1, p_2, \ldots, p_n)$ is the original distribution, then its escort distribution is $P = (P_1, P_2, \ldots, P_n)$, where
$$P_i = \frac{p_i^\beta}{\sum_{j=1}^{n} p_j^\beta}, \quad i = 1, 2, \ldots, n,$$
for some parameter $\beta > 0$. Many researchers, including Harte [7], Bercher [8,9], and Beck and Schloegl [10], have used this distribution in their respective findings.
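The escort transformation is straightforward to compute. The following sketch, with an arbitrarily chosen distribution and parameter values, shows how $\beta$ reshapes a distribution, flattening it for $0 < \beta < 1$ and sharpening it for $\beta > 1$:

```python
def escort(p, beta):
    """Escort distribution P_i = p_i^beta / sum_j p_j^beta."""
    w = [pi ** beta for pi in p]
    s = sum(w)
    return [wi / s for wi in w]

p = [0.6, 0.3, 0.1]            # illustrative distribution
print(escort(p, 0.5))          # beta < 1 flattens the distribution
print(escort(p, 1.0))          # beta = 1 returns p itself
print(escort(p, 2.0))          # beta > 1 sharpens the distribution
```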
The aim of the present paper is to obtain the optimum probability distribution with which the source should deliver messages in order to minimize the absolute redundancy. To achieve this goal, we have taken into consideration the above-mentioned generalized mean codeword lengths. Moreover, the upper bounds to these codeword lengths have been found for Huffman [11] encoding.
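Since the upper bounds below are stated for Huffman encoding, a minimal sketch of how binary Huffman codeword lengths can be computed is given here; this is the standard greedy construction, not a procedure from the paper itself:

```python
import heapq
import itertools

def huffman_lengths(p):
    """Binary Huffman codeword lengths for probabilities p (requires n >= 2)."""
    counter = itertools.count()               # tie-breaker for equal weights
    heap = [(pi, next(counter), [i]) for i, pi in enumerate(p)]
    heapq.heapify(heap)
    lengths = [0] * len(p)
    while len(heap) > 1:
        w1, _, s1 = heapq.heappop(heap)       # two least probable subtrees
        w2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:                     # every merge adds one bit
            lengths[i] += 1
        heapq.heappush(heap, (w1 + w2, next(counter), s1 + s2))
    return lengths

print(huffman_lengths([0.4, 0.3, 0.2, 0.1]))  # e.g. [1, 2, 3, 3]
```

The resulting lengths always satisfy Kraft's inequality with equality, since the Huffman code is complete.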

Optimum Probability Distribution to Minimize Absolute Redundancy
Let us assume that for a discrete source S that emits symbols $x_1, x_2, \ldots, x_n$ with probability distribution $p = (p_1, p_2, \ldots, p_n)$, the codewords $c_i$ having lengths $l_i$, $i = 1, 2, \ldots, n$, have been obtained using some encoding procedure over a noiseless channel. Further, we assume that the entropy of the source is $H(P) = -\sum_{i=1}^{n} p_i \log_D p_i$; therefore, the absolute redundancy of the source code is given by
$$R(p_1, p_2, \ldots, p_n) = L - H(P) = \sum_{i=1}^{n} p_i l_i + \sum_{i=1}^{n} p_i \log_D p_i,$$
where $L = \sum_{i=1}^{n} p_i l_i$. In order to minimize this redundancy, we resort to the following theorem:

Theorem 1: The optimum probability distribution that minimizes the absolute redundancy $R(p_1, p_2, \ldots, p_n)$ is given by
$$p_i = \frac{D^{-l_i}}{\sum_{j=1}^{n} D^{-l_j}}, \quad i = 1, 2, \ldots, n;$$
in particular, if the source is Huffman [11] encoded, the optimum distribution reduces to $p_i = D^{-l_i}$, $i = 1, 2, \ldots, n$.

Proof: To minimize the redundancy, we first of all find the extremum of $R(p_1, p_2, \ldots, p_n)$. For this purpose, we consider the Lagrangian
$$f(p_1, p_2, \ldots, p_n) = \sum_{i=1}^{n} p_i l_i + \sum_{i=1}^{n} p_i \log_D p_i - \lambda \left( \sum_{i=1}^{n} p_i - 1 \right),$$
where $\lambda \ge 0$ is a Lagrange multiplier. Setting $\partial f / \partial p_i = 0$ gives $l_i + \log_D p_i + \log_D e - \lambda = 0$, that is, $p_i \propto D^{-l_i}$; normalizing so that $\sum_{i=1}^{n} p_i = 1$ yields the distribution stated above. Since $f(p_1, p_2, \ldots, p_n)$ is convex in $(p_1, p_2, \ldots, p_n)$, this extremum is a minimum. Substituting the optimum distribution back into the redundancy, the minimum value is given by
$$R_{\min} = -\log_D \left( \sum_{j=1}^{n} D^{-l_j} \right).$$
Again, the necessary condition for the construction of uniquely decipherable codes is Kraft's inequality $\sum_{i=1}^{n} D^{-l_i} \le 1$; therefore, $R(p_1, p_2, \ldots, p_n) \ge 0$, with equality if and only if $\sum_{i=1}^{n} D^{-l_i} = 1$. In particular, for Huffman [11] encoding the code is complete, Kraft's inequality holds with equality, and the optimum distribution becomes $p_i = D^{-l_i}$ with zero redundancy.
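Theorem 1 is easy to check numerically. In the sketch below the code lengths are an arbitrary illustrative choice with Kraft sum strictly less than one, so that the minimum redundancy is strictly positive; random distributions should never beat the stated optimum:

```python
import math, random

D = 2
l = [1, 2, 3, 4]               # illustrative lengths with Kraft sum < 1

def redundancy(p, l):
    """R(p) = sum p_i l_i + sum p_i log_D p_i (mean length minus entropy)."""
    return sum(pi * li + pi * math.log(pi, D) for pi, li in zip(p, l))

K = sum(D ** -li for li in l)             # Kraft sum (here 0.9375 < 1)
p_opt = [D ** -li / K for li in l]        # Theorem 1's optimum distribution
R_min = -math.log(K, D)                   # claimed minimum value

print(f"R(p_opt) = {redundancy(p_opt, l):.6f}, -log_D K = {R_min:.6f}")

# Random distributions never do better than the optimum
random.seed(0)
for _ in range(1000):
    w = [random.uniform(0.01, 1.0) for _ in l]
    p = [wi / sum(w) for wi in w]
    assert redundancy(p, l) >= R_min - 1e-9
```

The gap $R(p) - R_{\min}$ is exactly the Kullback-Leibler divergence between $p$ and the optimum distribution, which is why it can never be negative.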
Similarly, if we consider the codeword length $L(\alpha, \beta)$, which satisfies the relation $L(\alpha, \beta) \ge H_\alpha^\beta(P)$, then the absolute redundancy of the source code in this case is given by
$$R_{\alpha,\beta}(p_1, p_2, \ldots, p_n) = L(\alpha, \beta) - H_\alpha^\beta(P).$$
Written in terms of the escort distribution $P = (P_1, P_2, \ldots, P_n)$ of $p$, this redundancy can be minimized over the escort probabilities, which leads to the following theorem:

Theorem 2. The optimum probability distribution that minimizes the absolute redundancy $R_{\alpha,\beta}(p_1, p_2, \ldots, p_n)$ is the distribution whose escort distribution is given by
$$P_i = \frac{D^{-l_i}}{\sum_{j=1}^{n} D^{-l_j}}, \quad i = 1, 2, \ldots, n.$$

Proof: Writing the redundancy as a function $g(P_1, P_2, \ldots, P_n)$ of the escort probabilities, we consider the Lagrangian
$$g(P_1, P_2, \ldots, P_n) - \lambda \left( \sum_{i=1}^{n} P_i - 1 \right),$$
where $\lambda \ge 0$ is a Lagrange multiplier. For an extremum, let $\partial g / \partial P_i = 0$, $i = 1, 2, \ldots, n$; then $g(P_1, P_2, \ldots, P_n)$ reaches its minimum value when $P_i = D^{-l_i} \big/ \sum_{j=1}^{n} D^{-l_j}$, $i = 1, 2, \ldots, n$. Again, in this case also, if the source is Huffman [11] encoded, then the escort probabilities are given by $P_i = D^{-l_i}$, $i = 1, 2, \ldots, n$, so that the optimum source probabilities are
$$p_i = \frac{D^{-l_i/\beta}}{\sum_{j=1}^{n} D^{-l_j/\beta}}, \quad i = 1, 2, \ldots, n.$$
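The passage from the optimal escort distribution back to the source distribution is simply the inversion of the escort map: if $P_i \propto p_i^\beta$, then $p_i \propto P_i^{1/\beta}$. A minimal sketch, with alphabet size, lengths, and $\beta$ chosen arbitrarily for illustration:

```python
D, beta = 2, 0.7               # illustrative alphabet size and escort parameter
l = [1, 2, 3, 3]               # complete (Huffman-style) code: Kraft sum = 1

# Optimal escort distribution under Huffman encoding: P_i = D^(-l_i)
P = [D ** -li for li in l]
assert abs(sum(P) - 1) < 1e-12

# Invert the escort map: p_i proportional to P_i^(1/beta) = D^(-l_i/beta)
w = [D ** (-li / beta) for li in l]
p = [wi / sum(w) for wi in w]

# Check that the escort distribution of p is indeed P
s = sum(pi ** beta for pi in p)
escort_p = [pi ** beta / s for pi in p]
assert all(abs(a - b) < 1e-9 for a, b in zip(escort_p, P))
print([round(pi, 4) for pi in p])
```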

Next, we will find the upper bound on the codeword lengths $L(\alpha, \beta)$ and $L(\beta)$ when the source is Huffman encoded.

Theorem 3. The exponentiated codeword length $L(\alpha, \beta)$ satisfies the inequality
$$L(\alpha, \beta) \le \log_D n$$
if the source is encoded using the Huffman procedure.

Proof: The exponentiated codeword length $L(\alpha, \beta)$ can be written as a monotone function of an auxiliary function $h(l_1, l_2, \ldots, l_n)$ of the codeword lengths. We need to find the extremum of $L(\alpha, \beta)$ subject to the constraint $\sum_{i=1}^{n} D^{-l_i} = 1$ (as the source is encoded using the Huffman procedure, the code is complete and Kraft's inequality holds with equality). For this purpose, we first of all find the extremum of $h(l_1, l_2, \ldots, l_n)$ and then use the fact that $L(\alpha, \beta)$ is minimum or maximum depending upon the value of the parameter $\alpha$. So, we consider the Lagrangian
$$h(l_1, l_2, \ldots, l_n) - \lambda \left( \sum_{i=1}^{n} D^{-l_i} - 1 \right),$$
where $\lambda \ge 0$. Setting the partial derivatives with respect to the $l_i$ equal to zero forces all the codeword lengths to be equal, and the constraint then gives $l_i = \log_D n$, $i = 1, 2, \ldots, n$. We see that $h(l_1, l_2, \ldots, l_n)$ has a minimum value for $0 < \alpha < 1$ and a maximum for $\alpha > 1$; consequently, observing the exponentiated mean codeword length $L(\alpha, \beta)$, we see that it attains its maximum value for $0 < \alpha < 1$. Thus, the maximum value is given by $L(\alpha, \beta) = \log_D n$, which proves the theorem.

Theorem 4. The mean codeword length $L(\beta)$ is upper bounded by $\log_D n$, that is,
$$L(\beta) \le \log_D n$$
if the source is encoded using the Huffman procedure.
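The extremal computation in the proof can be illustrated numerically in a special case. The sketch below substitutes Campbell's exponentiated length from the introduction for $L(\alpha, \beta)$, pairs each complete code with the Kraft-matched distribution $p_i = D^{-l_i}$ of Theorem 1, and uses an arbitrarily chosen parameter; all of these are illustrative assumptions, not the paper's general setting. Under them, the equal-length complete code attains the extremal value $\log_D n$:

```python
import math

D = 2
t = 0.5                        # Campbell parameter, illustrative choice in (0, 1)

def campbell(p, l, t):
    """Campbell's exponentiated mean length (1/t) log_D sum_i p_i D^(t l_i)."""
    return math.log(sum(pi * D ** (t * li) for pi, li in zip(p, l)), D) / t

n = 4
codes = [[1, 2, 3, 3], [2, 2, 2, 2]]   # the two complete binary length profiles on 4 symbols
for l in codes:
    p = [D ** -li for li in l]         # Kraft-matched distribution p_i = D^(-l_i)
    print(l, round(campbell(p, l, t), 4))

# The equal-length code attains the extremal value log_D n
print("log_D n =", round(math.log(n, D), 4))
```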

Proof:
The exponentiated codeword length $L(\beta)$ can be written in a similar form as a monotone function of an auxiliary function of the codeword lengths. We need to find the extremum of $L(\beta)$ subject to the constraint $\sum_{i=1}^{n} D^{-l_i} = 1$ (as the source is encoded using the Huffman procedure). So, we consider the corresponding Lagrangian and proceed exactly as in Theorem 3; the extremum is again attained at $l_i = \log_D n$, $i = 1, 2, \ldots, n$, which yields $L(\beta) \le \log_D n$.

Note-I: For the case of Campbell's codeword length $L(t)$, the average redundancy of the source code is given by $R_t(p_1, p_2, \ldots, p_n) = L(t) - H_\alpha(P)$. The absolute redundancy in the case of Campbell's [3] mean codeword length is the same as in the case of the exponentiated mean codeword length $L(\alpha, \beta)$ developed by Parkash and Kakkar [6]. Thus, we see that results similar to those proved in Theorem 2 and Theorem 3 hold for Campbell's case also.
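Campbell's inequality recalled in the introduction can also be checked directly; the distribution, code, and parameter below are illustrative choices:

```python
import math

D, t = 2, 1.0
alpha = 1 / (1 + t)            # Campbell's theorem pairs L(t) with Renyi order alpha

p = [0.5, 0.3, 0.2]            # illustrative distribution
l = [1, 2, 2]                  # uniquely decodable binary code (Kraft sum = 1)

# Campbell's exponentiated mean codeword length and Renyi entropy
L_t = math.log(sum(pi * D ** (t * li) for pi, li in zip(p, l)), D) / t
H_a = math.log(sum(pi ** alpha for pi in p), D) / (1 - alpha)

print(f"L(t) = {L_t:.4f} >= H_alpha(P) = {H_a:.4f}")
assert L_t >= H_a - 1e-12
```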
Note-II: The absolute redundancy when we use Kapur's [4] mean codeword length is given by $J(p_1, p_2, \ldots, p_n) = L_\alpha^\beta - H_\alpha^\beta(P)$. Examining $J(p_1, p_2, \ldots, p_n)$, we see that it attains its minimum value for $0 < \alpha < 1$, and the minimum value is obtained at the corresponding optimum probability distribution.


Theorem 5: The optimum probability distribution that minimizes the absolute redundancy of the source with entropy $H_\beta(P)$ and mean codeword length $L(\beta)$ is again the distribution whose escort distribution is given by $P_i = D^{-l_i} \big/ \sum_{j=1}^{n} D^{-l_j}$, $i = 1, 2, \ldots, n$; the proof proceeds as in Theorem 2.

Theorem 6: Kapur's [4] mean codeword length $L_\alpha^\beta$ satisfies the inequality
$$L_\alpha^\beta \le \log_D n$$
if the source is encoded using the Huffman procedure.

Proof: Proceeding as in Theorem 3, we can prove Theorem 6.