
The present communication offers a method to determine an unknown discrete probability distribution, close to the uniform distribution, with a specified Tsallis entropy. The relative error of the distribution obtained has been compared with that of the distribution computed by the Mathematica software. Applications of the proposed algorithm to Tsallis source coding, Huffman coding and cross-entropy optimization principles are provided.

After the publication of his landmark paper “A Mathematical Theory of Communication”, Shannon’s entropy of a discrete probability distribution $P=(p_1,p_2,\ldots,p_n)$, defined as $H(P)=-\sum_{i=1}^{n}p_i\log p_i$, became the cornerstone of information theory,

with the convention $0\log 0=0$.

Shannon’s main focus was on communication problems arising in the engineering sciences, but as the field of information theory progressed, it became clear that Shannon’s entropy was not the only feasible information measure. Indeed, many modern communication processes, including signals, images and coding systems, often operate in complex environments dominated by conditions that do not match the basic tenets of Shannon’s communication theory. For instance, coding can have non-trivial cost functions, codes might have variable lengths, sources and channels may exhibit memory or losses, etc. In the post-Shannon development of non-parametric entropy, it was realized that generalized parametric measures of entropy can play a significant role in dealing with such situations, since these measures introduce flexibility into the system and are also helpful in maximization problems.

An extension of the Shannon entropy was proposed by Rényi, who defined, for a parameter $\alpha>0$, $\alpha\neq 1$, the entropy
$H_{\alpha}(P)=\dfrac{1}{1-\alpha}\log\sum_{i=1}^{n}p_i^{\alpha}.$

The Rényi entropy offers a parametric family of measures, from which the Shannon entropy is recovered as the special case $\alpha \to 1$.

Another information theorist, Tsallis, proposed the non-additive entropy
$H_{q}(P)=\dfrac{1-\sum_{i=1}^{n}p_i^{q}}{q-1},\quad q>0,\ q\neq 1. \qquad (3)$

When $q \to 1$, the Tsallis entropy recovers the Shannon entropy for any probability distribution. Unlike the Shannon and Rényi entropies, the Tsallis entropy is non-additive: for independent sources $A$ and $B$, $H_q(A,B)=H_q(A)+H_q(B)+(1-q)H_q(A)H_q(B)$, a property that has made it central to non-extensive statistical mechanics.
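
The $q \to 1$ limit can be checked numerically. A minimal sketch (function names are illustrative, not from the paper):

```python
import math

def shannon_entropy(p):
    """Shannon entropy in nats, with the convention 0*log 0 = 0."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def tsallis_entropy(p, q):
    """Tsallis entropy H_q(P) = (1 - sum p_i^q) / (q - 1), q != 1."""
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

p = [0.5, 0.25, 0.125, 0.125]
# As q -> 1, the Tsallis entropy approaches the Shannon entropy.
for q in (1.1, 1.01, 1.001):
    print(q, abs(tsallis_entropy(p, q) - shannon_entropy(p)))
```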

Information theory provides fundamental performance limits pertaining to various tasks of information processing, such as data compression, error-correction coding, encryption, data hiding, prediction, and estimation of signals or parameters from noisy observations. Shannon’s source coding theorem is the prototypical example of such a limit.

In Section 2, we provide an algorithm to find a discrete distribution, close to the uniform distribution, with a specified Tsallis entropy.

Tsallis introduced the generalized q-logarithm function, defined as
$\ln_q x=\dfrac{x^{1-q}-1}{1-q},\quad x>0,\ q\neq 1, \qquad (4)$

which for $q \to 1$ reduces to the ordinary logarithm. Its inverse, the generalized q-exponential, is
$e_q^{x}=\left[1+(1-q)x\right]^{\frac{1}{1-q}}, \qquad (6)$

which becomes the exponential function for $q \to 1$.

The q-logarithm satisfies the following pseudo-additive law:
$\ln_q(xy)=\ln_q x+\ln_q y+(1-q)\ln_q x\,\ln_q y.$

It is to be noted that the classical power and additive laws for the logarithm and exponential no longer hold for (4) and (6), except for $q=1$.
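
The pseudo-additive law can be verified directly. A small sketch (all names are illustrative):

```python
def q_log(x, q):
    """Generalized q-logarithm: ln_q x = (x^(1-q) - 1) / (1 - q), q != 1."""
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

# Pseudo-additive law: ln_q(xy) = ln_q x + ln_q y + (1-q) ln_q x ln_q y.
q, x, y = 0.7, 2.0, 3.0
lhs = q_log(x * y, q)
rhs = q_log(x, q) + q_log(y, q) + (1 - q) * q_log(x, q) * q_log(y, q)
print(abs(lhs - rhs))  # numerically zero
```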

The Tsallis entropy (3) can be written as an expectation of the generalized q-logarithm as
$H_q(P)=\sum_{i=1}^{n}p_i\ln_q\dfrac{1}{p_i}.$
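
This expectation identity is easy to confirm numerically; a minimal sketch with illustrative names:

```python
def q_log(x, q):
    """Generalized q-logarithm: ln_q x = (x^(1-q) - 1) / (1 - q)."""
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def tsallis_entropy(p, q):
    """Tsallis entropy H_q(P) = (1 - sum p_i^q) / (q - 1)."""
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

# H_q(P) equals the expectation of ln_q(1/p_i) under P.
p, q = [0.4, 0.3, 0.2, 0.1], 0.3
expectation = sum(pi * q_log(1.0 / pi, q) for pi in p)
print(abs(expectation - tsallis_entropy(p, q)))  # numerically zero
```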

Let us suppose that there are

Multiplying and dividing by

Defining

Since

where

Rearranging the terms in Equation (8) gives

Thus,

where

The maximum value of the Tsallis entropy subject to the natural constraint $\sum_{i=1}^{n}p_i=1$ is
$\ln_q n=\dfrac{n^{1-q}-1}{1-q},$
which is obtained at the uniform distribution $p_i=1/n$, $i=1,2,\ldots,n$.
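
That the uniform distribution maximizes the Tsallis entropy at $\ln_q n$ can be spot-checked against random distributions; a small sketch (names illustrative):

```python
import random

def q_log(x, q):
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def tsallis_entropy(p, q):
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

n, q = 5, 0.3
uniform = [1.0 / n] * n
max_val = q_log(n, q)  # ln_q(n) = (n^(1-q) - 1) / (1-q)
print(abs(tsallis_entropy(uniform, q) - max_val))  # numerically zero

# No randomly drawn distribution should exceed the uniform value.
random.seed(0)
for _ in range(100):
    w = [random.random() for _ in range(n)]
    s = sum(w)
    assert tsallis_entropy([wi / s for wi in w], q) <= max_val + 1e-12
```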

So, we have

In a similar way,

Hence,

The objective of the present paper is to find

smaller than

entropy for these normalized variables,

To finish the selection of the

We also use relation (14) to find probability distribution

1) For given

2) Pick the solution

3) Generate the random number

4) Repeat the above three steps for

5) For

6) Take

7) Use equation (14) to get probability distribution

Note: Before specifying the value of parameter q and entropy
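
The recursive selection above depends on the details of Method A. As a rough illustration of the same goal, namely a distribution near uniform attaining a specified Tsallis entropy, the following sketch bisects on a single mixing parameter between the uniform and a degenerate distribution; it is an assumed simplification, not the authors’ Method A, and all names are illustrative:

```python
def tsallis_entropy(p, q):
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

def distribution_with_entropy(n, q, target, tol=1e-10):
    """Bisect on a mixing weight lam between the uniform distribution (lam=0)
    and a point mass (lam=1): the mixture's Tsallis entropy decreases
    monotonically in lam, so bisection can hit any target below ln_q(n)."""
    def mix(lam):
        return [(1 - lam) / n + (lam if i == 0 else 0.0) for i in range(n)]
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if tsallis_entropy(mix(mid), q) > target:
            lo = mid  # still too close to uniform; concentrate further
        else:
            hi = mid
    return mix(0.5 * (lo + hi))

# Example: n = 8, q = 0.3, target entropy 4.5 (below the maximum ln_q(8) ≈ 4.70)
p = distribution_with_entropy(8, 0.3, 4.5)
print(sum(p), tsallis_entropy(p, 0.3))
```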

Let

Note that the values

equation of (13) and hence is the case for the values

The above-mentioned problem is also solved using the Mathematica software with the same input. The NMinimize command, which has several built-in optimization methods, is used for this purpose, since the problem is to find the discrete distribution closest to the discrete uniform distribution subject to the specified Tsallis entropy constraint.

n | q | Entropy | Proposed algorithm | Mathematica
---|---|---|---|---
8 | 0.3 | 4.5 | q_{8} = 0.109755 | p_{8} = 0.109755
7 | 0.3 | 3.94803 | q_{7} = 0.139772 | p_{7} = 0.124431
6 | 0.3 | 3.36823 | q_{6} = 0.14697 | p_{6} = 0.112552
5 | 0.3 | 2.75961 | q_{5} = 0.172774 | p_{5} = 0.112867
4 | 0.3 | 2.11183 | q_{4} = 0.201015 | p_{4} = 0.108628
3 | 0.3 | 1.41409 | q_{3} = 0.281574 | p_{3} = 0.121574
2 | 0.3 | 0.631956 | q_{2} = 0.921147, q_{1} = 0.0788534 | p_{2} = 0.285733, p_{1} = 0.0244598

The solution obtained is

The relative error is calculated using the formula

The probability distribution found by the Mathematica software is

In source coding, one considers a set of symbols

from X with probabilities

alphabet of size D, that is, to map each symbol

If the codeword lengths $l_1, l_2, \ldots, l_n$ satisfy Kraft’s inequality
$\sum_{i=1}^{n} D^{-l_i} \le 1, \qquad (15)$
then there exists a uniquely decodable code with these lengths, which means that any sequence

The Shannon source coding theorem states that the average codeword length
$L=\sum_{i=1}^{n}p_i l_i \qquad (16)$
of any uniquely decodable code is bounded below by the entropy of the source, that is,
$L \ge H(P),$
where the logarithm in the definition of the Shannon entropy is taken in base $D$. This result indicates that the Shannon entropy is the fundamental limit on the average codeword length attainable by lossless coding.

The characteristic of these optimum codes is that they assign the shorter codewords to the most likely symbols and the longer codewords to unlikely symbols.
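
For a dyadic source the entropy bound is met with equality, which makes it a convenient check. A minimal sketch (names illustrative):

```python
import math

def kraft_sum(lengths, D=2):
    """Kraft sum: a uniquely decodable code exists iff sum(D^-l_i) <= 1."""
    return sum(D ** (-l) for l in lengths)

def shannon_entropy(p, D=2):
    """Shannon entropy with the logarithm taken in base D."""
    return -sum(pi * math.log(pi, D) for pi in p if pi > 0)

def average_length(p, lengths):
    return sum(pi * li for pi, li in zip(p, lengths))

# Dyadic probabilities with lengths l_i = -log2 p_i meet the bound exactly.
p = [0.5, 0.25, 0.125, 0.125]
lengths = [1, 2, 3, 3]
print(kraft_sum(lengths))          # 1.0
print(average_length(p, lengths))  # 1.75, equal to H(P) in bits
```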

Source Coding with Campbell’s Measure of Length

Implicit in the use of the average codeword length (16) as a criterion of performance is the assumption that cost varies linearly with code length. But this is not always the case. Campbell instead proposed the exponentiated mean codeword length
$L(t)=\dfrac{1}{t}\log_D\sum_{i=1}^{n}p_i D^{t l_i},\quad t>0,$

where

Minimizing the cost is equivalent to minimizing the monotonic increasing function of

So,

Campbell proved that Rényi’s entropy of order $\alpha=1/(1+t)$ provides the greatest lower bound for this exponentiated average length, that is, $L(t)\ge H_{\alpha}(P),$

where

subject to Kraft’s inequality (15), with optimal lengths given by
$l_i=-\log_D\dfrac{p_i^{\alpha}}{\sum_{j=1}^{n}p_j^{\alpha}}.$

By choosing a smaller value of $t$, the criterion approaches the ordinary average length (16), whereas larger values of $t$ penalize long codewords more heavily.
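
Campbell’s bound $L(t)\ge H_{\alpha}(P)$ with $\alpha=1/(1+t)$ can be illustrated numerically; a small sketch (names illustrative):

```python
import math

def campbell_length(p, lengths, t, D=2):
    """Campbell's exponentiated mean codeword length of order t > 0."""
    return math.log(sum(pi * D ** (t * li) for pi, li in zip(p, lengths)), D) / t

def renyi_entropy(p, alpha, D=2):
    """Renyi entropy of order alpha, logarithm taken in base D."""
    return math.log(sum(pi ** alpha for pi in p), D) / (1.0 - alpha)

p = [0.5, 0.25, 0.125, 0.125]
lengths = [1, 2, 3, 3]
t = 0.5
alpha = 1.0 / (1.0 + t)
# Campbell's bound: L(t) >= H_alpha(P).
print(campbell_length(p, lengths, t), renyi_entropy(p, alpha))
```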

A similar approach is applied to provide an operational significance to the Tsallis entropy.

From Rényi’s entropy of order $\alpha$, we have
$\sum_{i=1}^{n}p_i^{\alpha}=D^{(1-\alpha)H_{\alpha}^{R}(P)}. \qquad (22)$

Substituting (22) in (3), with the parameter $q$ replaced by $\alpha$, gives
$H_{\alpha}^{T}(P)=\dfrac{D^{(1-\alpha)H_{\alpha}^{R}(P)}-1}{1-\alpha}, \qquad (23)$

or equivalently
$H_{\alpha}^{R}(P)=\dfrac{1}{1-\alpha}\log_D\!\left[1+(1-\alpha)H_{\alpha}^{T}(P)\right].$

Equation (23) establishes a relation between Rényi’s entropy and the Tsallis entropy.
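
The equivalence can be checked numerically; a small sketch using natural logarithms so the base factors drop out (names illustrative):

```python
import math

def renyi_entropy(p, q):
    """Renyi entropy of order q, natural logarithm."""
    return math.log(sum(pi ** q for pi in p)) / (1.0 - q)

def tsallis_entropy(p, q):
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

# Relation: H_q^T = (exp((1-q) H_q^R) - 1) / (1-q).
p, q = [0.4, 0.3, 0.2, 0.1], 0.6
from_renyi = (math.exp((1 - q) * renyi_entropy(p, q)) - 1) / (1 - q)
print(abs(from_renyi - tsallis_entropy(p, q)))  # numerically zero
```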

From (20), we have

Case I: when

Case II: when

From (24) and (25), it is observed that Tsallis entropy

when

1) Huffman coding is a well-known procedure that constructs an optimal (minimum average length) prefix code for a given source distribution.

In the following example, a Huffman code is constructed using the probability distribution obtained by the proposed algorithm.

The optimal code is obtained as follows.

Hence the optimal code is

So, an optimal Huffman code can be constructed directly from the probability distribution generated by the proposed algorithm.
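
As an illustration of step 1), a standard binary Huffman construction can be sketched as follows; this is a generic implementation on an example distribution, not the paper’s specific table:

```python
import heapq
import itertools

def huffman_code(probs):
    """Build a binary Huffman code; returns one codeword per symbol index."""
    counter = itertools.count()  # tie-breaker so the heap never compares lists
    heap = [(p, next(counter), [i]) for i, p in enumerate(probs)]
    codes = {i: "" for i in range(len(probs))}
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)  # two least likely groups
        p2, _, s2 = heapq.heappop(heap)
        for i in s1:
            codes[i] = "0" + codes[i]    # prepend a bit to every member
        for i in s2:
            codes[i] = "1" + codes[i]
        heapq.heappush(heap, (p1 + p2, next(counter), s1 + s2))
    return [codes[i] for i in range(len(probs))]

p = [0.4, 0.3, 0.2, 0.1]
code = huffman_code(p)
avg = sum(pi * len(c) for pi, c in zip(p, code))
print(code, avg)  # codeword lengths 1, 2, 3, 3; average length 1.9
```

As expected, the shortest codeword goes to the most likely symbol and the longest to the least likely ones.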

2) The problem of determining an unknown discrete distribution, close to the uniform distribution, with known Tsallis entropy, as discussed in Section 2, can be looked upon as an instance of the minimum cross-entropy principle, which states that, given a prior distribution, we should choose the distribution that satisfies the given constraints and is closest to the prior. So, cross-entropy optimization principles offer a relevant context for the application of Method A.

The authors are thankful to University Grants Commission and Council of Scientific and Industrial Research, New Delhi, for providing the financial assistance for the preparation of the manuscript.

Om Parkash, Priyanka Kakkar (2015) An Algorithm to Generate Probabilities with Specified Entropy. Applied Mathematics, 06, 1968-1976. doi: 10.4236/am.2015.612174