Monday, May 9, 2022

INFORMATION THEORY: DIGITAL COMMUNICATION

The source of a communication system is information, which can be analog or digital. Information theory is a mathematical approach to the study of the coding of information, along with the quantification, storage, and communication of information.

Conditions of Occurrence of Events

Considering an event, there are three conditions of occurrence.

  • If the event has not occurred, there is a condition of uncertainty.
  • If the event has just occurred, there is a condition of surprise.
  • If the event occurred some time back, there is a condition of having some information.

These three conditions occur at different times. The differences between these conditions help us estimate the probabilities of occurrence of events.

Entropy

When we observe the possibilities of occurrence of an event, whether surprising or uncertain, we are trying to get an idea of the average content of the information coming from the source of the event.

Entropy is the measure of the average information content per source symbol. Claude Shannon, the “father of Information Theory”, provided a formula for it as −

$$H = \sum_{i} p_i \log_b \frac{1}{p_i}$$

Where pi is the probability of occurrence of character number i in the given character stream and b is the base of the logarithm.

Conditional Entropy − The amount of uncertainty remaining about the channel input after observing the channel output is known as Conditional Entropy, denoted by H(x∣y).
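As a quick illustration of the entropy formula above, here is a minimal Python sketch (standard library only); the four-symbol source and its probabilities are purely hypothetical.

```python
import math

def entropy(probs, base=2):
    """Entropy H = sum_i p_i * log_b(1/p_i) of a discrete source."""
    return sum(p * math.log(1.0 / p, base) for p in probs if p > 0)

# Hypothetical source emitting four symbols with these probabilities.
source_probs = [0.5, 0.25, 0.125, 0.125]
print(entropy(source_probs))  # 1.75 bits per source symbol
```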

Mutual Information

Consider a channel whose input is X and output is Y.

Let H(x) be the entropy of X, representing the prior uncertainty about the input.

(This is assumed before the input is applied)

To know the uncertainty remaining about the input after the output Y = yk has been observed, consider the conditional entropy −

$$H(X \mid y = y_k) = \sum_{j=0}^{J-1} p(x_j \mid y_k) \log_2\left[\frac{1}{p(x_j \mid y_k)}\right]$$

This is a random variable that takes the values H(X∣y = y0), ..., H(X∣y = yK−1) with probabilities p(y0), ..., p(yK−1) respectively.

The mean value of H(X∣y = yk) over the output alphabet Y is −

$$H(X \mid Y) = \sum_{k=0}^{K-1} H(X \mid y = y_k)\, p(y_k)$$

$$= \sum_{k=0}^{K-1} \sum_{j=0}^{J-1} p(x_j \mid y_k)\, p(y_k) \log_2\left[\frac{1}{p(x_j \mid y_k)}\right]$$

$$= \sum_{k=0}^{K-1} \sum_{j=0}^{J-1} p(x_j, y_k) \log_2\left[\frac{1}{p(x_j \mid y_k)}\right]$$

Considering both the uncertainty conditions, i.e., before and after applying the inputs, we take the difference H(x) − H(x∣y), which must represent the uncertainty about the channel input that is resolved by observing the channel output.

This is termed the Mutual Information of the channel.

Denoting the Mutual Information as I(x;y), we can write it as an equation −

$$I(x;y) = H(x) - H(x \mid y)$$

Hence, this is the equational representation of Mutual Information.
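These quantities can be checked numerically. The following Python sketch is a rough illustration with a made-up joint distribution p(xj, yk) for a binary-input, binary-output channel; it computes H(x), the conditional entropy H(x∣y), and the mutual information I(x;y) = H(x) − H(x∣y).

```python
import math

def log2(v):
    return math.log(v, 2)

# Hypothetical joint distribution p(x_j, y_k): rows are x_0, x_1; columns are y_0, y_1.
p_xy = [[0.4, 0.1],
        [0.1, 0.4]]

p_x = [sum(row) for row in p_xy]                               # marginal p(x_j)
p_y = [sum(p_xy[j][k] for j in range(2)) for k in range(2)]    # marginal p(y_k)

# Prior uncertainty H(x)
H_x = sum(p * log2(1 / p) for p in p_x if p > 0)

# Conditional entropy H(x|y) = sum_{j,k} p(x_j, y_k) * log2(1 / p(x_j | y_k))
H_x_given_y = sum(
    p_xy[j][k] * log2(p_y[k] / p_xy[j][k])
    for j in range(2) for k in range(2) if p_xy[j][k] > 0
)

I_xy = H_x - H_x_given_y
print(H_x, H_x_given_y, I_xy)
```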

Properties of Mutual information

These are the properties of Mutual information.

  • The Mutual Information of a channel is symmetric.

$$I(x;y) = I(y;x)$$

  • It is non-negative.

$$I(x;y) \geq 0$$

  • Mutual information can be expressed in terms of the entropy of the channel output.

$$I(x;y) = H(y) - H(y \mid x)$$

Where H(y∣x) is the conditional entropy of the output given the input.

  • Mutual information of a channel is related to the joint entropy of the channel input and the channel output.

$$I(x;y) = H(x) + H(y) - H(x,y)$$

Where the joint entropy H(x,y) is defined by

$$H(x,y) = \sum_{j=0}^{J-1} \sum_{k=0}^{K-1} p(x_j, y_k) \log_2\left(\frac{1}{p(x_j, y_k)}\right)$$
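To illustrate the properties above, the sketch below (again with a hypothetical joint distribution) checks numerically that I(x;y) = H(y) − H(y∣x) and I(x;y) = H(x) + H(y) − H(x,y) give the same non-negative value.

```python
import math

def log2(v):
    return math.log(v, 2)

# Hypothetical joint distribution p(x_j, y_k).
p_xy = [[0.4, 0.1],
        [0.1, 0.4]]

p_x = [sum(row) for row in p_xy]
p_y = [sum(row[k] for row in p_xy) for k in range(2)]

H = lambda probs: sum(p * log2(1 / p) for p in probs if p > 0)

# Joint entropy H(x, y) over all (j, k) pairs
H_xy = H([p_xy[j][k] for j in range(2) for k in range(2)])

# Mutual information from the joint-entropy identity
I_joint = H(p_x) + H(p_y) - H_xy

# Mutual information from I(x;y) = H(y) - H(y|x)
H_y_given_x = sum(
    p_xy[j][k] * log2(p_x[j] / p_xy[j][k])
    for j in range(2) for k in range(2) if p_xy[j][k] > 0
)
I_cond = H(p_y) - H_y_given_x

print(I_joint, I_cond, I_joint >= 0)  # the two values agree and are non-negative
```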

Channel Capacity

The channel capacity of a discrete memoryless channel is the maximum of the average mutual information I(x;y) in any single signaling interval (one use of the channel), where the maximization is taken over all possible input probability distributions. It gives the maximum rate at which data can be transmitted reliably over the channel. It is denoted by C and is measured in bits per channel use.
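For a concrete case, consider the binary symmetric channel with crossover probability p, whose capacity is known to be C = 1 − Hb(p), where Hb is the binary entropy function. The sketch below (with a hypothetical crossover probability) finds the capacity by brute-force maximization of I(x;y) over the input distribution and compares it with that closed form.

```python
import math

def h2(p):
    """Binary entropy function H_b(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_mutual_info(q, p):
    """I(x;y) for a binary symmetric channel: input P(x=1) = q, crossover probability p."""
    py1 = q * (1 - p) + (1 - q) * p        # output distribution P(y=1)
    return h2(py1) - h2(p)                  # I(x;y) = H(y) - H(y|x), with H(y|x) = H_b(p)

p = 0.11  # hypothetical crossover probability
# Channel capacity: maximize I(x;y) over the input distribution (coarse grid search)
C = max(bsc_mutual_info(q / 1000, p) for q in range(1001))
print(C, 1 - h2(p))  # brute-force maximum matches the closed form C = 1 - H_b(p)
```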

Discrete Memoryless Source

A source from which the information is emitted at successive intervals, independently of previous values, is known as a discrete memoryless source.

The source is discrete because the symbols are emitted at discrete time intervals rather than continuously. The source is memoryless because each emitted symbol is fresh at every instant of time and does not depend on previous values.
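A discrete memoryless source is easy to model: at every interval a symbol is drawn from the same distribution, independently of everything emitted before. Below is a minimal sketch with a hypothetical four-symbol alphabet.

```python
import random

# Hypothetical alphabet and symbol probabilities of a discrete memoryless source.
alphabet = ["a", "b", "c", "d"]
probs = [0.5, 0.25, 0.125, 0.125]

def emit(n):
    """Emit n symbols; each draw is independent of all previous ones (memoryless)."""
    return random.choices(alphabet, weights=probs, k=n)

print(emit(10))
```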
