Mechanics of Information Loss (contd.)

As a continuation of our study of the mechanics of information loss from Lecture04 and Lecture05, recall our example of the Automatic Repeat reQuest (ARQ) system:

Fig 1 Automatic Repeat reQuest system
The DMS X emits symbols x0, x1; the pair is treated as a single symbol ⟨x0, x1⟩ and fed into an encoder E, which emits the encoded sequence ⟨x0, x1, Π⟩, where the parity Π ≜ x0 ⊕ x1. The total information for the DMS is therefore H(C) = H(x0, x1, Π) = 2H(X), since the parity bit is a deterministic function of x0 and x1 and contributes no additional entropy.
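To make this concrete, here is a minimal sketch that enumerates the joint distribution of ⟨x0, x1, Π⟩ and confirms H(C) = 2H(X). The source bias p(x = 0) = 0.6 is an assumption for illustration, chosen because it matches the value H(X) = 0.97095 quoted later in this lecture; the helper H is likewise illustrative.

```python
from itertools import product
from math import log2

p0 = 0.6  # assumed P(x = 0), consistent with H(X) = 0.97095 quoted below

def H(probs):
    """Shannon entropy in bits, ignoring zero-probability outcomes."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Joint distribution of the codeword <x0, x1, Pi> with Pi = x0 XOR x1.
# The parity bit is deterministic given (x0, x1), so it adds no entropy.
joint = {}
for x0, x1 in product([0, 1], repeat=2):
    px0 = p0 if x0 == 0 else 1 - p0
    px1 = p0 if x1 == 0 else 1 - p0
    joint[(x0, x1, x0 ^ x1)] = px0 * px1

print(f"H(X) = {H([p0, 1 - p0]):.5f}")    # 0.97095
print(f"H(C) = {H(joint.values()):.5f}")  # 1.94190 = 2 H(X)
print(f"p(Pi = 0) = {sum(v for k, v in joint.items() if k[2] == 0):.2f}")  # 0.52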

Furthermore, the erasure channel is used 3 times to transmit the 3-symbol codeword, resulting in a coded information rate of 2H(X) × 1 ∕ 3 per channel use. The erasure channel is represented by the directed graph below.

Fig 2 Directed graph of the erasure channel
Note that p(xi) and p(Π) at the encoder output will, in general, be different (the parity bit has its own distribution unless the source is uniform; with the figures above, p(Π = 0) = 0.52 while p(x = 0) = 0.6).

For each of the 4 possible codewords there are 4 decodable received outcomes: the triplet arrives with no erasures, or with exactly one of its three symbols erased (which the parity relation can repair). Therefore, Decodable Outcomes = 4 × 4 = 16; every other outcome is reported as "error detected, re-transmit". The probability of a decodable outcome is Ps = 0.896.
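The quoted figures are consistent with a per-symbol erasure probability ε = 0.2 (then (1 − ε)H(X) = 0.77676 = I(X; Y), and Ps comes out to 0.896). A minimal sketch under that assumption:

```python
from math import comb

eps = 0.2  # assumed per-symbol erasure probability, consistent with the quoted figures

# A triplet <x0, x1, Pi> is decodable if it arrives with 0 erasures, or with
# exactly 1 erasure (the missing symbol is recovered from the parity relation).
Ps = sum(comb(3, k) * eps**k * (1 - eps)**(3 - k) for k in (0, 1))
print(f"Ps = {Ps:.3f}")      # 0.896
print(f"Pr = {1 - Ps:.3f}")  # 0.104
```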

Therefore, for all remaining outcomes (for which the sink requests a re-transmission), the probability of re-transmission is

Pr = 1 − Ps = 0.104

The question then is: for perfect transmission (i.e., with no wrong symbols delivered),

What is the information rate penalty?

In other words,

How often, on average, must a triplet ⟨x0, x1, Π⟩ be transmitted to successfully deliver the information?

To answer this, let us represent the transmission process graphically. Consider three states: a Transmission state, a Decode state, and an Accept state, connected as follows. A triplet is always sent, so the probability to transmit is P = 1 (Transmission state to Decode state with probability P). If the received outcome is decodable, the probability to decode is Ps = 0.896 (Decode state to Accept state with probability Ps). For all other outcomes the sink requests a re-transmission, so the probability to re-transmit is Pr = 1 − Ps (Decode state back to Transmission state with probability Pr).

This is called a Markov Process.
Fig 3 Illustration of Markov process
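As a sanity check on the Markov picture, one can simulate the Transmit → Decode → Accept/Re-transmit loop directly and count transmissions per accepted triplet. A minimal Monte Carlo sketch (function name and seed are illustrative; Ps is the value derived above):

```python
import random

random.seed(1)
Ps = 0.896  # probability that the Decode state leads to Accept

def transmissions_until_accept():
    """Walk the Markov chain: Transmission -> Decode, then Accept (prob Ps)
    or back to Transmission (prob Pr = 1 - Ps). Returns the number of sends."""
    n = 0
    while True:
        n += 1                    # Transmission state (P = 1)
        if random.random() < Ps:  # Decode -> Accept
            return n              # otherwise Decode -> Transmission again

trials = 200_000
T_est = sum(transmissions_until_accept() for _ in range(trials)) / trials
print(f"estimated T = {T_est:.4f}  (theory: 1/Ps = {1 / Ps:.5f})")
```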

Let us define

T ≜ the expected number of times the ordered triplet ⟨x0, x1, Π⟩ has to be transmitted.
Then, by the definition of expected value (the number of transmissions is geometrically distributed with success probability Ps),
T = 1 ∕ Ps
Thus, for our example, Ps = 0.896 (Pr = 0.104) ⇒ T = 1 ∕ 0.896 = 1.11607. The expected number of times a triplet ⟨x0, x1, Π⟩ has to be transmitted therefore represents an ≈12% increase in the number of transmissions (T ≈ 1 + 0.12 = 1.12).
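Equivalently, the k-th attempt is the first accepted one with probability Pr^(k−1) · Ps. A short numerical check that this series sums to 1 ∕ Ps:

```python
Ps, Pr = 0.896, 0.104

# E[T] = sum over k of k * Pr^(k-1) * Ps; truncate the tail at k = 100,
# where the remaining terms are negligible.
T = sum(k * Pr**(k - 1) * Ps for k in range(1, 101))
print(f"T = {T:.5f}  vs  1/Ps = {1 / Ps:.5f}")  # both 1.11607
```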

Let us also define the Effective Information Rate:

Reff ≜ (coded information rate) ∕ T = (2H(X) ∕ 3) × Ps
Thus for our case
Reff = (2 × 0.97095 ∕ 3) ∕ 1.11607 = 0.57998
Recall that I(X; Y) = 0.77676; thus
Reff < I(X; Y)
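Putting the numbers together in one place (a sketch under the same assumptions as above, p(x = 0) = 0.6 and ε = 0.2):

```python
from math import log2

def Hb(p):
    """Binary entropy in bits."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

HX = Hb(0.6)          # source entropy H(X) = 0.97095 (assumed p = 0.6)
Ps = 0.896
rate = 2 * HX / 3     # coded information rate per channel use
R_eff = rate * Ps     # equivalently rate / T, with T = 1 / Ps
print(f"R_eff  = {R_eff:.5f}")     # 0.57998
print(f"I(X;Y) = {0.8 * HX:.5f}")  # (1 - eps) H(X) = 0.77676  =>  R_eff < I(X;Y)
```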

[If I(X; Y) is maximized w.r.t. p(xi) at the source, it reaches the channel capacity C = max I(X; Y). If Reff < C, then transmission with arbitrarily small information loss is possible (e.g., Turbo codes operate close to this limit).]
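For the erasure channel, I(X; Y) = (1 − ε)H(X) is maximized by the uniform input, giving capacity C = 1 − ε = 0.8 bits per use under the assumed ε = 0.2. A quick numerical sweep (grid and names illustrative):

```python
from math import log2

eps = 0.2  # assumed erasure probability

def Hb(p):
    """Binary entropy in bits, with the 0/1 endpoints handled."""
    return -p * log2(p) - (1 - p) * log2(1 - p) if 0 < p < 1 else 0.0

# For the erasure channel I(X;Y) = (1 - eps) * H(X); sweep the input bias p.
best_p, C = max(((p / 1000, (1 - eps) * Hb(p / 1000)) for p in range(1, 1000)),
                key=lambda t: t[1])
print(f"capacity ~ {C:.4f} bits at p ~ {best_p:.3f}")  # 0.8 bits at p = 0.5
```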

Referring to the erasure channel's directed graph (Fig 2):

H(X) = 0.97095
H(Y) = 1.499
and
I(X; Y) = 0.77676
Thus
I(X; Y) < H(X) < H(Y)
The extra entropy (H(Y) > H(X)) is due to noise: the erasure symbol adds uncertainty at the channel output. Moreover, since I(X; Y) < H(X), some kind of coding is necessary to deliver the source information reliably.
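These three figures can be reproduced from the channel model. A short sketch, again assuming p(x = 0) = 0.6 and ε = 0.2:

```python
from math import log2

def H(probs):
    """Shannon entropy in bits, ignoring zero-probability outcomes."""
    return -sum(p * log2(p) for p in probs if p > 0)

p, eps = 0.6, 0.2  # assumed input bias and erasure probability

HX = H([p, 1 - p])                                 # 0.97095
HY = H([p * (1 - eps), (1 - p) * (1 - eps), eps])  # outputs {0, 1, erasure}
IXY = HY - H([eps, 1 - eps])                       # I(X;Y) = H(Y) - H(Y|X)
print(f"H(X) = {HX:.5f}, H(Y) = {HY:.4f}, I(X;Y) = {IXY:.5f}")
```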

Recall that information may be classified in terms of quality as useful and useless information. Using this classification, we turn next to a basic cryptography system.

Next:

A Basic Cryptography System ➽