Hard and Soft Decisions

In the example above, observe that:

1. There is no feedback; therefore, this is a memoryless information channel.
2. y0 and y3 receive (red) arrows only from x0 and x1, respectively, with high probabilities p(y0) = 0.32 and p(y3) = 0.48. This means the occurrences of y0 and y3 are fairly certain, since there is no ambiguity. Therefore, the decision resulting in the outputs y0 and y3 is called a hard decision, and y0 and y3 are both called hard decision symbols.

3. On the other hand, y1 and y2 receive (red and blue) arrows from both x0 and x1, resulting in low probabilities p(y1) = 0.09 and p(y2) = 0.11. This means there is ambiguity in deciding between x0 and x1; in other words, the occurrences of y1 and y2 are not definite. Therefore, the decision resulting in the outputs y1 and y2 is called a soft decision, and y1 and y2 are both called soft decision symbols.

4. Finally, communication theorists call such 2-input, 4-output models Soft Decision Models. The above communication system is an example of a 2-bit Digital-to-Analog converter.
Recall that for the above communication system, H(X|Y) = 0.157533 ≠ 0; hence information is lost through the information channel. The reason for this information loss is the soft decision symbols y1 and y2.
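The quantities quoted above can be computed directly from a joint distribution p(x, y). The sketch below is a minimal Python illustration; since the channel matrix itself is not reproduced in this excerpt, the joint distribution used here is an assumption chosen only to match the stated output probabilities p(y0) = 0.32, p(y1) = 0.09, p(y2) = 0.11, p(y3) = 0.48, so the computed I(X; Y) and H(X|Y) may differ slightly from the quoted values.

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# ASSUMED joint distribution p(x, y) for the 2-input / 4-output channel.
# Rows are x0, x1; columns are y0..y3.  This is a guess consistent with
# the stated p(y) values, not the channel matrix from the notes.
joint = [
    [0.32, 0.06, 0.02, 0.00],  # x0 (so p(x0) = 0.40)
    [0.00, 0.03, 0.09, 0.48],  # x1 (so p(x1) = 0.60)
]

p_x = [sum(row) for row in joint]        # marginal p(x)
p_y = [sum(col) for col in zip(*joint)]  # marginal p(y)

# H(X|Y) = sum over y of p(y) * H(X | Y = y)
h_x_given_y = sum(
    p_y[j] * entropy([joint[i][j] / p_y[j] for i in range(2)])
    for j in range(4) if p_y[j] > 0
)
h_x = entropy(p_x)
mutual_info = h_x - h_x_given_y  # I(X;Y) = H(X) - H(X|Y)

print(f"p(y)   = {p_y}")
print(f"H(X)   = {h_x:.6f}")    # 0.970950, matching the notes
print(f"I(X;Y) = {mutual_info:.6f}")
print(f"H(X|Y) = {h_x_given_y:.6f}")
```

Note that the hard decision outputs y0 and y3 contribute nothing to H(X|Y) (each is reachable from only one input), so the entire information loss comes from the soft decision columns y1 and y2.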

## Example of Hard Decision information channel

Consider the information channel.

Then:
1. Compute the entropy H(X):
H(X) = 0.970950
2. Compute the mutual information I(X; Y):
I(x0; Y) = 0.389461
I(x1; Y) = 0.300642
Therefore,
I(X; Y) = I(x0; Y) + I(x1; Y) = 0.690103
3. Compute the information loss:
The information loss from the original symbols X after observing the output Y is given by
H(X|Y) = H(X) − I(X; Y) = 0.280847
Thus, H(X|Y) ≠ 0, and hence there is information loss through the information channel. This information loss is due to the hard decision symbols y1 and y2.
Note that the above communication system is an example of a 1-bit Analog-to-Digital converter.
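The per-input terms I(x0; Y) and I(x1; Y) above can be computed as I(x_i; Y) = Σ_y p(x_i, y) · log2( p(y|x_i) / p(y) ). The sketch below reproduces the quoted numbers under an assumption: the channel matrix is not shown in this excerpt, but a binary symmetric channel with crossover probability 0.05 and input distribution p(x) = (0.4, 0.6) yields the quoted values, so it is used here for illustration.

```python
import math

def partial_mutual_info(joint, p_y, i):
    """I(x_i; Y) = sum over y of p(x_i, y) * log2( p(y | x_i) / p(y) )."""
    p_xi = sum(joint[i])
    return sum(
        joint[i][j] * math.log2((joint[i][j] / p_xi) / p_y[j])
        for j in range(len(p_y)) if joint[i][j] > 0
    )

# ASSUMED joint p(x, y): p(x) = (0.4, 0.6) over a binary symmetric
# channel with crossover probability 0.05.  This is a reconstruction
# that reproduces the quoted values, not a matrix taken from the notes.
joint = [
    [0.40 * 0.95, 0.40 * 0.05],  # x0 -> (y0, y1)
    [0.60 * 0.05, 0.60 * 0.95],  # x1 -> (y0, y1)
]
p_y = [sum(col) for col in zip(*joint)]

i_x0 = partial_mutual_info(joint, p_y, 0)
i_x1 = partial_mutual_info(joint, p_y, 1)
print(f"I(x0;Y) = {i_x0:.6f}")         # ≈ 0.389461
print(f"I(x1;Y) = {i_x1:.6f}")         # ≈ 0.300642
print(f"I(X;Y)  = {i_x0 + i_x1:.6f}")  # ≈ 0.690103
```

Summing the per-input terms recovers the total mutual information, since I(X; Y) is exactly the expectation of log2(p(y|x)/p(y)) over the joint distribution.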

## Comparing the above examples: 2-bit Digital-to-Analog vs. 1-bit Analog-to-Digital converters

| Soft Decision Model (e.g., 2-bit Digital-to-Analog converter) | Information Measurement | Hard Decision Model (e.g., 1-bit Analog-to-Digital converter) |
|---|---|---|
| 0.970950 | Entropy H(X) | 0.970950 |
| 0.813417 | Mutual Info. I(X; Y) | 0.690103 |
| 0.157533 | Info. Loss H(X\|Y) | 0.280847 |

Observe that:
1. I(X; Y) for the soft decision model (0.813417) is greater than that for the hard decision model (0.690103).
2. H(X|Y) for the soft decision model (0.157533) is less than that for the hard decision model (0.280847).

Inference: Generalizing from these observations, a soft decision model retains more mutual information, and therefore loses less information, than a hard decision model.
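This inference can be illustrated numerically: turning soft outputs into hard decisions is a deterministic processing step on Y, and such processing can never increase I(X; Y). The sketch below merges the four soft outputs into two hard levels ({y0, y1} and {y2, y3}) and shows the mutual information drop. The soft joint distribution used is an assumption chosen to match the stated p(y) values, since the actual channel matrix is not reproduced in this excerpt.

```python
import math

def mutual_info(joint):
    """I(X;Y) in bits from a joint distribution (one row per x value)."""
    p_x = [sum(row) for row in joint]
    p_y = [sum(col) for col in zip(*joint)]
    return sum(
        p * math.log2(p / (p_x[i] * p_y[j]))
        for i, row in enumerate(joint)
        for j, p in enumerate(row) if p > 0
    )

# ASSUMED 2-input / 4-output soft decision joint p(x, y), consistent
# with the output probabilities quoted in the notes.
soft = [
    [0.32, 0.06, 0.02, 0.00],  # x0
    [0.00, 0.03, 0.09, 0.48],  # x1
]

# Hard decision: quantize the four soft outputs to two levels by
# merging {y0, y1} -> y0' and {y2, y3} -> y1'.
hard = [[row[0] + row[1], row[2] + row[3]] for row in soft]

i_soft = mutual_info(soft)
i_hard = mutual_info(hard)
print(f"I(X;Y) soft = {i_soft:.6f}")
print(f"I(X;Y) hard = {i_hard:.6f}")  # ≈ 0.690103
print(f"info lost by quantizing = {i_soft - i_hard:.6f}")
```

The hard decision value lands at the quoted 0.690103, while the soft decision value stays strictly larger, matching the observations above: quantizing away the ambiguous soft symbols costs mutual information.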