Let X, Y be discrete random variables related as Y = g(X), where g is a deterministic function. The ordering of their entropies satisfies:


1. H(X) ≤ H(Y), equality holds if g is a one-to-one mapping
2. H(X) ≥ H(Y), equality holds if g is a one-to-one mapping
3. H(X) ≤ H(Y), equality holds when H(X|g(X)) = 0
4. H(X) ≥ H(Y), equality holds when H(X|g(X)) = 0

The correct ordering of entropies between X and Y is:

H(X) ≥ H(Y), equality holds when H(X|g(X)) = 0.

Explanation:
- Entropy measures the uncertainty or randomness in a random variable.
- H(X) represents the entropy of random variable X.
- H(Y) represents the entropy of random variable Y.
- In this case, Y is a deterministic function g(X) of X, meaning the value of Y depends solely on the value of X.
- If g is a one-to-one mapping (i.e., each value of X maps to a distinct value of Y), then X can be recovered exactly from Y, so knowing g(X) removes all uncertainty about X and H(X|g(X)) = 0. In this case equality holds: H(X) = H(Y).
- If g maps two or more values that X takes with positive probability onto the same value of Y, information is lost: Y carries less information than X, and H(Y) is strictly smaller than H(X). For example, if X is uniform on {1, 2, 3, 4} and g(x) = x mod 2, then H(X) = 2 bits but H(Y) = 1 bit. In general H(X) ≥ H(Y), with equality exactly when H(X|g(X)) = 0.

To determine which statement is correct, let's first review the concepts of entropy and conditional entropy.

Entropy:
Entropy is a measure of the uncertainty or randomness associated with a random variable. For a discrete random variable X, the entropy H(X) is given by the formula:

H(X) = - Σ p(x) log₂ p(x)

where p(x) is the probability mass function of X. In simple terms, entropy tells us how much information is required, on average, to convey the outcomes of X.
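To make the formula concrete, here is a minimal Python sketch; the function name entropy and the example distributions are illustrative choices, not part of the question.

```python
import math

def entropy(pmf):
    """Shannon entropy in bits of a pmf given as a list of probabilities."""
    return -sum(p * math.log2(p) for p in pmf if p > 0)

# A fair coin carries 1 bit of uncertainty; a biased coin carries less.
print(entropy([0.5, 0.5]))  # 1.0
print(entropy([0.9, 0.1]))  # ~0.47
```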

Conditional Entropy:
Conditional entropy quantifies the average uncertainty of a random variable given the value of another random variable. For two discrete random variables X and Y, the conditional entropy H(X|Y) is given by:

H(X|Y) = - Σ p(x,y) log₂ p(x|y)

where p(x,y) is the joint probability mass function of X and Y, and p(x|y) is the conditional probability mass function of X given Y.
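As a sketch of how this could be computed when the joint pmf is given as a Python dictionary {(x, y): p(x, y)} (the function name cond_entropy and the example distribution are assumptions made for illustration):

```python
import math

def cond_entropy(joint):
    """H(X|Y) in bits, for a joint pmf given as {(x, y): p(x, y)}."""
    # Marginal p(y), needed for p(x|y) = p(x, y) / p(y).
    p_y = {}
    for (x, y), p in joint.items():
        p_y[y] = p_y.get(y, 0.0) + p
    return -sum(p * math.log2(p / p_y[y]) for (x, y), p in joint.items() if p > 0)

# X uniform on {0, 1, 2, 3}, Y = X mod 2: knowing Y leaves 1 bit of uncertainty about X.
joint = {(x, x % 2): 0.25 for x in range(4)}
print(cond_entropy(joint))  # 1.0
```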

Now let's analyze the given statements:

1. H(X) ≤ H(Y), equality holds if g is a one-to-one mapping:
This statement is false. Since Y = g(X) is a deterministic function of X, applying g can only merge values of X; it cannot create new uncertainty. Hence H(Y) can never exceed H(X), and the inequality is written in the wrong direction.

2. H(X) ≥ H(Y), equality holds if g is a one-to-one mapping:
The inequality here is correct, and a one-to-one g is indeed sufficient for equality, because X can then be recovered from Y. However, one-to-oneness is not the exact condition: equality already holds whenever g is one-to-one on the support of X, which is precisely the condition H(X|g(X)) = 0. Statement 4 states this condition exactly, so it is the better answer.

3. H(X) ≤ H(Y), equality holds when H(X|g(X)) = 0:
This statement is false. The equality condition is the right one (H(X|g(X)) = 0 means that knowing g(X) completely determines X), but the inequality points the wrong way: a deterministic function cannot increase entropy, so H(Y) ≤ H(X), not the reverse.

4. H(X) ≥ H(Y), equality holds when H(X|g(X)) = 0:
This statement is true. By the chain rule, H(X, g(X)) = H(g(X)) + H(X|g(X)), and since g(X) is completely determined by X we also have H(X, g(X)) = H(X). Therefore H(X) = H(Y) + H(X|g(X)) ≥ H(Y), because conditional entropy is non-negative. Equality holds exactly when H(X|g(X)) = 0, i.e., when knowing g(X) completely determines X (g is one-to-one on the support of X), as the numeric check below illustrates.

In conclusion, the correct statement is: H(X) ≥ H(Y), equality holds when H(X|g(X)) = 0.