In this problem, we study a simple noisy communication channel. Suppose that X is a binary signal that takes values -1 and 1 with equal probability. This signal X is sent through a noisy communication channel, and the medium of transmission adds an independent noise term. More precisely, the received signal is Y=X+N, where N is standard normal, independent of X.

The decoder receives the value y of Y, and decides whether X was 1 or -1, using the following decoding rule: it decides in favor of 1 if and only if

\mathbf{P}(X=1\mid Y=y)>2\,\mathbf{P}(X=-1\mid Y=y).

It turns out that the decoding rule can be expressed in the form: decide in favor of 1 if and only if Y>t, for some threshold t. Find the threshold t.

As an intermediate step, find p_1\triangleq \mathbf{P}(X=1|Y=y).
p_1=\quad

Now find t.
t=\quad

To find the threshold t, we need to first find p_1, which is the probability that X = 1 given Y = y.

Using Bayes' theorem, we have:

p_1 = P(X = 1 | Y = y)
= P(Y = y | X = 1) * P(X = 1) / P(Y = y)

Since X and N are independent, we can rewrite P(Y = y | X = 1) as P(N = y - 1). Because N is continuous, expressions of this form should be read as values of its probability density function evaluated at a point, not as probabilities of single outcomes. For a standard normal, the density is:

f(x) = (1 / sqrt(2π)) * exp(-(x^2) / 2)
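For concreteness, this density can be evaluated numerically; the sketch below uses only the standard library (the function name `std_normal_pdf` is our own):

```python
import math

def std_normal_pdf(x):
    """Standard normal density f(x) = exp(-x^2 / 2) / sqrt(2*pi)."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# The density peaks at 0 with value 1/sqrt(2*pi) ~ 0.3989
# and is symmetric about 0: f(a) == f(-a).
print(std_normal_pdf(0.0))  # ~0.3989
print(std_normal_pdf(1.0))  # ~0.2420
```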

Therefore, we have:

p_1 = P(N = y - 1) * 0.5 / P(Y = y)

Now, P(Y = y) can be calculated by marginalizing over the possible values of X:

P(Y = y) = P(Y = y, X = 1) + P(Y = y, X = -1)
= P(X = 1) * P(Y = y | X = 1) + P(X = -1) * P(Y = y | X = -1)
= 0.5 * P(N = y - 1) + 0.5 * P(N = y + 1)

Substituting this back into the expression for p_1, we get:

p_1 = P(N = y - 1) * 0.5 / (0.5 * P(N = y - 1) + 0.5 * P(N = y + 1))
= P(N = y - 1) / (P(N = y - 1) + P(N = y + 1))

Now, we can simplify this expression by evaluating the probability density function of N at y - 1 and y + 1:

p_1 = f(y - 1) / (f(y - 1) + f(y + 1))

Simplifying further, we get:

p_1 = (1 / sqrt(2π)) * exp(-((y - 1)^2) / 2) / ((1 / sqrt(2π)) * exp(-((y - 1)^2) / 2) + (1 / sqrt(2π)) * exp(-((y + 1)^2) / 2))
= exp(-((y - 1)^2) / 2) / (exp(-((y - 1)^2) / 2) + exp(-((y + 1)^2) / 2))
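As a quick numerical check (a sketch, not part of the derivation), the density-ratio form and this exponential form agree, and both collapse to a logistic function of y, since (y - 1)^2 - (y + 1)^2 = -4y:

```python
import math

def pdf(x):
    # Standard normal density.
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def p1_density_ratio(y):
    # p_1 = f(y - 1) / (f(y - 1) + f(y + 1))
    return pdf(y - 1) / (pdf(y - 1) + pdf(y + 1))

def p1_exponential(y):
    # Same quantity after the 1/sqrt(2*pi) factors cancel.
    a = math.exp(-((y - 1) ** 2) / 2)
    b = math.exp(-((y + 1) ** 2) / 2)
    return a / (a + b)

def p1_logistic(y):
    # Dividing numerator and denominator by exp(-((y - 1)^2) / 2)
    # leaves 1 / (1 + exp(-2y)), using (y - 1)^2 - (y + 1)^2 = -4y.
    return 1.0 / (1.0 + math.exp(-2.0 * y))

for y in (-2.0, -0.5, 0.0, 0.3, 1.7):
    assert abs(p1_density_ratio(y) - p1_exponential(y)) < 1e-12
    assert abs(p1_exponential(y) - p1_logistic(y)) < 1e-12
```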

Now, to find the threshold t, we apply the decoding rule p_1 > 2 p_{-1}, where p_{-1} = P(X = -1 | Y = y) = 1 - p_1:

p_1 > 2(1 - p_1)
3p_1 > 2
p_1 > 2/3

To turn this into a condition on y, divide the numerator and denominator of p_1 by exp(-((y - 1)^2) / 2). Since ((y - 1)^2 - (y + 1)^2) / 2 = -2y, this gives

p_1 = 1 / (1 + exp(-2y)),

so the condition p_1 > 2/3 becomes:

1 + exp(-2y) < 3/2
exp(-2y) < 1/2
2y > ln 2
y > (ln 2) / 2

Therefore, t = (ln 2) / 2 ≈ 0.3466.
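Since the rule p_1 > 2(1 - p_1) is equivalent to p_1 > 2/3, the threshold t solves p_1(t) = 2/3. A short bisection (a sketch, using the logistic form 1/(1 + exp(-2y)) of the posterior, which is strictly increasing in y) recovers it numerically:

```python
import math

def p1(y):
    # Posterior P(X=1 | Y=y) = 1 / (1 + exp(-2y)).
    return 1.0 / (1.0 + math.exp(-2.0 * y))

# Bisection: find y with p1(y) = 2/3 on a bracket where p1 crosses 2/3.
lo, hi = -5.0, 5.0
for _ in range(100):
    mid = (lo + hi) / 2
    if p1(mid) < 2 / 3:
        lo = mid
    else:
        hi = mid

t_numeric = (lo + hi) / 2
t_exact = math.log(2) / 2
print(t_numeric, t_exact)  # both ~0.3466
```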

Alternatively, the threshold t can be found by working with the decoding rule more directly, again starting from p_1.

Given that Y = X + N, where N is standard normal and independent of X, we can write the conditional probability as:

P(X=1 | Y=y) = P(X=1, Y=y) / P(Y=y)

To calculate P(X=1, Y=y), we can use the fact that X takes values -1 and 1 with equal probability. So,

P(X=1, Y=y) = P(X=1) * P(Y=y | X=1)
= 1/2 * P(Y=y | X=1)

Since Y = X + N, we can substitute Y with X + N:

P(X=1, Y=y) = 1/2 * P(X+ N = y | X=1)

By Bayes' rule, both posteriors share the same denominator P(Y = y), so the decoding rule can be compared through the joint terms:

P(X=1 | Y=y) > 2 * P(X=-1 | Y=y)
1/2 * P(X + N = y | X=1) > 2 * 1/2 * P(X + N = y | X=-1)

Simplifying the above expression (the factors of 1/2 cancel, but the factor of 2 from the decoding rule remains), we get:

P(X + N = y | X=1) > 2 * P(X + N = y | X=-1)

Since N is standard normal and independent of X, conditioning on X = 1 turns the event {X + N = y} into {N = y - 1}, and conditioning on X = -1 turns it into {N = y + 1}. Because N is continuous, these expressions should be read as values of its density, so the rule becomes:

P(N = y - 1) > 2 * P(N = y + 1)

Now, for a standard normal distribution, the probability density function at a point is given by:

f(x) = (1/sqrt(2*pi)) * e^(-x^2/2)

So, we have:

P(N = y - 1) = f(y - 1)
P(N = y + 1) = f(y + 1)

Substituting these values back, we get:

f(y - 1) > 2 * f(y + 1)

Now, we need to find the values of y for which this inequality holds. The 1/sqrt(2*pi) factors cancel, and taking logarithms of both sides:

e^(-(y-1)^2/2) > 2 * e^(-(y+1)^2/2)
-(y - 1)^2 / 2 > ln 2 - (y + 1)^2 / 2
((y + 1)^2 - (y - 1)^2) / 2 > ln 2
2y > ln 2
y > (ln 2) / 2

So, the decoder decides in favor of 1 if and only if y > (ln 2) / 2, which gives the threshold t = (ln 2) / 2 ≈ 0.3466.
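A Monte Carlo sanity check (a sketch; note the factor of 2 in the decoding rule must be kept) confirms that the posterior-comparison rule and the fixed threshold y > ln(2)/2 make identical decisions on simulated channel outputs:

```python
import math
import random

random.seed(0)
t = math.log(2) / 2  # derived threshold

def pdf(x):
    # Standard normal density.
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

mismatches = 0
for _ in range(10000):
    x = random.choice([-1, 1])      # equiprobable binary signal
    y = x + random.gauss(0, 1)      # channel adds standard normal noise
    # Decoding rule: decide 1 iff P(X=1|Y=y) > 2 P(X=-1|Y=y),
    # equivalently f(y - 1) > 2 f(y + 1).
    rule_decision = pdf(y - 1) > 2 * pdf(y + 1)
    threshold_decision = y > t
    if rule_decision != threshold_decision:
        mismatches += 1

print(mismatches)  # 0: the two rules agree on every sample
```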