Let T1,T2,…,Tn be i.i.d. observations, each drawn from a common normal distribution with mean zero. With probability 1/2 this normal distribution has variance 1, and with probability 1/2 it has variance 4.

Based on the observed values t1,t2,…,tn, we use the MAP rule to decide whether the normal distribution from which they were drawn has variance 1 or variance 4. The MAP rule decides that the underlying normal distribution has variance 1 if and only if

∣c1*∑(i=1 to n)ti^2 + c2*∑(i=1 to n)ti∣ < 1.

Find the values of c1≥0 and c2≥0 such that this is true. Express your answer in terms of n, and use 'ln' to denote the natural logarithm function, as in 'ln(3)'.

To find the values of c1 and c2 that satisfy the given condition, we need to apply the Maximum A Posteriori (MAP) rule.

The MAP rule states that we choose the hypothesis that maximizes the posterior probability given the observed data. In this case, that means choosing whichever hypothesis, variance 1 or variance 4, has the larger posterior probability.

Let's denote H_1 as the hypothesis that the normal distribution has variance 1, and H_2 as the hypothesis that the normal distribution has variance 4.

According to Bayes' theorem, the posterior probability of H_1 given the observed data can be calculated as:

P(H_1|t1,t2,...,tn) = (P(H_1) * P(t1,t2,...,tn|H_1)) / P(t1,t2,...,tn),

where P(H_1) is the prior probability of H_1, P(t1,t2,...,tn|H_1) is the likelihood of the observed data given H_1, and P(t1,t2,...,tn) is the evidence or marginal likelihood.

Similarly, the posterior probability of H_2 given the observed data can be calculated as:

P(H_2|t1,t2,...,tn) = (P(H_2) * P(t1,t2,...,tn|H_2)) / P(t1,t2,...,tn).
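The posterior in the formulas above can be evaluated numerically. Below is a minimal sketch (the function name `posterior_h1` is illustrative, not part of the problem) that computes P(H_1|t1,...,tn) under equal priors, working in log space to avoid underflow for large n:

```python
import numpy as np

def posterior_h1(t):
    """P(H1 | t) via Bayes' theorem with equal priors P(H1) = P(H2) = 1/2.

    Computed in log space so large samples do not underflow to zero.
    """
    t = np.asarray(t, dtype=float)
    n = t.size
    s = np.sum(t ** 2)
    ll1 = n * np.log(1 / np.sqrt(2 * np.pi)) - s / 2   # log P(t | H1), variance 1
    ll2 = n * np.log(1 / np.sqrt(8 * np.pi)) - s / 8   # log P(t | H2), variance 4
    # Equal priors cancel out of the ratio; normalize the two likelihoods.
    m = max(ll1, ll2)
    p1, p2 = np.exp(ll1 - m), np.exp(ll2 - m)
    return p1 / (p1 + p2)
```

For a single observation t1 = 0, the likelihood ratio is exactly 2 in favor of variance 1, so the posterior is 2/3.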

Given that both H_1 and H_2 have equal prior probabilities of 1/2, the prior probabilities cancel out and we only need to compare the likelihoods.

For H_1, the likelihood is given by the probability density function (PDF) of the normal distribution with variance 1:

P(t1,t2,...,tn|H_1) = (1/sqrt(2*pi))^n * exp(-(1/2)*∑(i=1 to n)ti^2).

For H_2, the likelihood is given by the PDF of the normal distribution with variance 4:

P(t1,t2,...,tn|H_2) = (1/sqrt(8*pi))^n * exp(-(1/8)*∑(i=1 to n)ti^2).

To determine the values of c1 and c2, we need to compare the logarithms of these likelihoods:

ln(P(t1,t2,...,tn|H_1)) = n*ln(1/sqrt(2*pi)) - (1/2)*∑(i=1 to n)ti^2,

ln(P(t1,t2,...,tn|H_2)) = n*ln(1/sqrt(8*pi)) - (1/8)*∑(i=1 to n)ti^2.
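These two log-likelihoods translate directly into code. The following sketch (function names are illustrative) computes both and makes the MAP decision by direct comparison, which with equal priors is exactly the MAP rule:

```python
import numpy as np

def log_likelihoods(t):
    """Log-likelihoods of the sample t under H1 (variance 1) and H2 (variance 4)."""
    t = np.asarray(t, dtype=float)
    n = t.size
    s = np.sum(t ** 2)  # sum of squared observations
    ll_h1 = n * np.log(1 / np.sqrt(2 * np.pi)) - s / 2   # variance 1
    ll_h2 = n * np.log(1 / np.sqrt(8 * np.pi)) - s / 8   # variance 4
    return ll_h1, ll_h2

def map_decides_variance_1(t):
    """With equal priors, the MAP rule picks H1 iff its log-likelihood is larger."""
    ll_h1, ll_h2 = log_likelihoods(t)
    return ll_h1 > ll_h2
```

Observations close to zero favor variance 1; observations with a large sum of squares favor variance 4.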

Recall the target form of the MAP decision condition:

|c1*∑(i=1 to n)ti^2 + c2*∑(i=1 to n)ti| < 1.

The MAP rule decides in favor of H_1 exactly when the posterior of H_1 exceeds that of H_2, which (with equal priors) reduces to comparing the log-likelihoods:

ln(P(t1,t2,...,tn|H_1)) > ln(P(t1,t2,...,tn|H_2)).

Substituting the expressions derived above:

n*ln(1/sqrt(2*pi)) - (1/2)*∑(i=1 to n)ti^2 > n*ln(1/sqrt(8*pi)) - (1/8)*∑(i=1 to n)ti^2.

Collecting the sums on the left and the constants on the right:

(1/8 - 1/2)*∑(i=1 to n)ti^2 > n*(ln(1/sqrt(8*pi)) - ln(1/sqrt(2*pi))),

-(3/8)*∑(i=1 to n)ti^2 > n*ln(sqrt(2*pi)/sqrt(8*pi)) = n*ln(1/2) = -n*ln(2).

Multiplying both sides by -1 (which reverses the inequality) and dividing by the positive quantity n*ln(2):

(3/(8*n*ln(2)))*∑(i=1 to n)ti^2 < 1.

Since ∑(i=1 to n)ti^2 ≥ 0, the left-hand side is already nonnegative, so adding the absolute value in the stated condition changes nothing. Matching this against

∣c1*∑(i=1 to n)ti^2 + c2*∑(i=1 to n)ti∣ < 1,

note that both likelihoods depend on the data only through ∑(i=1 to n)ti^2, so the decision cannot involve ∑(i=1 to n)ti and the coefficient on that term must be zero. Therefore:

c1 = 3/(8*n*ln(2)), c2 = 0.
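As a sanity check (a sketch, assuming the derived values c1 = 3/(8*n*ln(2)) and c2 = 0), the following simulation confirms that the stated condition ∣c1*S + c2*M∣ < 1 agrees with a direct comparison of the two log-likelihoods on random samples drawn from both hypotheses:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
c1 = 3 / (8 * n * np.log(2))  # derived coefficient
c2 = 0.0

all_match = True
for _ in range(1000):
    sigma = rng.choice([1.0, 2.0])         # std. dev. 1 or 2, i.e. variance 1 or 4
    t = rng.normal(0.0, sigma, size=n)
    s, m = np.sum(t ** 2), np.sum(t)
    # Decision via the stated condition |c1*S + c2*M| < 1 ...
    rule_says_var1 = abs(c1 * s + c2 * m) < 1
    # ... versus direct comparison of the log-likelihoods (equal priors).
    ll1 = n * np.log(1 / np.sqrt(2 * np.pi)) - s / 2
    ll2 = n * np.log(1 / np.sqrt(8 * np.pi)) - s / 8
    all_match &= (rule_says_var1 == (ll1 > ll2))
```

Both decision rules agree on every sample, since each is equivalent to ∑ti^2 < (8/3)*n*ln(2).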