Consider the "experimental design" problem of choosing when to make measurements. Under the assumptions of 1/ (1+t1^2+t2^2), and under the constraints t1 <= 0, t2< =10, find the values of t1 and t2 that minimize the mean squared error associated with the MAP estimator.

To find the values of t1 and t2 that minimize the mean squared error associated with the MAP estimator, we first need the posterior distribution under the given constraints, since both the MAP estimate and the mean squared error are determined by it.

The posterior distribution is given by:

P(t1, t2|y) ∝ P(y|t1, t2) * P(t1, t2)

where P(y|t1, t2) is the likelihood function and P(t1, t2) is the prior distribution.

Given that the likelihood function is 1/(1 + t1^2 + t2^2) and the prior distribution is uniform, we can write the posterior distribution as:

P(t1, t2|y) ∝ 1/(1 + t1^2 + t2^2)
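As a concrete reference point, this unnormalized density is easy to evaluate directly (a minimal sketch; the function name is ours, not from the problem statement):

```python
def unnormalized_posterior(t1, t2):
    # Unnormalized posterior from the expression above:
    # p(t1, t2 | y) ∝ 1 / (1 + t1^2 + t2^2)
    return 1.0 / (1.0 + t1**2 + t2**2)

print(unnormalized_posterior(0.0, 0.0))  # prints 1.0, the peak of the density
```

The density is maximized at (0, 0) and decays in every direction, which is what makes the constrained optimization discussed later well behaved.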

The mean squared error of an estimate t1_hat of t1 is the posterior-weighted squared error:

E(t1_hat) = ∫∫ (t1_hat - t1)^2 * P(t1, t2|y) dt1 dt2

Substituting the posterior gives:

E(t1_hat) = ∫∫ (t1_hat - t1)^2 * (1/(1 + t1^2 + t2^2)) dt1 dt2

Since we are interested in minimizing the mean squared error, we minimize E with respect to the estimate, subject to the constraints t1 <= 0 and t2 <= 10.
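The double integral above can be approximated numerically. One caveat worth flagging: 1/(1 + t1^2 + t2^2) is not integrable over the whole plane, so the sketch below truncates the domain to a finite box [-L, L] x [-L, L] and normalizes the density there (the truncation, grid resolution, and function name are our assumptions, not part of the problem statement):

```python
def mse_t1(t1_hat, L=10.0, N=201):
    # Grid approximation of E = ∫∫ (t1_hat - t1)^2 p(t1, t2 | y) dt1 dt2,
    # with p ∝ 1 / (1 + t1^2 + t2^2), truncated to [-L, L]^2 and
    # normalized on that box.
    step = 2.0 * L / (N - 1)
    num = 0.0
    norm = 0.0
    for i in range(N):
        t1 = -L + i * step
        for j in range(N):
            t2 = -L + j * step
            w = 1.0 / (1.0 + t1**2 + t2**2)
            num += (t1_hat - t1)**2 * w
            norm += w
    return num / norm
```

On a symmetric box the approximation is minimized at t1_hat = 0, the posterior mean, which is consistent with the general fact that the posterior mean minimizes mean squared error.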

This minimization problem has no straightforward analytical solution, so numerical optimization techniques such as (projected) gradient descent are needed to find the values of t1 and t2 that minimize the mean squared error associated with the MAP estimator.
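As one possible numerical route, here is a minimal projected-gradient sketch for the constrained MAP problem itself: it minimizes the negative log posterior, log(1 + t1^2 + t2^2), and clips each iterate back into the feasible set t1 <= 0, t2 <= 10 (the starting point, step size, and iteration count are illustrative choices, not from the problem statement):

```python
def neg_log_posterior_grad(t1, t2):
    # Gradient of log(1 + t1^2 + t2^2), the negative log posterior
    # up to an additive normalization constant.
    d = 1.0 + t1**2 + t2**2
    return 2.0 * t1 / d, 2.0 * t2 / d

def project(t1, t2):
    # Project onto the feasible set t1 <= 0, t2 <= 10.
    return min(t1, 0.0), min(t2, 10.0)

def constrained_map(t1=-1.0, t2=5.0, lr=0.5, iters=2000):
    # Projected gradient descent: step along the negative gradient,
    # then project back onto the constraint set.
    for _ in range(iters):
        g1, g2 = neg_log_posterior_grad(t1, t2)
        t1, t2 = project(t1 - lr * g1, t2 - lr * g2)
    return t1, t2
```

Because the density peaks at the origin and (0, 0) already satisfies both constraints, this sketch converges to (t1, t2) ≈ (0, 0); with an active constraint, the projection step is what keeps the iterates feasible.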