You observe k i.i.d. copies of the discrete uniform random variable Xi, which takes values 1 through n with equal probability.

Define the random variable M as the maximum of these random variables, M = max(X1, X2, …, Xk).

Problem 1(a)
Find the probability that M≤m, as a function of m, for m∈{1,2,…,n}.

Problem 1(b)
Find the probability that M=1.

Problem 1(c)
Find the probability that M=m for m∈{2,3,…,n}.

Problem 1(d)
For n=2, find E[M] and Var(M) as a function of k.

E[M]=
Var[M]=

Problem 1(e)
As k (the number of samples) becomes very large, what is E[M] in terms of n?

As k→∞, E[M]→

Problem 1(a)

The probability that M ≤ m is equal to the probability that all k samples are less than or equal to m. Since each sample is uniformly distributed between 1 and n, the probability of each sample being less than or equal to m is (m/n). Therefore, the probability that M ≤ m is (m/n)^k.
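A quick sketch in Python that checks this CDF against a Monte Carlo estimate (the values n = 6, k = 3, m = 4 are assumed here purely for illustration):

```python
import random

# Exact CDF of the maximum: P(M <= m) = (m/n)^k
def cdf(m, n, k):
    return (m / n) ** k

n, k, m = 6, 3, 4  # assumed example values

# Monte Carlo estimate for comparison
random.seed(0)
trials = 100_000
hits = sum(max(random.randint(1, n) for _ in range(k)) <= m
           for _ in range(trials))
print(cdf(m, n, k), hits / trials)  # the two values should be close
```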

Problem 1(b)
The probability that M = 1 is the probability that all k samples are equal to 1. Since each sample is equally likely to be any value between 1 and n, the probability of each sample being equal to 1 is 1/n. Therefore, the probability that M = 1 is (1/n)^k.

Problem 1(c)
The probability that M = m is the probability that all k samples are at most m, minus the probability that all k samples are at most m − 1 (the maximum equals m exactly when it is at most m but not at most m − 1). Using the CDF from part (a), P(M = m) = (m/n)^k − ((m−1)/n)^k.
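This PMF can be sanity-checked in Python: the probabilities over m = 1, …, n should sum to 1 by telescoping (n = 6, k = 3 are assumed example values):

```python
# PMF of the maximum: P(M = m) = (m/n)^k - ((m-1)/n)^k
def pmf(m, n, k):
    return (m / n) ** k - ((m - 1) / n) ** k

n, k = 6, 3  # assumed example values
probs = [pmf(m, n, k) for m in range(1, n + 1)]
print(probs)
print(sum(probs))  # sanity check: a valid PMF sums to 1
```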

Problem 1(d)
For n = 2, the random variable M can only take the values 1 and 2. From part (b), P(M = 1) = (1/2)^k, and therefore P(M = 2) = 1 − (1/2)^k. We have:

E[M] = 1 * (1/2)^k + 2 * (1 − (1/2)^k) = 2 − (1/2)^k
Var[M] = (1 − E[M])^2 * (1/2)^k + (2 − E[M])^2 * (1 − (1/2)^k) = (1/2)^k * (1 − (1/2)^k)
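These closed forms can be evaluated directly from the two-point distribution; a small Python sketch (for a few example values of k):

```python
# For n = 2: P(M = 1) = (1/2)^k and P(M = 2) = 1 - (1/2)^k
def mean_var(k):
    p1 = 0.5 ** k
    mean = 1 * p1 + 2 * (1 - p1)                         # = 2 - (1/2)^k
    var = (1 - mean) ** 2 * p1 + (2 - mean) ** 2 * (1 - p1)
    return mean, var                                     # = (1/2)^k * (1 - (1/2)^k)

for k in (1, 2, 5):
    print(k, mean_var(k))
```

Note that for k = 1 this recovers the plain uniform case: E[M] = 3/2 and Var[M] = 1/4.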

Problem 1(e)
As k approaches infinity, the probability that at least one sample equals n approaches 1: P(M = n) = 1 − ((n−1)/n)^k, and ((n−1)/n)^k → 0 as k → ∞. Therefore M equals its maximum possible value n with probability approaching 1, and E[M] approaches n.
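This convergence is easy to observe numerically by computing E[M] from the PMF of part (c) for growing k (n = 6 is an assumed example value):

```python
# E[M] = sum_m m * P(M = m); as k grows this approaches n
def expected_max(n, k):
    return sum(m * ((m / n) ** k - ((m - 1) / n) ** k)
               for m in range(1, n + 1))

n = 6  # assumed example value
for k in (1, 10, 100):
    print(k, expected_max(n, k))
```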

Problem 1(a):

To find the probability that M≤m, note that M≤m holds exactly when each of the k random variables Xi is less than or equal to m. Since the random variables are independent and uniformly distributed on {1, …, n}, each is less than or equal to m with probability m/n.

Therefore, the probability that M≤m is given by P(M≤m) = (m/n)^k.

Problem 1(b):

To find the probability that M=1, we need to consider the probability that all k random variables Xi are equal to 1. Since each random variable has a probability of 1/n of being equal to 1, the probability that M=1 is given by P(M=1) = (1/n)^k.

Problem 1(c):

To find the probability that M=m (for m∈{2,3,…,n}), note that M=m holds exactly when every sample is at most m but not every sample is at most m−1.

These two events are nested, so the probability is the difference of the corresponding CDF values from part (a): P(M≤m) − P(M≤m−1).

Therefore, the probability that M=m is given by P(M=m) = (m/n)^k − ((m−1)/n)^k.

Problem 1(d):

For n=2, the random variable M can only take values 1 and 2.

E[M] represents the expected value of M, which is calculated as the sum of all possible values of M multiplied by their respective probabilities.

E[M] = 1 * P(M=1) + 2 * P(M=2)
= 1 * (1/2)^k + 2 * (1 - (1/2)^k)
= 2 - (1/2)^k

Var[M] represents the variance of M, which is calculated as the sum of the squared differences between each possible value of M and the expected value, weighted by their respective probabilities.

Var[M] = (1 - E[M])^2 * P(M=1) + (2 - E[M])^2 * P(M=2)
= (1/2)^k * (1 - (1/2)^k)
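A short simulation in Python gives an independent check of these n = 2 values (k = 3 is an assumed example value here):

```python
import random

# Monte Carlo check of E[M] and Var[M] for n = 2; k = 3 is an assumed example
random.seed(1)
k, trials = 3, 200_000
samples = [max(random.randint(1, 2) for _ in range(k)) for _ in range(trials)]
mean = sum(samples) / trials
var = sum((x - mean) ** 2 for x in samples) / trials
print(mean, var)  # expect roughly 2 - (1/2)^3 = 1.875 and (1/8)(7/8) = 0.109375
```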

Problem 1(e):

As k (the number of samples) becomes very large, the probability that at least one sample equals n approaches 1: P(M=n) = 1 − ((n−1)/n)^k → 1 as k→∞. This means M is almost certain to equal the maximum possible value in the range of Xi, which is n.

Therefore, as k→∞, E[M]→n.