We have k independent and identically distributed copies of the discrete uniform random variable Xi, which takes values 1 through n with equal probability.

M = max(X1, X2, …, Xk)
Find the probability that M≤m, as a function of m, for m∈{1,2,…,n}.

To find the probability that M ≤ m, we need to find the probability that all k independent random variables Xi are less than or equal to m.

Since each Xi is a discrete uniform random variable on the values 1 through n, exactly m of the n equally likely values are at most m, so the probability that each Xi is less than or equal to m is m/n.

Since the random variables are independent, the probability of all k variables being less than or equal to m is the product of their individual probabilities.

Therefore, the probability that M ≤ m is (m/n)^k.
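This product rule is easy to check empirically. The sketch below is a Monte Carlo estimate (the choices n=6, k=3, m=4 are arbitrary illustrative values, not part of the problem) compared against the formula (m/n)^k:

```python
import random

def simulate_cdf(n, k, m, trials=100_000, seed=0):
    """Estimate P(M <= m), where M is the max of k iid uniform{1..n} draws."""
    rng = random.Random(seed)
    hits = sum(
        max(rng.randint(1, n) for _ in range(k)) <= m
        for _ in range(trials)
    )
    return hits / trials

n, k, m = 6, 3, 4          # illustrative values
est = simulate_cdf(n, k, m)
exact = (m / n) ** k       # the formula derived above
```

With 100,000 trials the estimate typically lands within a few thousandths of the exact value (4/6)^3 ≈ 0.296.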

Find the probability that M=1.

To find the probability that M=1, we need to find the probability that all k independent random variables Xi are equal to 1.

Since each Xi is a discrete uniform random variable with values 1 through n, the probability that each Xi takes the value 1 is 1/n (since each value has equal probability).

Since the random variables are independent, the probability of all k variables being equal to 1 is the product of their individual probabilities.

Therefore, the probability that M=1 is (1/n)^k.

Find the probability that M=m for m∈{2,3,…,n}.

To find the probability that M=m for m ∈ {2, 3, ..., n}, note that the event M=m means that every Xi is at most m and at least one Xi is exactly equal to m. The easiest route is through the CDF we already derived: the event M≤m is the disjoint union of the events M≤m−1 and M=m.

Therefore, the probability that M=m is given by:

P(M=m) = P(M≤m) − P(M≤m−1) = (m/n)^k − ((m−1)/n)^k = (m^k − (m−1)^k)/n^k

(Beware a tempting but incorrect count: taking the probability 1/n that one particular Xi equals m, multiplying by the probability ((m−1)/n)^(k−1) that the other k−1 fall below m, and then multiplying by k gives k·(1/n)·((m−1)/n)^(k−1). This is wrong because more than one of the Xi can equal m, so the k cases being added are not disjoint.)

Note that this formula also holds for m=1, where it reduces to P(M=1) = (1/n)^k, matching the answer above.
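The identity P(M=m) = P(M≤m) − P(M≤m−1) = (m^k − (m−1)^k)/n^k can be verified exactly by brute-force enumeration of all outcomes for small n and k (n=5, k=3 below are illustrative choices):

```python
from fractions import Fraction
from itertools import product

def pmf_of_max(n, k):
    """Exact PMF of M = max of k iid uniform{1..n}, by enumerating all n**k outcomes."""
    counts = {m: 0 for m in range(1, n + 1)}
    for outcome in product(range(1, n + 1), repeat=k):
        counts[max(outcome)] += 1
    total = Fraction(n) ** k
    return {m: Fraction(c) / total for m, c in counts.items()}

n, k = 5, 3                # illustrative values
pmf = pmf_of_max(n, k)
for m in range(1, n + 1):
    closed_form = Fraction(m, n) ** k - Fraction(m - 1, n) ** k
    assert pmf[m] == closed_form
```

Exact rational arithmetic (`Fraction`) makes the comparison an equality rather than a floating-point approximation.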

For n=2, find E[M] and Var(M) as a function of k.

To find the expected value of M, E[M], we first need to find the probability distribution of M.

When n = 2, there are two possibilities for each individual random variable Xi: Xi = 1 or Xi = 2. The maximum value M can take is either 1 or 2.

To find the probability distribution of M, we need to find the probabilities P(M=1) and P(M=2).

P(M=1) is the probability that all k independent random variables Xi take the value 1. Since each Xi has a probability of 1/2, the probability that each Xi equals 1 is (1/2)^k.

P(M=2) is the probability that at least one of the k independent random variables Xi takes the value 2. This is equal to 1 - P(M=1).

Therefore, the probability distribution of M is given by:
P(M=1) = (1/2)^k
P(M=2) = 1 - (1/2)^k

To find E[M], we can use the formula for expected value:
E[M] = Σ m · P(M=m)

E[M] = (1 * (1/2)^k) + (2 * (1 - (1/2)^k) )
E[M] = (1/2)^k + 2 - 2(1/2)^k
E[M] = 2 - (1/2)^k

Now, to find the variance of M, Var(M), we can use the formula:

Var(M) = E[M^2] - (E[M])^2

E[M^2] = (1^2 * (1/2)^k) + (2^2 * (1 - (1/2)^k) )
= (1/2)^k + 4 - 4(1/2)^k
= 4 - 3(1/2)^k

Var(M) = E[M^2] - (E[M])^2
= (4 - 3(1/2)^k) - (2 - (1/2)^k)^2
= 4 - 3(1/2)^k - (4 - 4(1/2)^k + (1/2)^(2k))
= (1/2)^k - (1/2)^(2k)
= (1/2)^k (1 - (1/2)^k)

Therefore, for n=2, E[M] = 2 - (1/2)^k and Var(M) = (1/2)^k (1 - (1/2)^k), where k is the number of independent random variables Xi. Note that Var(M) → 0 as k → ∞, which makes sense: with many samples, M equals 2 with probability approaching 1.
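As a numerical cross-check, the two-point distribution of M gives both moments directly; the sketch below compares them against the closed forms E[M] = 2 − (1/2)^k and Var(M) = (1/2)^k(1 − (1/2)^k) for an arbitrary illustrative k:

```python
def moments_n2(k):
    """E[M] and Var(M) for n = 2, from P(M=1) = (1/2)**k and P(M=2) = 1 - (1/2)**k."""
    p1 = 0.5 ** k
    p2 = 1.0 - p1
    mean = 1 * p1 + 2 * p2
    var = (1 - mean) ** 2 * p1 + (2 - mean) ** 2 * p2
    return mean, var

k = 3                                          # illustrative choice
mean, var = moments_n2(k)
assert abs(mean - (2 - 0.5 ** k)) < 1e-12      # matches E[M] = 2 - (1/2)^k
assert abs(var - 0.5 ** k * (1 - 0.5 ** k)) < 1e-12
```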

As k (the number of samples) becomes very large, what is E[M] in terms of n?

As k→∞, E[M]→

As k (the number of samples) becomes very large, we can analyze the behavior of E[M] in terms of n.

When k→∞, the probability that every one of the k variables misses the top value n goes to 0: P(M ≤ n−1) = ((n−1)/n)^k → 0, since (n−1)/n < 1.

Equivalently, P(M = n) = 1 − ((n−1)/n)^k approaches 1 as k grows: with enough samples, at least one of them is almost certain to hit the maximum possible value n.

In the case of n=2, this matches what we previously found: P(M=2) = 1 − (1/2)^k → 1 as k→∞.

Since M is bounded above by n and takes the value n with probability approaching 1, E[M] approaches n.

Therefore, as k→∞, E[M]→ n.
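The convergence can also be seen numerically via the tail-sum identity E[M] = Σ_{m=0}^{n−1} P(M > m) = n − Σ_{m=1}^{n−1} (m/n)^k (n=10 below is an arbitrary illustrative value):

```python
def expected_max(n, k):
    """E[M] for M = max of k iid uniform{1..n}, via E[M] = sum over m >= 0 of P(M > m)."""
    return sum(1 - (m / n) ** k for m in range(n))

# E[M] rises from 5.5 (a single uniform{1..10} draw) toward n = 10 as k grows
values = [expected_max(10, k) for k in (1, 5, 25, 125)]
```

Each term 1 − (m/n)^k increases with k, so the sequence of expectations is increasing in k and is bounded above by n.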

To find the probability that M≤m, as a function of m, for m∈{1,2,…,n}, we need to calculate the cumulative distribution function (CDF) of the maximum of k independent and identically distributed discrete uniform random variables.

Let's break down the problem into several steps:

Step 1: Calculate the probability that a single random variable Xi is less than or equal to m.
For a discrete uniform distribution from 1 to n, each value has equal probability, so the probability of Xi≤m is (number of values less than or equal to m) / (total number of values).
In this case, since we have a uniform distribution from 1 to n, the number of values less than or equal to m is simply m. The total number of values is n. Therefore, the probability that Xi≤m is m/n.

Step 2: Calculate the probability that the maximum of k random variables, M, is less than or equal to m.
Since the random variables Xi are independent and identically distributed, the events {X1≤m}, …, {Xk≤m} are independent, so the probability that M≤m (i.e., that all k of these events occur) is (probability that Xi≤m) raised to the power of k.
So, P(M≤m) = (m/n)^k

Step 3: Create a function for the probability, P(M≤m), as a function of m.
Now that we have the probability expression for M≤m, we can create a function that represents the probability as a function of m.
Let's denote P(M≤m) as f(m):
f(m) = (m/n)^k

This function f(m) gives the probability that the maximum value of k independent and identically distributed discrete uniform random variables is less than or equal to m, for any given value of m from 1 to n.
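A direct implementation of this function is straightforward (the name f follows the text; exact rational arithmetic avoids floating-point error):

```python
from fractions import Fraction

def f(m, n, k):
    """P(M <= m) = (m/n)**k for m in {1, ..., n}, computed exactly."""
    if not 1 <= m <= n:
        raise ValueError("m must lie in {1, ..., n}")
    return Fraction(m, n) ** k
```

As a sanity check, f(n, n, k) is always 1, as a CDF must be at its upper endpoint.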

Note: This solution assumes that the Xi random variables are truly independent and identically distributed with a discrete uniform distribution from 1 to n.