1) Let X and Y be independent continuous random variables that are uniformly distributed on (0,1) . Let H=(X+2)Y . Find the probability P(lnH≥z) where z is a given number that satisfies e^z<2 . Your answer should be a function of z .

Hint: Condition on X .

2) Let X be a standard normal random variable, and let FX(x) be its CDF. Consider the random variable Z=FX(X) . Find the PDF fZ(z) of Z . Note that fZ(z) takes values in (0,1) .

1) Well, ln H can look a bit tricky at first glance, but conditioning tames it. We can handle it like a pro!

Let's condition on X. Conditioning on X = x fixes the first factor, so for a given value x we can treat H = (x + 2)Y as a known constant (x + 2) times the random variable Y.

Now, we need to find the probability that ln H is greater than or equal to z. But first, a sanity check: the natural logarithm is a big fan of positive numbers, so we need H > 0, and (X + 2) could in principle be negative if X were less than -2.

But remember, X is a uniform random variable between 0 and 1, so X + 2 lies between 2 and 3, and Y lies in (0,1). That makes H = (X + 2)Y strictly positive, so ln H is always well defined. Phew, crisis averted!

So, how do we deal with the logarithm? The exponential function is strictly increasing, so the event ln H ≥ z is exactly the event H ≥ e^z. That means P(ln H ≥ z) = P(H ≥ e^z).

Now, H = (X + 2)Y, and we know X and Y are independent uniform random variables between 0 and 1. That's like having two best friends who are always up for a good time!

To find P(H ≥ e^z), condition on X = x. The event (x + 2)Y ≥ e^z is the same as Y ≥ e^z / (x + 2), and two observations make this easy:

Observation 1: since 0 < x < 1, the factor x + 2 lies between 2 and 3, and we are told e^z < 2. So the threshold e^z / (x + 2) always lies strictly between 0 and 1.

Observation 2: Y is uniform on (0,1), so for any threshold t in (0,1), P(Y ≥ t) = 1 - t.

Putting these together:

P(H ≥ e^z | X = x) = 1 - e^z / (x + 2)

Finally, we average this conditional probability over the uniform density of X (the total probability theorem):

P(ln H ≥ z) = ∫[0,1] [1 - e^z / (x + 2)] dx = 1 - e^z (ln 3 - ln 2) = 1 - e^z ln(3/2)
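To sanity-check the conditioning argument, here is a minimal Monte Carlo sketch in Python (standard library only; the function name, sample size, and seed are illustrative choices, not part of the problem). It compares the empirical frequency of the event ln H ≥ z against the closed form 1 - e^z ln(3/2) that the conditional-probability calculation yields:

```python
import math
import random

def simulate_prob(z, n=200_000, seed=0):
    """Monte Carlo estimate of P(ln H >= z) for H = (X + 2) * Y."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.random()          # X ~ Uniform(0, 1)
        y = 1.0 - rng.random()    # Y ~ Uniform(0, 1), kept away from exactly 0
        if math.log((x + 2.0) * y) >= z:
            hits += 1
    return hits / n

# Compare with the closed form 1 - e^z * ln(3/2), valid while e^z < 2.
z = 0.0
print(simulate_prob(z), 1.0 - math.exp(z) * math.log(1.5))
```

The two numbers should agree to about two decimal places at this sample size.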

2) Ah, the Z factor! It’s like the secret sauce that adds a little something extra to our standard normal random variable, X.

Let's start by finding the cumulative distribution function (CDF) of Z, denoted as FZ(z). The CDF tells us the probability that Z is less than or equal to a certain value.

Now, Z = FX(X), where X is our standard normal random variable. The CDF of X is defined as FX(x) = P(X ≤ x).

Now, Z = FX(X) means we feed the random outcome of X back into its own CDF: for each realization x of X, Z takes the value FX(x), a number in (0,1). Don't worry, this isn't a paradox like meeting your own clone; FX is just a fixed function being applied to a random input!

Since X follows a standard normal distribution, its CDF is continuous and strictly increasing, so it has an inverse, and for 0 < z < 1 we can rewrite the CDF of Z as FZ(z) = P(FX(X) ≤ z) = P(X ≤ FX^-1(z)). Here, FX^-1(z) is the inverse function of the CDF of X.

Now, the PDF of Z, denoted as fZ(z), is the derivative of the CDF of Z with respect to z. That's calculus lingo for finding the slope of a fancy curve!

So, fZ(z) = d/dz FZ(z). By using the chain rule, we can express this as fZ(z) = fX(FX^-1(z)) * (d/dz FX^-1(z)).

But hey, fX(x) is simply the PDF of our standard normal random variable X. It's like a well-behaved friend who always follows the rules, so we know its PDF is φ(x), where φ denotes the standard normal PDF.

Finally, we need to find the derivative of the inverse CDF, d/dz FX^-1(z). This looks tricky, just like trying to tie a knot in a slippery rope, but the inverse function theorem hands it to us: d/dz FX^-1(z) = 1 / fX(FX^-1(z)).

Substituting back, the two factors cancel:

fZ(z) = fX(FX^-1(z)) * [1 / fX(FX^-1(z))] = 1 for 0 < z < 1.

And what a marvelous PDF it is: perfectly flat. Z is uniformly distributed on (0,1).
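To make the cancellation concrete, here is a small numeric check in Python (standard library only; the bisection-based inverse and the step size h are my own illustrative choices). It multiplies fX(FX^-1(z)) by a finite-difference estimate of d/dz FX^-1(z) and confirms the product is approximately 1:

```python
import math

def norm_cdf(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x):
    """Standard normal density phi(x)."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_ppf(z, lo=-10.0, hi=10.0):
    """Inverse CDF FX^-1(z) by bisection -- crude but fine for a check."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if norm_cdf(mid) < z:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# fZ(z) = fX(FX^-1(z)) * d/dz FX^-1(z) should equal 1 for every z in (0,1).
h = 1e-6
for z in (0.1, 0.5, 0.9):
    deriv = (norm_ppf(z + h) - norm_ppf(z - h)) / (2.0 * h)
    print(z, norm_pdf(norm_ppf(z)) * deriv)  # all three products are ~1
```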

1) To find the probability P(lnH≥z), we need to consider the range of values that lnH can take.

First, let's condition on X. Since X is uniformly distributed on (0,1), its probability density function (PDF) is 1 for 0 < x < 1, and 0 otherwise.

Next, let's consider Y. Y is also uniformly distributed on (0,1), independent of X. Therefore, its PDF is also 1 for 0 < y < 1, and 0 otherwise.

Now, let's find the range of lnH. We have H = (X+2)Y. Since both X and Y are positive, H will also be positive. Therefore, we can take the natural log of H without any issues.

lnH = ln((X+2)Y)
= ln(X+2) + lnY

Since lnH is the sum of two independent terms (ln(X+2) and lnY), one could in principle find its PDF by convolution. But the densities of ln(X+2) and lnY are not uniform (each is obtained by transforming a uniform variable through a logarithm), so that route is messier than it looks. It is cleaner to work with the event directly, conditioning on X as the hint suggests.

Because the exponential function is increasing, P(lnH ≥ z) = P(H ≥ e^z). Conditioning on X = x:

P(H ≥ e^z | X = x) = P(Y ≥ e^z / (x + 2))

Since 0 < x < 1, we have 2 < x + 2 < 3, and we are told e^z < 2, so the threshold e^z / (x + 2) lies strictly between 0 and 1. For Y uniform on (0,1), P(Y ≥ t) = 1 - t whenever 0 < t < 1, so

P(H ≥ e^z | X = x) = 1 - e^z / (x + 2)

The limits of integration for averaging over X are determined by its range, (0,1), on which its density is 1. By the total probability theorem:

P(lnH ≥ z) = ∫[0,1] [1 - e^z / (x + 2)] dx
= 1 - e^z [ln(x + 2)] evaluated from 0 to 1
= 1 - e^z (ln 3 - ln 2)

So, the probability P(lnH ≥ z) for e^z < 2 is given by 1 - e^z ln(3/2).
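A quick numerical cross-check in Python (standard library only; helper names and the grid size are illustrative): the closed form 1 - e^z ln(3/2) should agree with a direct midpoint-rule evaluation of the conditional-probability integral ∫[0,1] [1 - e^z/(x+2)] dx:

```python
import math

def prob_closed_form(z):
    """P(ln H >= z) = 1 - e^z * ln(3/2), for e^z < 2."""
    return 1.0 - math.exp(z) * math.log(1.5)

def prob_midpoint(z, n=10_000):
    """Midpoint-rule approximation of the integral of 1 - e^z/(x+2) over (0,1)."""
    total = 0.0
    for i in range(n):
        x = (i + 0.5) / n
        total += 1.0 - math.exp(z) / (x + 2.0)
    return total / n

for z in (-2.0, -0.5, 0.5):
    print(z, prob_closed_form(z), prob_midpoint(z))  # the two columns agree
```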

2) To find the PDF of Z, we first need to find its cumulative distribution function (CDF).

The CDF of Z, denoted as FZ(z), is defined as the probability that Z is less than or equal to z:

FZ(z) = P(Z ≤ z)

Since Z = FX(X), we can rewrite the CDF as:

FZ(z) = P(FX(X) ≤ z)

Now, because FX is continuous and strictly increasing, it has a well-defined inverse, and for 0 < z < 1 the event FX(X) ≤ z is the same as the event X ≤ FX^(-1)(z):

FZ(z) = P(FX(X) ≤ z)
= P(X ≤ FX^(-1)(z))

Since X is a standard normal random variable, FX(x) represents the standard normal CDF, and FX^(-1)(z) represents the inverse of the standard normal CDF.

Therefore, we can rewrite the equation as:

FZ(z) = P(X ≤ Φ^(-1)(z))

where Φ(x) is the standard normal CDF.

The probability P(X ≤ Φ^(-1)(z)) is the same as the CDF of X evaluated at Φ^(-1)(z):

FZ(z) = Φ(Φ^(-1)(z))

Since Φ and Φ^(-1) are inverse functions, the composition results in the input value:

FZ(z) = z

Therefore, the CDF of Z is given by FZ(z) = z.

To find the PDF of Z, we differentiate the CDF with respect to z:

fZ(z) = d/dz FZ(z)
= d/dz (z)
= 1

Thus, the PDF of Z is fZ(z) = 1 for 0 < z < 1.
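Since Z is uniform on (0,1), its mean and variance must be 1/2 and 1/12. A short simulation in Python (standard library only; the sample size and seed are arbitrary choices) checks this by pushing standard normal draws through the normal CDF, computed via math.erf:

```python
import math
import random

def normal_cdf(x):
    """Standard normal CDF via math.erf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

rng = random.Random(42)
n = 100_000
zs = [normal_cdf(rng.gauss(0.0, 1.0)) for _ in range(n)]

# A Uniform(0,1) variable has mean 1/2 and variance 1/12.
mean = sum(zs) / n
var = sum((v - mean) ** 2 for v in zs) / n
print(mean, var)
```

The printed moments land close to 0.5 and 0.0833, consistent with fZ(z) = 1 on (0,1).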

1) To find the probability P(lnH ≥ z), we can condition on X. By conditioning on X, we can treat it as a constant and consider Y as the only random variable.

Given X = x, we have H = (x + 2)Y. Taking the natural logarithm of H, we get lnH = ln((x + 2)Y) = ln(x + 2) + lnY.

Since X and Y are independent, conditioning on X = x leaves the distribution of Y unchanged, so the conditional probability is:

P(lnH ≥ z | X = x) = P(ln(x + 2) + lnY ≥ z)

To simplify the expression, we can solve for Y in terms of lnH:

ln(x + 2) + lnY ≥ z
lnY ≥ z - ln(x + 2)
Y ≥ e^(z - ln(x + 2))

Note that e^(z - ln(x + 2)) = e^z / (x + 2), and since 2 < x + 2 < 3 while e^z < 2, this threshold always lies strictly between 0 and 1.

Since Y is uniformly distributed on (0,1), the probability that Y is greater than or equal to a threshold t in (0,1) is 1 - t. Hence P(lnH ≥ z | X = x) = 1 - e^(z - ln(x + 2)).

Now, we need to weight this conditional probability by the distribution of X. Since X is continuous, P(X = x) = 0 for every individual x; what we use instead is its probability density function (PDF). Since X is uniformly distributed on (0,1), fX(x) = 1 for 0 < x < 1, and 0 otherwise.

To find the desired probability P(lnH ≥ z), we need to integrate the conditional probability over the entire range of X:

P(lnH ≥ z) = ∫[0,1] [1 - e^(z - ln(x + 2))] * 1 dx
= ∫[0,1] [1 - e^z / (x + 2)] dx
= 1 - e^z (ln 3 - ln 2)
= 1 - e^z ln(3/2)

2) To find the probability density function (PDF) fZ(z) of Z=FX(X), we need to find the derivative of the cumulative distribution function (CDF) of Z.

The CDF of Z can be expressed as:

FZ(z) = P(Z ≤ z) = P(FX(X) ≤ z)

Since X is a standard normal random variable, its CDF is given by:

FX(x) = P(X ≤ x)

Hence, we have:

FZ(z) = P(FX(X) ≤ z)

Because the standard normal CDF FX is continuous and strictly increasing, it has an inverse FX^(-1), and for 0 < z < 1 the event FX(X) ≤ z is equivalent to the event X ≤ FX^(-1)(z):

FZ(z) = P(X ≤ FX^(-1)(z)) = FX(FX^(-1)(z)) = z

So the CDF of Z is simply FZ(z) = z for 0 < z < 1 (with FZ(z) = 0 for z ≤ 0 and FZ(z) = 1 for z ≥ 1).

To find the PDF of Z, we differentiate the CDF:

fZ(z) = d/dz (FZ(z)) = d/dz (z) = 1 for 0 < z < 1, and 0 otherwise.

Hence, Z = FX(X) is uniformly distributed on (0,1). This is the probability integral transform: feeding any continuous random variable with a strictly increasing CDF through that CDF produces a Uniform(0,1) random variable, which is why the answer does not involve the normal density at all.
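The converse of this fact is the basis of inverse transform sampling: applying F^(-1) to Uniform(0,1) draws produces samples from F. A minimal Python sketch (standard library only; the exponential target and its rate lam = 2 are my own illustrative choices):

```python
import math
import random

rng = random.Random(7)
lam = 2.0      # rate of the target Exponential(lam) -- an arbitrary example
n = 100_000

# If U ~ Uniform(0,1), then F^-1(U) ~ F.  For Exponential(lam),
# F(t) = 1 - e^(-lam*t), so F^-1(u) = -ln(1 - u) / lam.
samples = [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

print(sum(samples) / n)  # close to the exponential mean 1/lam = 0.5
```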