Let

\psi : \mathbb{R} \times (0, \infty) \to \mathbb{R}^2, \qquad (\mu, \sigma) \mapsto (m_1(\mu, \sigma), m_2(\mu, \sigma))
denote the moments map considered in the previous problem, where m_ k(\mu , \sigma ) denotes the k-th moment of the distribution N(\mu , \sigma ^2 ).

Is \psi one-to-one on the domain \mathbb {R} \times (0, \infty )? (Equivalently, given the outputs m_1 and m_2, can we use them to uniquely reconstruct \mu \in \mathbb {R} and \sigma > 0?)

Yes, the map \psi is one-to-one on the domain \mathbb {R} \times (0, \infty ).

To see why, suppose that we have two pairs (\mu_1 , \sigma_1 ) and (\mu_2 , \sigma_2 ) in the domain of \psi such that \psi(\mu_1 , \sigma_1 ) = \psi(\mu_2 , \sigma_2 ). Then we must have both m_1(\mu_1 , \sigma_1 ) = m_1(\mu_2 , \sigma_2 ) and m_2(\mu_1 , \sigma_1 ) = m_2(\mu_2 , \sigma_2 ).

Using the fact that the k-th moment of a normal distribution N(\mu, \sigma^2) is \mu^k + \binom{k}{2}\mu^{k-2}\sigma^2 + ... (with the ellipsis denoting terms involving higher powers of \sigma), we can write:

m_1(\mu , \sigma ) = \mu
m_2(\mu , \sigma ) = \mu ^2 + \sigma ^2

Thus, from the conditions m_1(\mu_1 , \sigma_1 ) = m_1(\mu_2 , \sigma_2 ) and m_2(\mu_1 , \sigma_1 ) = m_2(\mu_2 , \sigma_2 ), we have:

\mu_1 = \mu_2
\mu_1 ^2 + \sigma_1 ^2 = \mu_2 ^2 + \sigma_2 ^2

The first equation gives \mu_1 = \mu_2 directly. Substituting this into the second equation, we obtain:

\sigma_1 ^2 = \sigma_2 ^2

Since \sigma_1 and \sigma_2 are both positive, this gives \sigma_1 = \sigma_2. Thus, we have shown that if \psi(\mu_1, \sigma_1) = \psi(\mu_2, \sigma_2), then \mu_1 = \mu_2 and \sigma_1 = \sigma_2, which implies that the map \psi is one-to-one on the domain \mathbb{R} \times (0, \infty). This means that given the outputs m_1 and m_2, we can use them to uniquely reconstruct \mu \in \mathbb{R} and \sigma > 0.
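As a quick numerical sanity check (separate from the proof), the sketch below evaluates the exact moments (m_1, m_2) = (\mu, \mu^2 + \sigma^2) at several parameter pairs and confirms that distinct pairs map to distinct outputs; the function name psi is just illustrative.

```python
# Sanity check: the moments map (mu, sigma) -> (m1, m2) sends
# distinct parameter pairs to distinct moment pairs.

def psi(mu, sigma):
    """Exact first two moments of N(mu, sigma^2)."""
    return (mu, mu**2 + sigma**2)

pairs = [(0.0, 1.0), (0.0, 2.0), (1.5, 1.0), (-1.5, 1.0), (2.0, 0.5)]
images = [psi(mu, sigma) for mu, sigma in pairs]

# Distinct inputs should produce distinct outputs.
assert len(set(images)) == len(pairs)
```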

If \psi is one-to-one on the given domain and \psi (\mu , \sigma ) = (m_1, m_2), what is \mu expressed in terms of m_1 and m_2? (If \psi is not one-to-one, enter 0.)

Since \psi is one-to-one on the given domain, the pair (m_1, m_2) determines (\mu, \sigma) uniquely. In other words, we want to find \mu such that \psi(\mu, \sigma) = (m_1, m_2) for some \sigma > 0. First, solve for \sigma using the second component of \psi:

m_2 = \mu^2 + \sigma^2

Rearranging gives:

\sigma^2 = m_2 - \mu^2

Taking the positive square root (since \sigma > 0), we get:

\sigma = \sqrt{m_2 - \mu^2}

The first component of \psi does not involve \sigma, so no substitution is needed; it reads:

m_1 = \mu

Therefore, the solution for \mu in terms of m_1 and m_2 is simply:

\mu = m_1

Note that this is consistent with what we know about the moments of a normal distribution; the first moment is simply the mean of the distribution.

If \psi is one-to-one on the given domain and \psi (\mu , \sigma ) = (m_1, m_2), what is \sigma expressed in terms of m_1 and m_2? (If \psi is not one-to-one, enter 0.)

Using the same reasoning as in the previous question, we can solve for \sigma in terms of m_1 and m_2 as follows:

m_2 = \mu^2 + \sigma^2

Substituting \mu = m_1, we get:

m_2 = m_1^2 + \sigma^2

Rearranging gives:

\sigma^2 = m_2 - m_1^2

Taking the positive square root (since \sigma > 0), we get:

\sigma = \sqrt{m_2 - m_1^2}

Therefore, the solution for \sigma in terms of m_1 and m_2 is:

\sigma = \sqrt{m_2 - m_1^2}
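Putting the two answers together, here is a short sketch (function names are illustrative) that reconstructs (\mu, \sigma) from (m_1, m_2) and round-trips through the moments map:

```python
import math

def psi(mu, sigma):
    """Exact first two moments of N(mu, sigma^2)."""
    return (mu, mu**2 + sigma**2)

def psi_inverse(m1, m2):
    """Recover (mu, sigma) from the first two moments; requires m2 > m1**2."""
    mu = m1
    sigma = math.sqrt(m2 - m1**2)  # positive root, since sigma > 0
    return (mu, sigma)

mu, sigma = 1.5, 0.75
m1, m2 = psi(mu, sigma)
mu_rec, sigma_rec = psi_inverse(m1, m2)
assert abs(mu_rec - mu) < 1e-12 and abs(sigma_rec - sigma) < 1e-12
```

The requirement m_2 > m_1^2 corresponds exactly to \sigma^2 = m_2 - m_1^2 > 0 on the domain \mathbb{R} \times (0, \infty).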

To determine if the moments map \psi is one-to-one on the domain \mathbb {R} \times (0, \infty ), we need to check if different inputs yield different outputs or if there exist multiple inputs that produce the same output. In other words, we need to check if given the outputs m_1 and m_2, we can uniquely reconstruct \mu \in \mathbb {R} and \sigma > 0.

To do this, we need to examine the explicit formulas for the moments m_1 and m_2 in terms of \mu and \sigma.

The first moment m_1(\mu, \sigma) of a normal distribution N(\mu, \sigma^2) is equal to \mu. This means that given m_1, we can uniquely determine \mu.

The second moment m_2(\mu, \sigma) of a normal distribution N(\mu, \sigma^2) is equal to \mu^2 + \sigma^2. Taken on its own, this equation does not determine (\mu, \sigma): other pairs (\mu', \sigma') can satisfy \mu'^2 + \sigma'^2 = \mu^2 + \sigma^2. However, \psi outputs both moments. Once \mu is fixed by m_1, the equation \sigma^2 = m_2 - m_1^2, together with the constraint \sigma > 0, determines \sigma uniquely.

Therefore, the moments map \psi is one-to-one on the domain \mathbb{R} \times (0, \infty). Given the outputs m_1 and m_2, we can use them to uniquely reconstruct \mu \in \mathbb{R} and \sigma > 0.

To determine if the moments map Ψ is one-to-one, we need to check that for any two points (μ₁, σ₁) and (μ₂, σ₂) in the domain ℝ × (0, ∞), equal outputs (m₁(μ₁, σ₁), m₂(μ₁, σ₁)) = (m₁(μ₂, σ₂), m₂(μ₂, σ₂)) force μ₁ = μ₂ and σ₁ = σ₂.

In other words, given the moments m₁ and m₂, we want to know if we can uniquely determine μ and σ.

To answer this question, we can analyze the equations for the moments m₁ and m₂ in terms of μ and σ. By definition, the k-th moment of a normal distribution N(μ, σ²) is given by:

m_k(μ, σ) = E[X^k] = ∫ x^k * f(x; μ, σ) dx

where f(x; μ, σ) is the probability density function of the normal distribution.

For the first moment m₁, we have:

m₁(μ, σ) = E[X]

This is the mean of the normal distribution, which is μ. Therefore, m₁(μ, σ) = μ.

For the second moment m₂, we have:

m₂(μ, σ) = E[X²] = ∫ x² * f(x; μ, σ) dx

Since E[X²] = Var(X) + (E[X])² = σ² + μ², the integral evaluates to:

m₂(μ, σ) = μ² + σ²
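The identity m₂ = μ² + σ² can be checked numerically. The sketch below approximates E[X²] with a midpoint Riemann sum over a wide truncation of the real line (the truncation interval, step count, and parameter values are arbitrary choices for the example):

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def second_moment(mu, sigma, lo=-40.0, hi=40.0, n=160_000):
    """Midpoint-rule approximation of E[X^2] for X ~ N(mu, sigma^2)."""
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        total += x * x * normal_pdf(x, mu, sigma) * dx
    return total

mu, sigma = 1.0, 2.0
assert abs(second_moment(mu, sigma) - (mu**2 + sigma**2)) < 1e-4
```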

Now, let's consider the reverse process. Given m₁ and m₂, can we reconstruct μ and σ?

From m₁(μ, σ) = μ, we can see that given m₁, we can uniquely determine μ.

From m₂(μ, σ) = μ² + σ², we have a relationship between m₂, μ, and σ. On its own this equation has infinitely many solutions (μ, σ), but combined with μ = m₁ it becomes σ² = m₂ − m₁², which, together with σ > 0, determines σ = √(m₂ − m₁²) uniquely.

Therefore, we conclude that the moments map Ψ is one-to-one. Given the outputs m₁ and m₂, we can uniquely determine both μ and σ.