Consider a Gaussian linear model \(Y = aX + \epsilon\) from a Bayesian viewpoint, with prior \(\pi(a) = 1\) for all \(a \in \mathbb{R}\). Determine whether each of the following statements is true or false.
\(\pi(a)\) is a uniform prior.
True
False
\(\pi(a)\) is a Jeffreys prior when we consider the likelihood \(L(Y=y \mid A=a, X=x)\) (where we assume \(x\) is known).
True
False
Consider a linear regression model \(\mathbf{Y} = \mathbb{X}\boldsymbol{\beta} + \sigma\boldsymbol{\varepsilon}\) where
\(\boldsymbol{\varepsilon} \in \mathbb{R}^n\) is a random vector with \(\mathbb{E}[\boldsymbol{\varepsilon}] = \mathbf{0}\), \(\mathbb{E}[\boldsymbol{\varepsilon}\boldsymbol{\varepsilon}^T] = I_n\), and no further assumptions are made about \(\boldsymbol{\varepsilon}\);
\(\mathbb{X}\) is an \(n \times p\) deterministic matrix, and \(\mathbb{X}^T\mathbb{X}\) is invertible;
\(\sigma > 0\) is an unknown constant.
Let \(\hat{\boldsymbol{\beta}}\) denote the least squares estimator of \(\boldsymbol{\beta}\) in this context. Determine whether each of the following statements is true or false.
\(\hat{\boldsymbol{\beta}}\) is the maximum likelihood estimator for \(\boldsymbol{\beta}\).
True
False
With the model written as \(\mathbf{Y} = \mathbb{X}\boldsymbol{\beta} + \sigma\boldsymbol{\varepsilon}\), \(\hat{\boldsymbol{\beta}}\) has dimension \(1 \times p\) (i.e. is a row vector of length \(p\)).
True
False
\(\hat{\boldsymbol{\beta}}\) has a Gaussian distribution (even for small \(n\)).
True
False
Under the same setup and assumptions, \(\mathbb{X}\hat{\boldsymbol{\beta}}\) is...
(Check all that apply.)
Equal to \((\mathbb{X}^T\mathbb{X})^{-1}\mathbb{X}^T\mathbf{Y}\)
An unbiased estimator of \(\mathbb{X}\boldsymbol{\beta}\)
A vector in \(\mathbb{R}^p\)
False (it is a vector in \(\mathbb{R}^n\), not \(\mathbb{R}^p\))
The answers are as follows:
1. \(\pi(a)\) is a uniform prior.
- False (\(\pi(a) = 1\) cannot be normalized over \(\mathbb{R}\), so it is an improper flat prior, not a uniform probability distribution)
2. \(\pi(a)\) is a Jeffreys prior when we consider the likelihood \(L(Y=y \mid A=a, X=x)\) (where we assume \(x\) is known).
- True (the Fisher information \(I(a) = x^2/\sigma^2\) does not depend on \(a\), so the Jeffreys prior \(\pi_J(a) \propto \sqrt{I(a)}\) is constant)
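The Jeffreys-prior claim can be checked directly. A short derivation, assuming (as the Gaussian model suggests) \(\epsilon \sim \mathcal{N}(0, \sigma^2)\) with \(\sigma\) known:

```latex
\begin{align*}
\ell(a) &= \log L(y \mid a, x) = -\frac{(y - ax)^2}{2\sigma^2} + \text{const}, \\
I(a) &= -\mathbb{E}\!\left[\frac{\partial^2 \ell}{\partial a^2}\right] = \frac{x^2}{\sigma^2}, \\
\pi_J(a) &\propto \sqrt{I(a)} = \frac{|x|}{\sigma} \propto 1.
\end{align*}
```

Since \(I(a)\) is constant in \(a\), the Jeffreys prior is flat, matching \(\pi(a) = 1\) up to a normalizing constant.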
3. \(\hat{\boldsymbol{\beta}}\) is the maximum likelihood estimator for \(\boldsymbol{\beta}\).
- False (no distribution is specified for \(\boldsymbol{\varepsilon}\), so the likelihood, and hence the MLE, is not even defined; least squares coincides with maximum likelihood only under Gaussian noise)
4. With the model written as \(\mathbf{Y} = \mathbb{X}\boldsymbol{\beta} + \sigma\boldsymbol{\varepsilon}\), \(\hat{\boldsymbol{\beta}}\) has dimension \(1 \times p\) (i.e. is a row vector of length \(p\)).
- False (\(\hat{\boldsymbol{\beta}} = (\mathbb{X}^T\mathbb{X})^{-1}\mathbb{X}^T\mathbf{Y}\) is a \(p \times 1\) column vector)
5. \(\hat{\boldsymbol{\beta}}\) has a Gaussian distribution (even for small \(n\)).
- False (\(\boldsymbol{\varepsilon}\) is not assumed Gaussian, so \(\hat{\boldsymbol{\beta}}\) need not be Gaussian for finite \(n\); it is only asymptotically normal)
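A minimal numpy sketch of the least squares estimator in this setup (illustrative dimensions and coefficients; the noise is deliberately non-Gaussian but has mean 0 and unit variance):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3

# Deterministic-style design matrix; X^T X is invertible with probability 1.
X = rng.standard_normal((n, p))
beta = np.array([1.0, -2.0, 0.5])
sigma = 0.3

# Uniform on [-sqrt(3), sqrt(3)] has mean 0 and variance 1, but is NOT Gaussian.
eps = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=n)
Y = X @ beta + sigma * eps

# Least squares estimator: beta_hat = (X^T X)^{-1} X^T Y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

print(beta_hat.shape)  # (3,): a vector of length p, not a 1 x p row vector
```

Note that `beta_hat` is a length-\(p\) (column) vector, and nothing about the model forces its finite-sample distribution to be Gaussian.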
6. Under the same setup and assumptions, \(\mathbb{X}\hat{\boldsymbol{\beta}}\) is... (Check all that apply.)
- Equal to \((\mathbb{X}^T\mathbb{X})^{-1}\mathbb{X}^T\mathbf{Y}\): No; that expression equals \(\hat{\boldsymbol{\beta}}\) itself, while \(\mathbb{X}\hat{\boldsymbol{\beta}} = \mathbb{X}(\mathbb{X}^T\mathbb{X})^{-1}\mathbb{X}^T\mathbf{Y}\).
- An unbiased estimator of \(\mathbb{X}\boldsymbol{\beta}\): Yes, since \(\mathbb{E}[\hat{\boldsymbol{\beta}}] = \boldsymbol{\beta}\) implies \(\mathbb{E}[\mathbb{X}\hat{\boldsymbol{\beta}}] = \mathbb{X}\boldsymbol{\beta}\).
- A vector in \(\mathbb{R}^p\): No; \(\mathbb{X}\hat{\boldsymbol{\beta}} \in \mathbb{R}^n\).
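The properties of \(\mathbb{X}\hat{\boldsymbol{\beta}}\) can also be checked by simulation. A sketch with illustrative numbers, using the hat matrix \(H = \mathbb{X}(\mathbb{X}^T\mathbb{X})^{-1}\mathbb{X}^T\) so that \(\mathbb{X}\hat{\boldsymbol{\beta}} = H\mathbf{Y}\):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 40, 4
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
sigma = 0.5

# Hat matrix: H @ Y equals X @ beta_hat, the projection of Y onto col(X).
H = X @ np.linalg.solve(X.T @ X, X.T)

fitted_mean = np.zeros(n)
n_sims = 2000
for _ in range(n_sims):
    # Non-Gaussian noise with mean 0 and unit variance.
    eps = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=n)
    Y = X @ beta + sigma * eps
    fitted = H @ Y              # X @ beta_hat, a vector in R^n
    fitted_mean += fitted / n_sims

print(fitted.shape)  # (40,): X @ beta_hat lives in R^n, not R^p
print(np.max(np.abs(fitted_mean - X @ beta)))  # near 0: unbiased for X @ beta
```

The empirical mean of the fitted values concentrates around \(\mathbb{X}\boldsymbol{\beta}\), illustrating unbiasedness, and every fitted vector has length \(n\), not \(p\).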