Posted by **Anonymous** on Friday, November 4, 2011 at 2:43pm.

A single sample is being used to construct a 90% confidence interval for the population mean. What would be the difference between an interval for a sample of n = 25 and the interval for a sample of n = 100? Assume that all other factors are held constant.

Options:

- With n = 25, the standard error would be larger and the interval would be wider.
- With n = 25, the standard error would be smaller and the interval would be narrower.
- With n = 25, the standard error would be smaller and the interval would be wider.
- With n = 25, the standard error would be larger and the interval would be narrower.
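The key relationship here is that the standard error of the mean is σ/√n, so a smaller sample gives a larger standard error and therefore a wider interval. A minimal sketch of that arithmetic (the population standard deviation σ = 10 is an assumed illustrative value, and 1.645 is the usual z critical value for 90% confidence):

```python
import math

def ci_width(sigma, n, z=1.645):
    """Width of a 90% z-confidence interval: 2 * z * (sigma / sqrt(n))."""
    se = sigma / math.sqrt(n)  # standard error shrinks as n grows
    return 2 * z * se

sigma = 10.0                  # assumed population standard deviation
print(ci_width(sigma, 25))    # SE = 2.0 -> width 6.58
print(ci_width(sigma, 100))   # SE = 1.0 -> width 3.29
```

Quadrupling n from 25 to 100 halves the standard error (√100 / √25 = 2), so the n = 25 interval is twice as wide, consistent with the first option.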
