A single sample is being used to construct a 90% confidence interval for the population mean. What would be the difference between an interval for a sample of n = 25 and the interval for a sample of n = 100? Assume that all other factors are held constant.

Options:

With n = 25, the standard error would be larger and the interval would be wider.

With n = 25, the standard error would be smaller and the interval would be narrower.

With n = 25, the standard error would be smaller and the interval would be wider.

With n = 25, the standard error would be larger and the interval would be narrower.

SE_M = SD / √n

This should allow you to answer the question.

To answer this question, we need to understand how the sample size, n, affects the construction of confidence intervals.

A confidence interval is a range of values that is likely to contain the true population parameter. In this case, we are constructing a confidence interval for the population mean.

When constructing a confidence interval for the population mean, there are two factors that influence the width of the interval: the standard error and the critical value.

The standard error quantifies the variability of the sample mean, and it is calculated as the standard deviation of the sample divided by the square root of the sample size. So, as the sample size increases, the standard error decreases. This happens because larger samples tend to provide more precise estimates of the population mean.
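To make the effect of sample size concrete, here is a minimal sketch of the standard-error formula, assuming an illustrative standard deviation of 10 (the problem itself gives no numbers; the value is hypothetical):

```python
import math

def standard_error(sd, n):
    """Standard error of the mean: SE = SD / sqrt(n)."""
    return sd / math.sqrt(n)

# Illustrative SD of 10, held constant across both sample sizes.
sd = 10
se_25 = standard_error(sd, 25)    # 10 / 5  = 2.0
se_100 = standard_error(sd, 100)  # 10 / 10 = 1.0
```

Quadrupling the sample size from 25 to 100 halves the standard error, because n enters the formula under a square root.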

The critical value is determined by the desired level of confidence. A 90% confidence interval implies that 90% of all possible intervals would contain the true population mean if the process were repeated many times.
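As a quick check of that critical value, a two-sided 90% interval leaves 5% in each tail, so the critical value is the 95th percentile of the reference distribution. Under the normal approximation it can be computed with the standard library:

```python
from statistics import NormalDist

# Two-sided 90% confidence leaves 5% in each tail, so the critical
# value is the 95th percentile of the standard normal distribution.
z = NormalDist().inv_cdf(0.95)
print(round(z, 3))  # 1.645
```

Strictly speaking, when the population SD is estimated from the sample, a t critical value with n − 1 degrees of freedom is used instead; since t_24 exceeds t_99, this makes the n = 25 interval slightly wider still, reinforcing the same conclusion.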

Now, let's apply this understanding to the given options:

Option 1: With n = 25, the standard error would be larger and the interval would be wider.

This option is correct. Because the standard error equals SD/√n, the smaller sample (n = 25) has the smaller denominator, so its standard error is larger than it would be with n = 100. A larger standard error, multiplied by the critical value, produces a wider interval.

Option 2: With n = 25, the standard error would be smaller and the interval would be narrower.

This option is incorrect. It reverses the relationship: the smaller sample (n = 25) yields a larger standard error, not a smaller one, and therefore a wider interval, not a narrower one.

Option 3: With n = 25, the standard error would be smaller and the interval would be wider.

This option is incorrect on both counts. With n = 25 the standard error would be larger, not smaller; and even if the standard error were smaller, that would produce a narrower interval, not a wider one.

Option 4: With n = 25, the standard error would be larger and the interval would be narrower.

This option is incorrect. The standard error is indeed larger with n = 25, but a larger standard error widens the interval rather than narrowing it.

In conclusion, the correct answer is:

With n = 25, the standard error would be larger and the interval would be wider.
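As a numerical check, here is a sketch comparing the full widths of the two intervals, assuming an illustrative SD of 10 (the problem gives none) and the normal critical value 1.645 for 90% confidence:

```python
import math

Z_90 = 1.645  # normal critical value for a two-sided 90% interval

def interval_width(sd, n, z=Z_90):
    """Full width of a z-based confidence interval: 2 * z * SD / sqrt(n)."""
    return 2 * z * sd / math.sqrt(n)

# Illustrative standard deviation (hypothetical; not given in the problem).
sd = 10
width_25 = interval_width(sd, 25)    # 2 * 1.645 * 10 / 5  = 6.58
width_100 = interval_width(sd, 100)  # 2 * 1.645 * 10 / 10 = 3.29
```

Since 100 is four times 25, the n = 25 interval comes out exactly √4 = 2 times as wide as the n = 100 interval, whatever SD is assumed.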