Assume that a standardized test is designed to have a mean score of 100 and a standard deviation of 15. At the 95% confidence level, how large does the sample size have to be if the margin of error is to be 3 points?

Formula:

n = {[(z-value) * sd]/E}^2
...where n = sample size, sd = population standard deviation, E = margin of error, and ^2 means squared.

Using the values you have in your problem:

n = {[(1.96) * 15]/3}^2

Solve for the sample size. Round your answer up to the nearest whole number.

To find out how large the sample size needs to be, we start from the margin of error formula at the 95% confidence level and solve it for the sample size.

The margin of error formula is given by:
Margin of Error = (Z * Standard Deviation) / sqrt(n)

Where:
Z is the Z-score corresponding to the desired confidence level.
Standard Deviation is the population standard deviation.
n is the sample size.
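
To see the formula in action before solving for n, here is a minimal Python sketch that computes the margin of error for a hypothetical sample size of 100 (the value 100 is just an example, not part of the problem):

    import math

    z = 1.96    # Z-score for 95% confidence
    sd = 15     # population standard deviation
    n = 100     # hypothetical sample size, for illustration only

    margin_of_error = (z * sd) / math.sqrt(n)
    print(margin_of_error)  # 2.94 points for a sample of 100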

Let's start by finding the Z-score for a 95% confidence level. The Z-score can be found using a Z-table or a statistical calculator. For a 95% confidence level, the Z-score is approximately 1.96.
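
If you would rather compute the Z-score than read it from a table, here is a quick sketch using SciPy (this assumes SciPy is installed; norm.ppf returns the quantile of the standard normal distribution):

    from scipy.stats import norm

    # A 95% confidence level leaves 2.5% in each tail,
    # so we need the 0.975 quantile of the standard normal.
    z = norm.ppf(0.975)
    print(z)  # ~1.95996, conventionally rounded to 1.96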

Now we can rearrange the formula to solve for the sample size (n):
n = ((Z * Standard Deviation) / Margin of Error) ^ 2

Plugging in the given values:
Z = 1.96
Standard Deviation = 15
Margin of Error = 3

n = ((1.96 * 15) / 3) ^ 2
n = (29.4 / 3) ^ 2
n = (9.8) ^ 2
n = 96.04

Since the sample size must be a whole number and at least 96.04 observations are required, we round up to the next integer. Therefore, the sample size should be at least 97 to achieve a margin of error of no more than 3 points at the 95% confidence level.
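
As a check on the arithmetic, here is a minimal Python sketch of the whole calculation, using math.ceil to round up since a fractional observation is not possible:

    import math

    z = 1.96   # Z-score for 95% confidence
    sd = 15    # population standard deviation
    e = 3      # target margin of error

    n = math.ceil(((z * sd) / e) ** 2)  # 96.04 rounds up to 97
    print(n)  # 97

    # Verify: the margin of error actually achieved with n = 97
    print((z * sd) / math.sqrt(n))  # ~2.985, within the 3-point target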