The mayor would like to estimate the mean number of automobile crashes per day at the intersection of two highways. A random sample of 40 days' worth of data had a mean of 4.3 crashes and a standard deviation of 0.9. How large a sample would be needed to produce a 95% confidence interval with a maximum error of estimate of 0.1?

To determine the sample size needed to estimate the mean number of crashes per day with a maximum error of estimate of 0.1 at 95% confidence, use the sample-size formula for a mean. (Note that the sample mean of 4.3 is not needed here; only the standard deviation and the desired margin of error enter the calculation.)

n = (Z * σ / E)²

where:
n is the required sample size,
Z is the z-score for the desired confidence level (approximately 1.96 for 95% confidence),
σ is the population standard deviation, estimated here by the sample standard deviation of 0.9 crashes,
E is the maximum error of estimate (given as 0.1 crashes).

Plugging in the values:

n = (1.96 * 0.9 / 0.1)²
n = 17.64²
n ≈ 311.17

Since the sample size must always be rounded up to guarantee the error bound, a sample of 312 days' worth of data would be needed to produce a 95% confidence interval with a maximum error of estimate of 0.1.
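
If you want to verify the arithmetic, here is a minimal Python sketch (the helper name required_sample_size is just for illustration, not part of the problem):

```python
from math import ceil
from statistics import NormalDist

def required_sample_size(sigma: float, margin: float, confidence: float = 0.95) -> int:
    """Smallest n so that a z-interval with this sigma has half-width <= margin."""
    # Two-sided critical value: for 95% confidence this is about 1.96
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    n = (z * sigma / margin) ** 2
    # Round up so the maximum error of estimate does not exceed the target
    return ceil(n)

print(required_sample_size(sigma=0.9, margin=0.1))  # 312 (311.17 before rounding up)
```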