# Statistics

posted by **Anonymous**.

You plan to conduct a political poll of people in Berkeley, asking whether they prefer apples (A) or bananas (B). You have been endowed with the ability to select people completely at random from the population of Berkeley. If you ask n randomly selected people which they prefer and calculate the fraction that prefer apples, the variance of your result will be about 0.25 / n (take this as a given fact). You would like to choose n large enough that your 95% confidence interval extends about 0.03 above your estimate (whatever it turns out to be) and about 0.03 below it. (If you read the newspaper, political polls are always reported with "a 3% margin of error," and that is what is being defined here.)

By definition, a 95% confidence interval needs to extend roughly 2 standard deviations on either side of the estimated mean (in this case, the mean is the fraction that prefer apples; it turns out this will be very close to normally distributed even though the fractions can take only discrete values). Using that rule, how many samples do you need for your 95% confidence interval to extend 0.03 above and 0.03 below the mean?
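The question reduces to solving 2 · sqrt(0.25 / n) = 0.03 for n. A minimal sketch of that arithmetic, assuming the stated variance of 0.25 / n and the 2-standard-deviation rule of thumb:

```python
import math

variance_times_n = 0.25  # variance of the sample fraction is 0.25 / n (given in the problem)
margin = 0.03            # desired half-width of the 95% confidence interval
z = 2                    # rule of thumb: 95% CI extends ~2 standard deviations each way

# Solve z * sqrt(variance_times_n / n) = margin for n
n = variance_times_n * (z / margin) ** 2

print(math.ceil(n))  # → 1112
```

Rounding up to the next whole person gives about 1,100 samples, which matches the sample sizes typically quoted for polls with a 3% margin of error.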