A sample of 400 high school students showed that they spend an average of 70 minutes a day watching television with a standard deviation of 14 minutes. Another sample of 500 college students showed that they spend an average of 55 minutes a day watching television with a standard deviation of 12 minutes.

a. Construct a 99% confidence interval for the difference between the mean times spent watching television by all high school students and all college students.
b. Test at the 2.5% significance level if the mean time spent watching television per day by high school students is higher than the mean time spent watching television by college students.

Z = (mean1 - mean2)/standard error (SE) of difference between means

SEdiff = √(SEmean1^2 + SEmean2^2)

SEmean = SD/√n

If only one SD is provided, use that same SD for both samples when computing SEdiff.

Find the table in the back of your statistics text labeled something like "areas under the normal distribution" to find the proportion associated with the Z score.
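As a quick numerical check, here is a minimal Python sketch (using only the standard math module) that plugs the summary statistics from this problem into the formulas above. The variable names (mean1, sd1, se_diff, and so on) are just labels chosen for this sketch to match the notation above.

import math

# Summary statistics from the problem
mean1, sd1, n1 = 70, 14, 400   # high school sample
mean2, sd2, n2 = 55, 12, 500   # college sample

se1 = sd1 / math.sqrt(n1)             # SE of the high school mean = 0.70
se2 = sd2 / math.sqrt(n2)             # SE of the college mean ≈ 0.537
se_diff = math.sqrt(se1**2 + se2**2)  # SE of the difference ≈ 0.882

z = (mean1 - mean2) / se_diff
print(f"SEdiff = {se_diff:.3f}, Z = {z:.2f}")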

To construct a confidence interval for the difference between the mean times spent watching television by high school students and college students, we can use the following formula:

Confidence Interval = (x1 - x2) ± Z * √((σ1^2 / n1) + (σ2^2 / n2))

Where:
- x1 and x2 are the sample means
- σ1 and σ2 are the sample standard deviations
- n1 and n2 are the sample sizes
- Z is the Z-score representing the desired confidence level

a. Constructing a 99% confidence interval for the difference:

Given:
High school sample mean (x1) = 70 minutes
High school sample standard deviation (σ1) = 14 minutes
High school sample size (n1) = 400

College sample mean (x2) = 55 minutes
College sample standard deviation (σ2) = 12 minutes
College sample size (n2) = 500

We need the Z-score for a 99% confidence level, which corresponds to an alpha level of 1% (or 0.01) split between the two tails. Looking this up in a Z-table or using a calculator, the critical value for a 99% confidence level is approximately 2.576.
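If software is available, the same critical value can be pulled from SciPy's standard normal distribution instead of a printed table (this sketch assumes the scipy package is installed; norm.ppf is the inverse CDF, i.e. the percentile point function):

from scipy.stats import norm

confidence = 0.99
alpha = 1 - confidence
# Two-sided interval: alpha/2 in each tail, so we need the 99.5th percentile
z_crit = norm.ppf(1 - alpha / 2)
print(round(z_crit, 3))   # ≈ 2.576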

Calculating the confidence interval:
Confidence Interval = (70 - 55) ± 2.576 * √((14^2 / 400) + (12^2 / 500))

Simplifying the calculation:
Confidence Interval = 15 ± 2.576 * √(0.49 + 0.288)
Confidence Interval = 15 ± 2.576 * √0.778
Confidence Interval = 15 ± 2.576 * 0.882

Final result:
Confidence Interval = 15 ± 2.272

Therefore, the 99% confidence interval for the difference between the mean times spent watching television by high school students and college students is approximately (12.73, 17.27) minutes.
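A short Python sketch that reproduces this interval, using the figures given above and the 2.576 critical value:

import math

mean1, sd1, n1 = 70, 14, 400   # high school
mean2, sd2, n2 = 55, 12, 500   # college
z = 2.576                      # critical value for 99% confidence

se_diff = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
margin = z * se_diff
lower = (mean1 - mean2) - margin
upper = (mean1 - mean2) + margin
print(f"99% CI: ({lower:.2f}, {upper:.2f}) minutes")   # ≈ (12.73, 17.27)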

b. Testing if the mean time spent watching television by high school students is higher than college students:

To test this, we will conduct a one-tailed hypothesis test at the 2.5% significance level.

Null hypothesis (H0): The mean time spent watching television by high school students is equal to or less than the mean time spent by college students. µ1 ≤ µ2

Alternate hypothesis (Ha): The mean time spent watching television by high school students is higher than the mean time spent by college students. µ1 > µ2

Assuming the null hypothesis is true, we calculate the test statistic (Z) using the formula:

Z = (x1 - x2) / √((σ1^2 / n1) + (σ2^2 / n2))

Z = (70 - 55) / √((14^2 / 400) + (12^2 / 500))
Z = 15 / √(0.49 + 0.288)
Z = 15 / √0.778
Z ≈ 15 / 0.882
Z ≈ 17.01

Using a Z-table or calculator, we find that the critical Z-value for a 2.5% significance level (right-tailed test) is approximately 1.96.

Since the calculated Z-value (17.01) is greater than the critical Z-value (1.96), we reject the null hypothesis and conclude that the mean time spent watching television by high school students is significantly higher than the mean time spent by college students at the 2.5% significance level.
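As a final check, the whole test can be reproduced in a few lines of Python (assuming SciPy is available); the sketch computes the test statistic, the right-tailed critical value, and a p-value:

import math
from scipy.stats import norm

mean1, sd1, n1 = 70, 14, 400   # high school
mean2, sd2, n2 = 55, 12, 500   # college
alpha = 0.025                  # significance level, right-tailed test

se_diff = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
z_stat = (mean1 - mean2) / se_diff
z_crit = norm.ppf(1 - alpha)   # ≈ 1.96
p_value = norm.sf(z_stat)      # P(Z > z_stat), essentially zero here

print(f"Z = {z_stat:.2f}, critical value = {z_crit:.2f}, p = {p_value:.3g}")
if z_stat > z_crit:
    print("Reject H0: the high school mean appears higher.")
else:
    print("Fail to reject H0.")

With a Z statistic around 17, the p-value is vanishingly small, so the conclusion does not depend on the exact critical value used.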