For each of the following, assume the two samples are selected from populations with equal means and calculate how much difference should be expected, on average, between the two sample means.

a) Each sample has n = 5 scores, with s^2 = 38 for the first sample and s^2 = 42 for the second.

b) Each sample has n = 20 scores, with s^2 = 38 for the first sample and s^2 = 42 for the second.

To calculate how much difference should be expected, on average, between the two sample means, we use the standard error of the difference between two sample means:

Standard Error (SE) = √[(s1^2 / n1) + (s2^2 / n2)]

where s1^2 and s2^2 are the sample variances of the two samples, and n1 and n2 are their sample sizes.
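As a quick sanity check, here is a minimal Python sketch of this formula (the function name standard_error_diff is just illustrative, not from the original problem):

    import math

    def standard_error_diff(var1, n1, var2, n2):
        # Standard error of the difference between two independent sample means:
        # sqrt(s1^2/n1 + s2^2/n2)
        return math.sqrt(var1 / n1 + var2 / n2)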

Let's calculate the expected difference in means for each scenario:

a) For the first scenario, each sample has n = 5 scores with s^2 = 38 for the first sample and s^2 = 42 for the second sample.

Using the formula, the standard error (SE) would be:

SE = √[(38 / 5) + (42 / 5)]
= √[7.6 + 8.4]
= √16
= 4

Therefore, the expected difference between the two sample means for this scenario is 4.
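Plugging the part (a) numbers into the sketch above reproduces this value:

    print(standard_error_diff(38, 5, 42, 5))   # 4.0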

b) For the second scenario, each sample has n = 20 scores with s^2 = 38 for the first sample and s^2 = 42 for the second sample.

Using the formula, the standard error (SE) would be:

SE = √[(38 / 20) + (42 / 20)]
= √[1.9 + 2.1]
= √4
= 2

Therefore, the expected difference between the two sample means for this scenario is 2.
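And the same check for part (b):

    print(standard_error_diff(38, 20, 42, 20))  # 2.0

Note that increasing each sample size from 5 to 20 (four times as large) cuts the expected difference in half, from 4 to 2, because the standard error shrinks with the square root of the sample size.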