If random samples, each with n = 36 scores, are selected from a normal population with μ = 80 and σ = 18, how much difference, on average, should there be between a sample mean and the population mean?

To determine the average difference between a sample mean and the population mean, use the formula for the standard error of the mean (SEM):

SEM = σ / √n,

where σ is the population standard deviation and n is the sample size.

In this case, the population standard deviation (σ) is 18 and the sample size (n) is 36.

So, the standard error of the mean (SEM) is:

SEM = 18 / √36,
= 18 / 6,
= 3.

Therefore, on average, the difference between a sample mean and the population mean should be 3.
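As a quick sanity check, the result can be verified by simulation: draw many samples of size n = 36 from the stated normal population and measure how much the sample means spread around μ. This is a minimal sketch using NumPy; the seed and the number of replications (100,000) are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed, chosen only for reproducibility
mu, sigma, n = 80, 18, 36

# Draw 100,000 samples of size n and compute each sample's mean
sample_means = rng.normal(mu, sigma, size=(100_000, n)).mean(axis=1)

# The standard deviation of the sample means estimates the SEM,
# which should be close to the theoretical value sigma / sqrt(n) = 3
print(sample_means.std())
print(sigma / np.sqrt(n))  # prints 3.0
```

The empirical standard deviation of the 100,000 sample means comes out very close to 3, matching the formula.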

Strictly speaking, the expected signed difference is zero, since sample means are equally likely to fall above or below the population mean. The question, however, asks for the typical size of the difference, which is the standard error of 3. With larger samples, the sample mean approximates the population mean even more closely.