In an exam, we randomly chose some students who took both a pretest and a posttest. We are interested in the difference between the two test scores. Assume that the standard deviation σ of the score difference is 10 and that the difference follows a normal distribution. The null hypothesis H0 is that there is no difference between the scores, and it is important to detect the alternative H1 that the mean score difference is 2.

Please calculate the power for sample sizes of 10, 20, and 50. Then determine how large the sample should be to detect this alternative with a power of 0.8 (α = 0.05).

To calculate the power for different sample sizes, we need to use the formula for power.

Power = P (Rejecting H0 | H1 is true)

Given that the standard deviation (σ) of the score difference is 10 and the difference follows a normal distribution, the test is based on the mean difference X̄, whose standard error is σ/√n. With a two-sided test at α = 0.05, H0 is rejected when |X̄| / (σ/√n) > 1.96.

Under H1 the true mean difference is µ = 2, so the standardized mean difference is centred at 2√n/σ rather than at 0. Ignoring the negligible lower rejection tail,

Power ≈ P(Z > 1.96 - 2√n/σ) = Φ(2√n/σ - 1.96)

where Φ is the standard normal cumulative distribution function (CDF), which we can evaluate with a table or software.

Let's calculate the power for each sample size:

For a sample size of 10:
2√n/σ = 2 * √10 / 10 ≈ 0.63

Power ≈ Φ(0.63 - 1.96) = Φ(-1.33)

Using a standard normal distribution table or software, Φ(-1.33) ≈ 0.09. Therefore, the power for a sample size of 10 is approximately 0.09.

For a sample size of 20:
2√n/σ = 2 * √20 / 10 ≈ 0.89

Power ≈ Φ(0.89 - 1.96) = Φ(-1.07)

Using a standard normal distribution table, Φ(-1.07) ≈ 0.14. Therefore, the power for a sample size of 20 is approximately 0.14.

For a sample size of 50:
2√n/σ = 2 * √50 / 10 ≈ 1.41

Power ≈ Φ(1.41 - 1.96) = Φ(-0.55)

Using a standard normal distribution table, Φ(-0.55) ≈ 0.29. Therefore, the power for a sample size of 50 is approximately 0.29.
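
As a check on these numbers, here is a small Python sketch (an illustration, not part of the original solution) that evaluates the power with scipy; it includes both rejection tails and uses unrounded critical values, so the n = 10 and n = 20 results come out slightly higher than the one-tailed hand approximations above.

```python
from math import sqrt
from scipy.stats import norm

sigma, delta, alpha = 10.0, 2.0, 0.05      # sd of differences, true mean difference, significance level
z_crit = norm.ppf(1 - alpha / 2)           # two-sided critical value, about 1.96

for n in (10, 20, 50):
    se = sigma / sqrt(n)                   # standard error of the mean difference
    # power = P(reject H0 | mu = delta), counting both rejection tails
    power = norm.sf(z_crit - delta / se) + norm.cdf(-z_crit - delta / se)
    print(f"n = {n:2d}: power = {power:.3f}")
# prints roughly 0.097, 0.145, 0.293
```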

Now, let's determine the sample size needed to achieve a power of 0.8:

We can set the power expression equal to 0.8 and solve for the sample size:

Power ≈ Φ(2√n/σ - 1.96) = 0.8

Using a standard normal distribution table or software, the z-score whose cumulative probability is 0.8 is approximately 0.84. So we need:

2√n/σ - 1.96 = 0.84
√n = (1.96 + 0.84) * σ / 2 = (1.96 + 0.84) * 10 / 2 = 14
n = 14^2 = 196

Therefore, the required sample size to achieve a power of 0.8 is approximately 196 (the more precise quantile 0.8416 gives n ≈ 196.2, which rounds up to 197).
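
As a sanity check (an illustrative sketch, not part of the original solution), the smallest n whose two-sided power actually reaches 0.8 can also be found by a direct search:

```python
from math import sqrt
from scipy.stats import norm

sigma, delta, alpha, target = 10.0, 2.0, 0.05, 0.80
z_crit = norm.ppf(1 - alpha / 2)

def power(n):
    """Two-sided power of the normal test for a given sample size n."""
    shift = delta * sqrt(n) / sigma
    return norm.sf(z_crit - shift) + norm.cdf(-z_crit - shift)

n = 1
while power(n) < target:
    n += 1
print(n, round(power(n), 4))   # 197, since power(196) falls just below 0.80
```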

In conclusion, the power for sample sizes of 10, 20, and 50 is approximately 0.09, 0.14, and 0.29, respectively, and the sample size needed to achieve a power of 0.8 is about 196-197.

To calculate the power for different sample sizes, we can also use the formula for power:

Power = 1 - P(Type II error)

To calculate the power, we need to calculate the probability of a Type II error, which is the probability of failing to reject the null hypothesis (H0) when it is false.

Given that the standard deviation (σ) of the score difference is 10, and the difference follows a normal distribution, the standard error (SE) of the mean difference is:

SE = σ/√n

Where n is the sample size.

To calculate the probability of a Type II error, we first need the rejection boundary implied by the significance level. Given that the alpha level (α) is 0.05 and the test is two-tailed, H0 is rejected when the standardized mean difference exceeds ±1.96; on the original scale the upper boundary is 1.96 * SE (the lower boundary, -1.96 * SE, contributes negligibly when the true mean difference is +2).

A Type II error occurs when the observed mean difference falls below this boundary even though the mean under H1 is µ1 = 2 (the mean under H0 is µ0 = 0). Standardizing the boundary under H1 gives

z = (1.96 * SE - µ1) / SE = 1.96 - µ1/SE

and P(Type II error) = Φ(z).
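
For reference, the ±1.96 boundary is just the 97.5th percentile of the standard normal distribution; a quick, purely illustrative check in Python:

```python
from scipy.stats import norm
print(norm.ppf(0.975))   # ≈ 1.96, the two-sided critical value for alpha = 0.05
```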

Now, let's calculate the power for sample sizes of 10, 20, and 50.

For a sample size of 10:
SE = σ/√n = 10/√10 ≈ 3.16
z = 1.96 - µ1/SE = 1.96 - 2/3.16 ≈ 1.33

Using a standard normal distribution table or statistical software, P(Type II error) = Φ(1.33) ≈ 0.91.

Power = 1 - P(Type II error) = 1 - 0.91 = 0.09 (9%)

For a sample size of 20:
SE = σ/√n = 10/√20 ≈ 2.24
z = 1.96 - 2/2.24 ≈ 1.07

P(Type II error) = Φ(1.07) ≈ 0.86:
Power = 1 - P(Type II error) = 1 - 0.86 = 0.14 (14%)

For a sample size of 50:
SE = σ/√n = 10/√50 ≈ 1.41
z = 1.96 - 2/1.41 ≈ 0.55

P(Type II error) = Φ(0.55) ≈ 0.71:
Power = 1 - P(Type II error) = 1 - 0.71 = 0.29 (29%)
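
The same three powers can be obtained by working directly on the scale of the observed mean difference: the upper rejection boundary is 1.96 * SE, and β is the probability that the mean difference stays below it when the true mean is 2. A short sketch (illustrative; like the hand calculation above, it ignores the negligible lower rejection boundary):

```python
from math import sqrt
from scipy.stats import norm

sigma, mu1 = 10.0, 2.0
for n in (10, 20, 50):
    se = sigma / sqrt(n)
    boundary = 1.96 * se                            # upper rejection boundary for the mean difference under H0
    beta = norm.cdf(boundary, loc=mu1, scale=se)    # P(fail to reject | true mean difference = 2)
    print(f"n = {n:2d}: beta = {beta:.2f}, power = {1 - beta:.2f}")
# n = 10: beta ≈ 0.91, power ≈ 0.09
# n = 20: beta ≈ 0.86, power ≈ 0.14
# n = 50: beta ≈ 0.71, power ≈ 0.29
```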

To determine how large the sample should be if we need to detect the alternative hypothesis with a power of 0.8 (80%), we solve the power equation for n. Rearranging gives:

n = (σ * (Zα/2 + Zβ) / (µ1 - µ0))^2

Where Zα/2 is the critical value corresponding to the two-sided significance level (1.96 for α = 0.05), and Zβ is the z-score corresponding to the desired power (0.84 for a power of 0.8).

Using Zα/2 = 1.96 and Zβ = 0.84, with σ = 10 and (µ1 - µ0) = 2:

n = (10 * (1.96 + 0.84) / 2)^2 = (10 * 2.8 / 2)^2 = 14^2 = 196

Therefore, a sample size of about 196 is required to achieve a power of 0.8 (80%) against this alternative (the more precise Zβ = 0.8416 gives n ≈ 196.2, which rounds up to 197).
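
The closed-form sample-size formula can be evaluated directly; a minimal sketch (illustrative only), using exact normal quantiles instead of the rounded 1.96 and 0.84:

```python
from math import ceil
from scipy.stats import norm

sigma, delta, alpha, power = 10.0, 2.0, 0.05, 0.80
z_alpha = norm.ppf(1 - alpha / 2)   # ≈ 1.9600
z_beta = norm.ppf(power)            # ≈ 0.8416
n = (sigma * (z_alpha + z_beta) / delta) ** 2
print(n, ceil(n))                   # ≈ 196.2, so 197 when rounded up
```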