The current rate of success is 70% for 200 students. When the new test is implemented, the rate increases to 85% for the 50 students who pass the test and are allowed into the program. Is this difference significant?

I will get you started.

You want to see if .85 is different from .70.

The numerator of your fraction will be:

.85 - .70 (always subtract the population proportion from the sample proportion)

The .15 then has to be divided by the standard error of the proportion. Do you have that formula?

The value you obtain after the division is a z-score.

Depending on how your instructor teaches this concept, obtaining a large z-score might be sufficient to say that there is a difference. Some instructors ask you to find the p-value from a table or your calculator and then compare that p-value with a given alpha, commonly .05. If your p-value is smaller than .05, then you can say there is a difference.
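The one-proportion test described above can be sketched in a few lines of Python, using only the standard library. This assumes the setup implied by the thread: a baseline proportion of 0.70 tested against a sample proportion of 0.85 from the n = 50 students who took the new test.

```python
from math import sqrt
from statistics import NormalDist

p0 = 0.70    # population (baseline) proportion
p_hat = 0.85 # sample proportion
n = 50       # sample size

# Standard error under the null hypothesis: sqrt(p0 * (1 - p0) / n)
se = sqrt(p0 * (1 - p0) / n)

# z-score: sample proportion minus population proportion, divided by SE
z = (p_hat - p0) / se

# Two-tailed p-value from the standard normal distribution
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(round(z, 2))        # z ≈ 2.31
print(round(p_value, 3))  # p ≈ 0.021
```

Since 0.021 < 0.05, this version of the test would also call the difference significant.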

X1 and X2 are the numbers of successes in samples 1 and 2, respectively.

p = (140 + 42.5) / (200 + 50) = 182.5 / 250 = 0.73 (note: 85% of 50 is 42.5 successes, not 43)
z = (0.70 - 0.85) / √((0.73)(1 - 0.73)(1/200 + 1/50)) = -0.15 / √0.00493 = -0.15 / 0.0702 ≈ -2.14
For the difference in proportions to be significant at the 0.05 level (two-tailed), the z value must fall outside ±1.96; degrees of freedom are not needed for a z-test. Because |−2.14| = 2.14 exceeds 1.96, we reject the null hypothesis and conclude that the success rate under the new test is significantly different from (here, higher than) the original 70%.
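The pooled two-proportion calculation above can be checked with a short Python sketch; the counts X1 = 140 of n1 = 200 and X2 = 42.5 of n2 = 50 are taken from the thread (the fractional 42.5 comes from 85% of 50).

```python
from math import sqrt

x1, n1 = 140, 200   # successes and sample size before the new test
x2, n2 = 42.5, 50   # successes and sample size after the new test

p1, p2 = x1 / n1, x2 / n2        # 0.70 and 0.85
p_pool = (x1 + x2) / (n1 + n2)   # pooled proportion, 182.5 / 250 = 0.73

# Pooled standard error: sqrt(p * (1 - p) * (1/n1 + 1/n2))
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

print(round(z, 2))   # -2.14
```

The sign of z only reflects the order of subtraction; it is the magnitude 2.14 that gets compared with the critical value 1.96.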

To determine if the difference in success rates is significant, we can perform a hypothesis test. Here's how we can calculate it:

Step 1: Define the null hypothesis (H0) and the alternative hypothesis (Ha).
H0: There is no significant difference in success rates before and after the new test is implemented.
Ha: There is a significant difference in success rates before and after the new test is implemented.

Step 2: Determine the test statistic and its distribution.
Since we are comparing success rates, which are proportions, we can use the z-test for two proportions.

Step 3: Calculate the test statistic.
To determine the test statistic, we need to calculate the standard error and the difference in proportions.
First, let's find the proportion of success before and after the new test:
Before: 70% of 200 students = (70/100) * 200 = 140 students
After: 85% of 50 students = (85/100) * 50 = 42.5 students (a fractional count, which suggests the 85% figure is rounded; carry 42.5 through the arithmetic)

Next, calculate the standard error using the formula:
SE = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
where p1 and p2 are the proportions, and n1 and n2 are the respective sample sizes.
SE = sqrt((140/200) * (60/200) / 200 + (42.5/50) * (7.5/50) / 50) = sqrt(0.00105 + 0.00255) = sqrt(0.0036) = 0.06

Now calculate the difference in proportions and divide it by the standard error to obtain the test statistic:
Difference = p2 - p1 = 42.5/50 - 140/200 = 0.85 - 0.70 = 0.15
z = Difference / SE = 0.15 / 0.06 = 2.5

Step 4: Determine the critical value and calculate the p-value.
With the calculated test statistic, we can obtain the critical value from the standard normal distribution (z-table) or find the p-value using statistical software like Excel or Python.

Step 5: Make a decision.
If the p-value is less than the significance level (typically 0.05), we reject the null hypothesis and conclude that there is a significant difference in success rates before and after the new test.

Once you have the values for the test statistic and the critical value or p-value, you can evaluate if the difference in success rates is significant.
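The five steps above can be put together with Python's standard library. This sketch uses the unpooled standard-error formula from Step 3 and carries the fractional success count (42.5) through as in the text.

```python
from math import sqrt
from statistics import NormalDist

p1, n1 = 140 / 200, 200   # before the new test: 0.70
p2, n2 = 42.5 / 50, 50    # after the new test:  0.85

# Step 3: unpooled standard error and test statistic
se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z = (p2 - p1) / se

# Step 4: two-tailed p-value from the standard normal distribution
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

# Step 5: decision at the usual significance level
alpha = 0.05
print(round(z, 2))                 # 2.5
print(round(p_value, 3))           # 0.012
print(p_value < alpha)             # True -> reject H0
```

Note that this unpooled version gives z = 2.5, while the pooled version used earlier in the thread gives |z| ≈ 2.14; both lead to rejecting H0 at alpha = 0.05.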