If alpha is changed from .05 to .01, is it easier or harder to make a Type I error?

To determine whether it is easier or harder to make a Type I error when alpha is changed from 0.05 to 0.01, we need to understand the concept of Type I error and the significance level (alpha).

A Type I error is the rejection of a true null hypothesis. In hypothesis testing, we set a significance level (alpha) as the threshold at which we consider the evidence against the null hypothesis strong enough to reject it. If the observed p-value (the probability, assuming the null hypothesis is true, of obtaining a test statistic as extreme as, or more extreme than, the observed value) is less than or equal to alpha, we reject the null hypothesis.
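This decision rule can be sketched in a few lines of Python (the function name `decide` is just an illustrative choice, not a standard API). Note how the same p-value can lead to rejection at alpha = 0.05 but not at alpha = 0.01:

```python
def decide(p_value, alpha):
    """Reject the null hypothesis iff the p-value is at or below alpha."""
    return "reject H0" if p_value <= alpha else "fail to reject H0"

# A p-value of 0.03 clears the 0.05 bar but not the stricter 0.01 bar.
print(decide(0.03, alpha=0.05))  # reject H0
print(decide(0.03, alpha=0.01))  # fail to reject H0
```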

Now, if alpha is decreased from 0.05 to 0.01, it means the significance level is reduced. This implies that we become more conservative in rejecting the null hypothesis. In other words, we require stronger evidence to support the rejection of the null hypothesis.

Therefore, when alpha is decreased from 0.05 to 0.01, it becomes harder to make a Type I error. In fact, when the null hypothesis is true, alpha is exactly the probability of committing a Type I error, so lowering it from 0.05 to 0.01 lowers that probability from 5% to 1%. We are raising the bar for what counts as statistically significant and requiring stronger evidence against the null hypothesis before rejecting it.

In summary, reducing alpha from 0.05 to 0.01 makes it harder to make a Type I error, as it requires more substantial evidence to reject the null hypothesis.
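A quick Monte Carlo sketch makes this concrete. The code below (helper names like `type_i_error_rate` are my own, and the z-test with known sigma is an assumed test setup) repeatedly draws samples from a distribution where the null hypothesis is true, and counts how often each alpha level falsely rejects it. The empirical false-positive rate should hover near each alpha:

```python
import math
import random

def z_test_p_value(sample, mu0=0.0, sigma=1.0):
    """Two-sided z-test p-value for H0: population mean == mu0, known sigma."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    # Standard normal CDF via math.erf; two-sided tail probability.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def type_i_error_rate(alpha, trials=20000, n=30, seed=42):
    """Fraction of trials in which a TRUE null hypothesis is rejected."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(trials):
        # Draw from N(0, 1), so H0 (mean == 0) really is true.
        sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
        if z_test_p_value(sample) <= alpha:
            rejections += 1
    return rejections / trials

print(type_i_error_rate(0.05))  # close to 0.05
print(type_i_error_rate(0.01))  # close to 0.01, i.e. fewer false positives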

In statistical hypothesis testing, a type I error is the rejection of a true null hypothesis (also known as a "false positive" finding or conclusion), while a type II error is the failure to reject a false null hypothesis (also known as a "false negative" finding or conclusion).

Harder