Is an effect size or p-value more important in replication studies?

In replication studies, both effect size and p-value are important, but they serve different purposes.

Effect size measures the magnitude of the observed effect or relationship between variables. It indicates the practical or substantive significance of the findings and is typically reported as a numerical value, such as a correlation coefficient or a standardized mean difference (e.g., Cohen's d). In replication studies, what matters is less that the effect size is large than that it is consistent: a replication that produces an effect size similar in magnitude and direction to the original suggests the finding is reliable and robust.
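To make the standardized mean difference concrete, here is a minimal sketch of Cohen's d computed with the pooled standard deviation. The data below are hypothetical, invented purely to illustrate comparing an original study's effect size against a replication's:

```python
import numpy as np

def cohens_d(group1, group2):
    """Standardized mean difference (Cohen's d) using the pooled SD."""
    g1 = np.asarray(group1, dtype=float)
    g2 = np.asarray(group2, dtype=float)
    n1, n2 = len(g1), len(g2)
    pooled_sd = np.sqrt(
        ((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2)
    )
    return (g1.mean() - g2.mean()) / pooled_sd

# Hypothetical data: treatment vs. control in the original and the replication
original_d = cohens_d([5.1, 6.2, 5.8, 6.5, 5.9], [4.2, 4.8, 4.5, 5.0, 4.3])
replication_d = cohens_d([5.5, 6.0, 5.7, 6.1, 5.4], [4.6, 4.9, 4.4, 5.2, 4.7])
print(f"original d = {original_d:.2f}, replication d = {replication_d:.2f}")
```

If the two d values are close, the replication supports the original finding's magnitude, regardless of whether either study's p-value crosses a significance threshold.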

On the other hand, the p-value is a statistical measure that quantifies the strength of evidence against the null hypothesis. It indicates how likely it would be to observe results at least as extreme as those obtained if there were no true effect in the population. A p-value ranges from 0 to 1, with smaller values indicating stronger evidence against the null hypothesis. In a replication study, a significant p-value (conventionally less than 0.05) suggests that the observed effect is unlikely to be due to chance alone, which is evidence that the original finding was not simply random variation. A p-value by itself, however, says nothing about the size of the effect, and because it depends heavily on sample size, a significant result in a large replication can accompany a much smaller effect than originally reported.
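A quick sketch of how a replication's p-value might be obtained, here with an independent-samples t-test from SciPy on simulated data (the groups, seed, and true effect of 0.5 SD are assumptions for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated replication sample: true mean difference of 0.5 SD
treatment = rng.normal(loc=0.5, scale=1.0, size=50)
control = rng.normal(loc=0.0, scale=1.0, size=50)

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A p-value below 0.05 here would conventionally be read as a successful detection of the effect in the replication, but the effect size should still be compared with the original's.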

So, both effect size and p-value matter in replication studies, but they answer different questions. Effect size speaks to the practical significance of the finding and whether the replication matches the original in magnitude; the p-value speaks to statistical significance, i.e., whether the observed effect is unlikely under the null hypothesis. Because p-values are sensitive to sample size, many methodologists treat the comparison of effect sizes as the more informative criterion for judging whether a result has replicated.