I have a quick question for you all. I have done a repeated measures ANOVA for hypothetical data in my Stats class. There were 5 subjects: all of them rated their mood on a cloudy day, then they all rated their mood on a sunny day. In sum, they all underwent the same order of reporting mood.

I found a significant effect of weather. Where I'm stuck is that my prof asked me, "How sure can I be of my results?" I know the drawbacks of repeated measures designs, but I'm not sure what to say beyond the usual "there could be spillover" problem. Thanks so much for any help you can offer!

To assess how much confidence you can place in your repeated measures ANOVA results, there are a few factors to consider:

1. Effect size: One way to evaluate the magnitude of the effect is to calculate an effect size statistic. For repeated measures ANOVA, common choices are partial eta-squared (ηp²) and Cohen's d. Partial eta-squared describes the proportion of variance explained by the independent variable (weather) once between-subject variance has been removed, while Cohen's d expresses the mean difference between conditions in standard deviation units.

To calculate partial eta-squared, divide the sum of squares for the weather effect by that sum of squares plus its error sum of squares, i.e. SS_effect / (SS_effect + SS_error); dividing by the total sum of squares instead gives plain eta-squared. The resulting value ranges from 0 to 1, where higher values indicate a larger effect. Cohen's d for paired data (often written d_z) compares the mean difference between conditions to the standard deviation of the difference scores. A larger effect size suggests a more substantial effect, independent of the p-value. A quick sketch of both calculations is below.
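As a concrete illustration, here is a minimal Python sketch of both calculations. The mood ratings are made-up placeholder numbers, not your data, and with only two conditions both sums of squares can be written in terms of the difference scores.

```python
# Minimal sketch: effect sizes for a two-condition repeated measures design.
# The ratings below are hypothetical placeholders standing in for your data.
import numpy as np

cloudy = np.array([4.0, 5.0, 3.0, 6.0, 5.0])  # mood ratings, cloudy day
sunny = np.array([7.0, 8.0, 6.0, 8.0, 7.0])   # same 5 subjects, sunny day
diff = sunny - cloudy                          # difference score per subject

# Cohen's d_z for paired data: mean difference / SD of the differences
d_z = diff.mean() / diff.std(ddof=1)

# Partial eta-squared = SS_effect / (SS_effect + SS_error).
# With two conditions both terms can be computed from the difference scores.
n = len(diff)
ss_effect = n * diff.mean() ** 2 / 2               # SS for the weather factor
ss_error = ((diff - diff.mean()) ** 2).sum() / 2   # weather x subject error SS
eta_p2 = ss_effect / (ss_effect + ss_error)

print(f"Cohen's d_z = {d_z:.2f}, partial eta-squared = {eta_p2:.2f}")
```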

2. Statistical significance: You mentioned that you found a significant effect of weather. The p-value tells you how likely data at least as extreme as yours would be if weather actually had no effect on mood; a p-value below the conventional 0.05 threshold is typically labelled statistically significant. Keep in mind that significance alone says nothing about how large or practically important the effect is, which is why the effect size above matters. With only two conditions, the ANOVA F-test is equivalent to a paired t-test, as the sketch below shows.
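If you want to double-check the p-value, here is a hedged sketch using scipy and statsmodels. With two within-subject conditions the repeated measures ANOVA F-test equals the squared paired t-statistic, so both routes give the same p-value; the numbers are again invented placeholders.

```python
# Sketch: p-value for the weather effect via a paired t-test and, equivalently,
# a repeated measures ANOVA.  Data are hypothetical placeholders.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

cloudy = np.array([4.0, 5.0, 3.0, 6.0, 5.0])
sunny = np.array([7.0, 8.0, 6.0, 8.0, 7.0])

# Paired t-test (F = t^2 for two conditions)
t, p = stats.ttest_rel(sunny, cloudy)
print(f"paired t = {t:.2f}, p = {p:.4f}")

# Same test as a repeated measures ANOVA on long-format data
long = pd.DataFrame({
    "subject": list(range(5)) * 2,
    "weather": ["cloudy"] * 5 + ["sunny"] * 5,
    "mood": np.concatenate([cloudy, sunny]),
})
res = AnovaRM(long, depvar="mood", subject="subject", within=["weather"]).fit()
print(res.anova_table)
```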

3. Sample size: With only 5 subjects, the study has low statistical power and the estimate of the weather effect is imprecise, which also limits how far the results generalize. A larger sample would increase power, narrow the uncertainty around the effect, and strengthen confidence in the findings. A rough power calculation is sketched below.
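To make the power point concrete, here is a rough sketch using statsmodels. The paired design is treated as a one-sample t-test on the difference scores, and the assumed effect size (d_z = 0.8) and alpha are illustrative choices, not estimates from your study.

```python
# Sketch: approximate power for a paired design, treated as a one-sample
# t-test on difference scores.  Effect size and alpha are assumed values.
from statsmodels.stats.power import TTestPower

analysis = TTestPower()

# Power with only 5 subjects at an assumed effect size of d_z = 0.8
power_n5 = analysis.solve_power(effect_size=0.8, nobs=5, alpha=0.05)
print(f"power with n = 5: {power_n5:.2f}")

# Subjects needed to reach 80% power for the same assumed effect size
n_needed = analysis.solve_power(effect_size=0.8, power=0.80, alpha=0.05)
print(f"subjects needed for 80% power: {n_needed:.1f}")
```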

4. Potential confounding variables: In a repeated measures design every subject experiences both conditions, so the main worry is not assignment but order. Because all subjects rated their mood on the cloudy day first and the sunny day second, the weather effect is confounded with order, practice, and anything else that changed over time between the two ratings; this is the carryover/spillover problem you mentioned. Counterbalancing the order across subjects, or at least acknowledging the fixed order as a limitation, is the usual way to address this threat to validity.

5. Replication and reliability: Replicating the study using a similar design with a different sample can help assess the reliability of the findings. If other researchers obtain similar results with different participants, it strengthens the confidence in your findings.

When discussing how sure you can be of your results, acknowledge the limitations of the design, such as the very small sample and the fixed order of conditions. Also consider the practical implications of the effect you found: is the effect size large enough to be meaningful in real-world contexts? Together, these points give a comprehensive assessment of the strength of your results.