This chapter explains what power analysis is and why it is essential to the design of a study. In short, low-powered studies cannot be trusted. This raises the question: how often are studies insufficiently powered? As we discuss, at least within the biological sciences, it appears that most published studies are underpowered. We also discuss the fallacy of the argument, “I know my study was underpowered, but my results reveal a large effect size, so my results must be important!”

Power analysis typically aims to design studies with a high probability of obtaining p < 0.05 when an effect of a given size exists. Those who have read or watched the chapter “Abandon statistical significance” might wonder whether this approach (p < 0.05) remains appropriate. The answer is yes, albeit with a different interpretation: power analysis with the goal of achieving p < 0.05 is equivalent to designing a study to have a high probability of detecting “moderate” or “suggestive” evidence for an effect. Researchers can use a smaller cutoff p-value (e.g., p < 0.005) to design studies that can provide “strong or substantial evidence” for an effect.

Power analysis is usually conducted using available software (e.g., G*Power, or the ‘pwr’ library in R). These approaches are extremely useful, and we demonstrate some power analyses using G*Power in this and the following chapters. However, we also follow Colegrave & Ruxton’s (2021; see recommended reading) example of using simulations to conduct power analysis. While simulations require a bit of extra effort, they also allow you to do things that cannot be done with standard software. For example, when using standard software, power analysis for a 1-factor General Linear Model (and more complex models, too) is based on the p-value for the overall effect of a factor, not on the analyses of post-hoc tests (e.g., Tukey tests). Therefore, a power analysis based on standard software can easily lead a researcher to design a study that has high power to detect a factor’s overall effect but lower power for the post-hoc tests. This may be undesirable, and it can be remedied using simulations (a sketch illustrating this appears below). Alternatively, simulations allow a researcher to focus on a goal other than obtaining p < 0.05 when designing their study. For example, one can design a study with high power to estimate an effect size with a given level of precision. This approach has several advantages, discussed in this chapter. Finally, a researcher can use simulations to conduct power analysis for any type of analysis, whereas standard software works for a limited set of analyses. For example, the chapter ‘Mixed effects models’ provides resources to conduct power analysis (via simulations) for such models, while software like G*Power cannot perform power analysis for mixed effects models.

By the time you complete this chapter, we expect you to appreciate the following argument: an experiment will always yield useful results if it has high power to detect the smallest effect size that is of biological interest. As will become clear, such an experiment has a high probability of detecting an effect if it exists (and, in this case, the results should be easy to publish). On the other hand, it is also very interesting when such an experiment fails to detect an effect, because the researcher can argue that, if an effect exists, it is likely too small to be of biological interest. Hence, studies with high power to detect the smallest effect size of biological interest always yield interesting (and likely publishable) results, so long as the hypothesis is interesting and the experiment is otherwise well designed.
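To make the post-hoc point above concrete, here is a minimal R sketch of a simulation-based power analysis for a hypothetical 1-factor design with three groups, comparing power for the overall factor effect with power for the Tukey pairwise comparisons. The group means, within-group standard deviation, sample size, and number of simulations are made-up placeholder values, not numbers from the course materials; substitute estimates appropriate to your own study.

```r
## A minimal sketch (not from the course materials): simulation-based power for
## the overall effect of a 3-level factor versus its Tukey post-hoc comparisons.
## All numbers below are hypothetical placeholders.

set.seed(42)

n_per_group <- 20                    # candidate sample size per group
group_means <- c(10, 12, 14)         # hypothetical means for groups A, B, C
sd_within   <- 4                     # hypothetical within-group standard deviation
alpha       <- 0.05
n_sims      <- 2000

group <- factor(rep(c("A", "B", "C"), each = n_per_group))

overall_sig <- logical(n_sims)
tukey_sig   <- matrix(NA, nrow = n_sims, ncol = 3,
                      dimnames = list(NULL, c("B-A", "C-A", "C-B")))

for (i in seq_len(n_sims)) {
  # Simulate one data set under the assumed means and standard deviation
  y   <- rnorm(length(group), mean = rep(group_means, each = n_per_group), sd = sd_within)
  fit <- aov(y ~ group)

  # Significance of the overall factor effect (the quantity standard software targets)
  overall_sig[i] <- summary(fit)[[1]][["Pr(>F)"]][1] < alpha

  # Significance of each Tukey pairwise comparison
  tukey_sig[i, ] <- TukeyHSD(fit)$group[c("B-A", "C-A", "C-B"), "p adj"] < alpha
}

mean(overall_sig)    # estimated power for the overall factor effect
colMeans(tukey_sig)  # estimated power for each pairwise comparison
```

With placeholder values like these, the pairwise comparisons involving the smaller true differences will typically show noticeably lower power than the overall F-test, which is exactly the design trap described above; in practice, you would increase the sample size per group until the comparisons you actually care about reach acceptable power.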
Ceren Erdem on Power Analysis
In this video, Ceren Erdem, a PhD student at the University of Edinburgh, shares her experience with power analysis.
Document: Transcript - experimental data - ceren erdem (2.11 KB / TXT)

Power Analysis: Introduction
An introduction to power analysis: What is statistical power? What determines power? How can calculating statistical power help design experiments?
Document: Experimental data - What is power analysis (334 KB / PPT)
Document: Transcript - what is power analysis (15.93 KB / TXT)

Determining sd and effect size for power analysis
Before conducting a power analysis, we need to know i) the amount of expected inherent variation (measured as standard deviation) in our data and ii) a sensible effect size to consider. This video provides advice for determining both measures.
Document: Experimental data - finding sd effect (4.55 MB / PPT)
Document: Transcript - finding sd effect (29.39 KB / TXT)

How can we increase an experiment's statistical power?
Increasing sample size should be a last resort to increase statistical power. This video discusses other options to increase power.
Document: Experimental data - how to increase power (280 KB / PPT)
Document: Transcript - how to increase power (7.18 KB / TXT)

Why low power affects Type 1 error rates
This video explains "Why most published research is wrong".
Document: Experimental data - low power type 1 error (3.87 MB / PPTX)
Document: Transcript - low power type 1 error (20.64 KB / TXT)

Low Power and the Winner's Curse
Another problem with low power is that it leads to inflated effect sizes: we discuss and demonstrate this effect.
Document: Experimental data - inflated effect size (439.5 KB / PPT)
Document: Transcript - inflated effect size (18.52 KB / TXT)

Low Power leads to "fickle" p-values
Imagine you conduct an experiment and obtain a p-value. If you repeated the experiment, would you expect to obtain a similar p-value? And would the answer depend on power? We discuss.
Document: Low power and p values (872.5 KB / PPT)
Document: Transcript - low power and p values (11.27 KB / TXT)
Document: Halsey et al. (2015) (798.01 KB / PDF)

Power analysis for t-test
We demonstrate power analysis for a 2-sample t-test using G*Power.
Document: Experimental data - power by simulation t-test (1.04 MB / PPT)
Document: Transcript - power analysis t-test (6.17 KB / TXT)

Power Analysis for t-test via Simulations
Simulations can provide more flexibility than conventional software to conduct power analyses. We demonstrate this approach for a t-test.
Document: Experimental data - power by simulation t-test (1.04 MB / PPT)
Document: Transcript - power by simulation t-test (11.21 KB / TXT)
Document: Paper on Power Analysis by simulation: t-test by Dr Crispin Jordan (233.46 KB / PDF)

Power Analysis without p and 0.05: design experiments for precision
This video describes an alternative approach to power analysis: instead of using p < 0.05 as the criterion for a 'successful' experiment, we can design our experiment to estimate an effect size to a desired level of precision. This video shows you how, and we also discuss how this perspective can sometimes avoid the need to decide upon a desired effect size. A brief simulation sketch of this idea follows the documents below.
Document: Experimental data - power simulating with SE (268.5 KB / PPT)
Document: Transcript - power simulating with SE (23.63 KB / TXT)
Document: Paper on power analysis t-test precision (212.15 KB / PDF)
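As a companion to the video above, here is a minimal R sketch of the precision-based approach for a simple two-group comparison. The within-group standard deviation, candidate sample sizes, and target confidence-interval half-width are hypothetical placeholders rather than values from the course materials.

```r
## A minimal sketch (not from the course materials): choosing a sample size so
## that the difference between two group means is estimated to a desired
## precision, rather than targeting p < 0.05. All numbers are placeholders.

set.seed(1)

sd_within        <- 4                  # hypothetical within-group standard deviation
target_halfwidth <- 2                  # goal: 95% CI of +/- 2 units around the difference
n_candidates     <- c(10, 20, 40, 80)  # candidate sample sizes per group
n_sims           <- 2000

precision_by_n <- sapply(n_candidates, function(n) {
  halfwidths <- replicate(n_sims, {
    # The CI width for a t-test depends on the spread and sample size, not on the
    # true difference, so both groups can be simulated around the same mean.
    g1 <- rnorm(n, mean = 0, sd = sd_within)
    g2 <- rnorm(n, mean = 0, sd = sd_within)
    ci <- t.test(g1, g2)$conf.int
    diff(ci) / 2                        # half-width of the 95% CI for the difference
  })
  c(n_per_group      = n,
    mean_halfwidth   = mean(halfwidths),
    prop_within_goal = mean(halfwidths <= target_halfwidth))
})

t(precision_by_n)  # pick the smallest n whose precision meets the stated goal
```

Framed this way, the design question becomes "which sample size gives an acceptably narrow interval often enough?", which, as the video discusses, can sidestep the need to commit to a single target effect size.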
Practice problems and answers
This file provides practice implementing power analysis for t-tests using G*Power and includes a tutorial for G*Power.
Document: Experimental data chapter 11 tutorial (231.35 KB / PDF)
For extra practice, try to implement these power analyses using simulations as well! To do so, you can modify the code provided in the file downloadable with the above video (Power Analysis for t-test via Simulations).
Document: Experimental data chapter 11 Questions (14.33 KB / DOCX)
Document: Experimental data chapter 11 Answers (14.7 KB / DOCX)

Power analyses performed after having completed an experiment
Please note that we do not discuss "post-hoc" power analysis because we do not believe it is useful. Please see the article below for the reasons why:
Document: Experimental data - The abuse of Power (402.06 KB / PDF)

Recommended reading
Colegrave & Ruxton: Power Analysis: An Introduction for the Life Sciences. This invaluable book is written to be accessible to undergraduates; it explains what power analysis is and why it is essential, and it introduces the fundamentals of methods to conduct power analysis for virtually any type of analysis.
Document: Article in Nature Reviews Neuroscience (750.67 KB / PDF)
Document: Article in Nature Methods, Halsey et al. (2015) (879.12 KB / PDF)
Document: Article in PLoS Medicine, Ioannidis (2005) (249.64 KB / PDF)