Pseudoreplication is a common and serious problem. It occurs when an analysis treats a dataset as if it contains more independent observations than it actually does. When this happens, p-values and standard errors become smaller than they should be, and a researcher is more likely to conclude that their data demonstrate a convincing effect when, in reality, there is no effect at all.
Some readers might pause at this point and ask themselves, “But isn’t the sample size simply equal to the number of measurements that were made?” The answer is, “Sometimes yes, sometimes no.” The distinction lies in the concept of statistical independence, which we address in this chapter.
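To make the claim above concrete, here is a minimal simulation sketch (the scenario and all parameter values are illustrative assumptions, not from this chapter). Two groups of subjects are measured repeatedly, with no true group difference; repeated measurements on the same subject share that subject's baseline, so they are not independent. Treating every measurement as an independent data point inflates the false-positive rate well above the nominal 5%, while analysing one summary value per subject does not.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def false_positive_rate(pseudoreplicate, n_sims=2000, n_subjects=4,
                        n_reps=10, subject_sd=1.0, noise_sd=0.5):
    """Fraction of simulations with p < 0.05 when there is NO true group effect."""
    hits = 0
    for _ in range(n_sims):
        # Each subject gets its own random baseline, shared by all of its
        # repeated measurements -- those repeats are therefore not independent.
        a_base = rng.normal(0, subject_sd, n_subjects)
        b_base = rng.normal(0, subject_sd, n_subjects)
        a = a_base[:, None] + rng.normal(0, noise_sd, (n_subjects, n_reps))
        b = b_base[:, None] + rng.normal(0, noise_sd, (n_subjects, n_reps))
        if pseudoreplicate:
            # Wrong: treat all n_subjects * n_reps measurements as independent
            p = stats.ttest_ind(a.ravel(), b.ravel()).pvalue
        else:
            # Defensible: analyse one summary value (the mean) per subject
            p = stats.ttest_ind(a.mean(axis=1), b.mean(axis=1)).pvalue
        hits += p < 0.05
    return hits / n_sims

print(f"pseudoreplicated analysis:  {false_positive_rate(True):.2f}")
print(f"one value per subject:      {false_positive_rate(False):.2f}")
```

The pseudoreplicated analysis rejects the (true) null far more often than 5% of the time, while the per-subject analysis stays near the nominal rate.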
I remember, way back in 1992 during my undergraduate degree, our professor for experimental design and analysis introducing us to pseudoreplication. They cited a classic paper published in 1984, which highlighted the issue of pseudoreplication in ecology studies, as their motivation for teaching the topic. My (anecdotal) experience is that disciplines within biology teach pseudoreplication to varying degrees. That said, the concept of, and issues surrounding, pseudoreplication are indispensable for all sub-disciplines of experimental biology.
What is pseudoreplication, generally? How does it impact research? How commonly does it occur in biology?
**No slides or transcript**
Two perspectives on independence
An examination of experimental designs: we discuss why features of a design may be prone to introducing pseudoreplication, how pseudoreplication can influence analyses and the interpretation of results, and how pseudoreplication is related to confounded variables
What can lead to non-independence in experiments? We discuss several common causes.
We examine a variety of published results and search for evidence of pseudoreplication
Ruxton & Colegrave: Experimental Design for the Life Sciences. Chapter 5.