Chapter 6. Measuring an average with uncertainty

We cannot measure anything perfectly: our measurements always include some degree of uncertainty. This chapter explains how we can describe this uncertainty when reporting and interpreting results. Specifically, we introduce the ideas of ‘standard error’ and ‘confidence interval’.

These concepts apply widely.  Most obviously, you may be familiar with figures that present mean values with error bars (usually the standard error of the mean, or a 95% confidence interval).  However, these concepts also extend to statistical tests.
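To make the two error-bar quantities concrete, here is a minimal sketch of how a mean, its standard error, and a 95% confidence interval could be computed. The helper function `mean_sem_ci` and the sample data are hypothetical illustrations, not part of the chapter; the interval uses the normal approximation, whereas for small samples a t-distribution critical value would be more appropriate.

```python
import math
import statistics

def mean_sem_ci(data, confidence=0.95):
    """Return (mean, SEM, (lo, hi)) for a sample.

    Illustrative sketch: uses the normal approximation for the
    critical value; a t critical value suits small samples better.
    """
    n = len(data)
    m = statistics.mean(data)
    # Standard error of the mean = sample SD / sqrt(n)
    sem = statistics.stdev(data) / math.sqrt(n)
    # ~1.96 for a 95% confidence level
    z = statistics.NormalDist().inv_cdf(0.5 + confidence / 2)
    half_width = z * sem
    return m, sem, (m - half_width, m + half_width)

# Hypothetical sample: ten repeated measurements of some quantity
sample = [9.8, 10.1, 10.3, 9.9, 10.0, 10.2, 9.7, 10.4, 10.0, 9.6]
m, sem, (lo, hi) = mean_sem_ci(sample)
print(f"mean = {m:.2f}, SEM = {sem:.3f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

The interval reported here is the kind of range an error bar on a figure would depict: a band around the sample mean whose width reflects our uncertainty about the true average.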

For example, imagine we wished to test the null hypothesis that the average value of subjects in group ‘A’ equals the average value of subjects in group ‘B’ (e.g., using a t-test; see the chapter ‘Comparing averages’).  In this example, we can think of the analysis as a way of estimating the magnitude of the difference between the averages of the two groups while accounting for uncertainty:  the less uncertainty we have in our estimate of this difference, the greater our ability to judge whether the means of the two groups differ.
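The same idea can be sketched in code: estimate the difference between two group means together with a confidence interval for that difference. The function `diff_ci` and the two groups below are hypothetical examples, and the interval again uses the normal approximation rather than the t-distribution a full t-test would use.

```python
import math
import statistics

def diff_ci(a, b, confidence=0.95):
    """Estimate mean(a) - mean(b) with a confidence interval.

    Illustrative sketch using the normal approximation; a proper
    two-sample t-test would use a t critical value instead.
    """
    diff = statistics.mean(a) - statistics.mean(b)
    # Standard error of the difference between two independent means
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    z = statistics.NormalDist().inv_cdf(0.5 + confidence / 2)
    return diff, (diff - z * se, diff + z * se)

# Hypothetical measurements for two groups
group_a = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3]
group_b = [11.2, 11.5, 11.0, 11.4, 11.1, 11.3]
diff, (lo, hi) = diff_ci(group_a, group_b)
# An interval that excludes zero suggests the data are hard to
# reconcile with the null hypothesis of equal group means.
print(f"difference = {diff:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

The narrower this interval, the less uncertainty we have about the size of the difference, and the easier it is to judge whether the group means really differ.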

In future chapters we’ll express and interpret this idea as ‘effect size’, and continue to use the concepts of ‘standard error’ and ‘confidence interval’ introduced in the present chapter.

The main point is that measuring things in a way that accounts for uncertainty lies at the heart of both data presentation and analysis.  This chapter introduces standard error and 95% confidence intervals as fundamental concepts in this light.

The attached PowerPoint presentation provides questions that review basic concepts from this chapter.  Note that the questions sometimes have more than one correct answer, and sometimes all the options are incorrect!  The point of these questions is to get you thinking and to reinforce basic concepts from the videos.  You can find the answers to the questions in the ‘notes’ section beneath each slide.

Please view file ‘Quiz_MeasuringWithUncertainty’