Okay, over our series of videos, we've now considered a number of different scenarios for analyzing our data. We've considered scenarios where we want to know whether or not some factor influences some response that we've measured. We've also considered scenarios where we might have a continuous independent variable and we want to know how it influences our response variable. In this video, we're going to talk about how we can combine those approaches. So we're going to introduce the idea of combining both factors and continuous independent variables in a single general linear model. This type of approach was at one time called ANCOVA, or analysis of covariance. More recently, we just tend to describe these models as general linear models that have both factors and continuous independent variables. The purpose of this video is to give a general introduction to this approach to analyzing data, and to highlight the types of biological questions that we can address with this type of analysis. And we'll end with a very brief comparison of this approach with what we looked at earlier with multi-factor general linear models. So I think it's fair to say that at least some, if not potentially all, biological variables that we might be interested in are likely to be influenced by both factors and continuous independent variables. For example, we might expect blood pressure to be a function of both sex and body size. I've illustrated that here with the snow mouse, where I'm imagining that in this population females are larger than males. I don't know if that's actually true; I'm making this up. But we can also see that we have variation within females and within males as well. So we might be interested in studying how sex and variation in body size influence blood pressure in the snow mouse.
And the question is, can we analyze both of these types of variables simultaneously? And the answer is yes; as you might guess, otherwise we wouldn't be talking about this. These types of analyses can be viewed from multiple perspectives, and we're going to start by considering those different perspectives. The first perspective is that we can use these models to compare the slopes of the covariate between the levels of a factor. I've tried to illustrate that here, where we have some response variable plotted on the y-axis and some continuous independent variable along the x-axis, and I've plotted the data for two different levels of a factor. This might be females and males, two different genotypes, or any two categories of things that interest us, where one level of the factor is in red and the other is in blue. You can see that if we fit lines to these data, it's plausible that the lines might have different slopes. So one perspective we could take on an analysis that considers both the effect of the factor and the effect of the covariate is to ask: what is the relationship between our covariate and our response separately for each level of our factor? What is the relationship between x and the response for the red data, and what is it for the blue data? Following on from that, we can ask whether or not the slopes of these two lines differ from one another. In that context, we would say that we're testing for an interaction between our continuous independent variable and our factor. Our second perspective is that we could use these types of analyses to compare between two different levels of our factor: to compare the blue level with the red level.
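Before going on, here is a minimal numeric sketch of that first perspective: fitting a separate line for each level of a factor and testing the interaction by comparing a separate-slopes model against a common-slope model. This is simulated data in Python with numpy (my own illustration, not an example from the video; the true slopes 0.8 and 0.3 are values I chose):

```python
import numpy as np

# Simulated data: two levels of a factor (red/blue) and one covariate x.
rng = np.random.default_rng(0)
n = 50
x_red = rng.uniform(0, 10, n)
x_blue = rng.uniform(0, 10, n)
y_red = 2.0 + 0.8 * x_red + rng.normal(0, 1, n)    # true slope 0.8
y_blue = 5.0 + 0.3 * x_blue + rng.normal(0, 1, n)  # true slope 0.3

# Fit a separate line to each level of the factor.
slope_red, int_red = np.polyfit(x_red, y_red, 1)
slope_blue, int_blue = np.polyfit(x_blue, y_blue, 1)

# Test the interaction: does allowing separate slopes (full model)
# fit better than forcing a common slope (reduced model)?
y = np.concatenate([y_red, y_blue])
x = np.concatenate([x_red, x_blue])
g = np.concatenate([np.zeros(n), np.ones(n)])            # factor coded 0/1

X_full = np.column_stack([np.ones(2 * n), g, x, g * x])  # g*x = interaction
X_reduced = np.column_stack([np.ones(2 * n), g, x])

def rss(X, y):
    """Residual sum of squares from an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

rss_full, rss_reduced = rss(X_full, y), rss(X_reduced, y)
df_full = 2 * n - X_full.shape[1]
# F-statistic for the one extra (interaction) parameter in the full model.
F = (rss_reduced - rss_full) / (rss_full / df_full)
print(f"red slope {slope_red:.2f}, blue slope {slope_blue:.2f}, interaction F = {F:.1f}")
```

A large F here is the model-comparison version of "the slopes differ between the levels"; in practice you would get the same test, with a p-value, from any standard linear-model routine.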
So that could be females and males, two different genotypes, or individuals that experienced stress versus a benign environment, whatever. We might want to ask whether or not the average level of the response for the blue level is different from that for the red level after accounting for the effect of the covariate. We're going to consider this idea in more detail. To start, I want you to consider what the distribution of the data would look like if we did not take the covariate's effect into account. We can get a sense of that simply by sliding every single data point over to the right, which is what I've indicated with these arrows. For these data here on the right with the open circles, I want you to basically forget that they're lying on this x-axis. By plotting these data at this point on the x-axis, I'm not meaning to imply that they correspond to a specific value of x; I'm just pushing them to the side so you can get a different view of them. You can see I've taken this data point and pushed it over there, this data point over there, this blue data point over there, et cetera. This is what our data would look like if we were to take the original data and just plot them, say, in a box plot with the individual values on top of it. And you can see there's a lot of variation in these data. So this variation here is what we expect to find if we have not taken the effect of the covariate into account. With these types of analyses, however, we can take the effect of the covariate into account in order to get a more precise sense of the differences between the two levels of our factor.
We take the covariate's effect into account essentially by imagining that we take each data point and slide it along the line it corresponds to, keeping it the same distance from that line. Take this red data point here, for example. You see it's this distance from the red line, and I've just moved it along to this point on the x-axis, keeping it the same distance from the red line it corresponds to. I've done the same thing with this blue point, which happens to fall right on the blue line that we fit through the data, so it just slides along that line. We can do that for all the data points: we can take this data point and slide it down, and this data point and slide it along, and move them all to a common point on the x-axis, which is what I've done here. When we do this, we get a new distribution of the data within each level of the factor, one that has taken into account the variation in the x variable, our covariate. And you can see that these data now have much less variation in them than the original data. What this can do is increase our power, our ability to detect a difference between the two levels of our factor. Let's take another look at these. In this next slide, I'm just going to slide all these data points over to the right and spread them out a bit. Again, the fact that I've put them over here does not mean that these data points now correspond to this point on the x-axis; that's not what I'm trying to show. I'm just moving them to another position so that we can get a better view of them. And you can see that in this case, where we have taken the effect of the covariate into account, there's less variation in these data than we had without taking the covariate into account. So the point here is that if we model a covariate and a factor simultaneously, that can allow us
to compare the levels of the factor with one another in a way that accounts for the variation in the data due to the covariate. And that can make our analyses more powerful. So the biological question we can ask in this case is: do the levels of our factor differ after accounting for the variation due to the covariate? I want to point out, however, that this perspective of sliding the data points along does not make any sense if the slopes differ between the two levels of our factor. That's because if the slopes differ from one another, then the difference between the red and the blue levels is going to depend on which value of x we slide them to. If we slid them to this point on the x-axis, like I've started to demonstrate here, then we might get some intermediate difference between the red and the blue. If we slid them down to this point, then because the lines cross here, we would expect on average that there would be no difference between the blue and the red. If instead we slid all the points to this end point here, we would expect to find a bigger difference between the blue and the red. So I just want to point out that this perspective of sliding the data points and looking for an effect of a factor, after you've controlled for the effect of a covariate, only makes sense if the slopes are the same between the levels of your factor, however many levels you happen to have. That perspective does not make sense if the slopes differ from one another. So what kinds of questions can we ask? This is just a summary of things I've already mentioned. First, we can ask: what is the relationship between a covariate and the dependent variable? And then we can ask: does this relationship depend on the levels of the particular factor we're considering?
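The "sliding points along the line" idea is just covariate adjustment under a common slope: fit a common-slope model, then move every point to the same x while keeping its own vertical distance from its group's line. Here is a minimal sketch with simulated data in Python and numpy (my own illustration; the group difference of 2.0 and slope of 0.9 are values I chose), showing how the within-level spread shrinks once the covariate is accounted for:

```python
import numpy as np

# Simulated data: same slope in both groups, different intercepts.
rng = np.random.default_rng(1)
n = 60
x = rng.uniform(0, 10, n)
g = rng.integers(0, 2, n)                      # factor: 0 = red, 1 = blue
y = 1.0 + 2.0 * g + 0.9 * x + rng.normal(0, 0.5, n)

# Within-group spread ignoring the covariate (the "box plot" view).
raw_sd = np.sqrt(np.mean([y[g == k].var(ddof=1) for k in (0, 1)]))

# Fit a common-slope model, then slide each point along its group's line
# to x = mean(x), keeping its residual (its distance from the line).
X = np.column_stack([np.ones(n), g, x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b_g, b_x = beta
resid = y - X @ beta
y_adj = b0 + b_g * g + b_x * x.mean() + resid  # adjusted values at common x

adj_sd = np.sqrt(np.mean([y_adj[g == k].var(ddof=1) for k in (0, 1)]))
print(f"within-group SD: raw {raw_sd:.2f} vs covariate-adjusted {adj_sd:.2f}")
```

The adjusted spread is essentially the residual noise around the lines, which is why the comparison between the levels becomes more powerful; and as the video stresses, this adjustment is only meaningful when the common-slope assumption holds.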
So for example, are the slopes similar among the levels of a factor? We can test that by looking for an interaction between the covariate and the factor. We might assess whether the slopes seem similar by plotting our data, which is always something we should do, and also potentially by looking at a p-value. If we have evidence that the slopes are similar, then we can compare between the levels of a factor while controlling for variation due to the covariate. To wrap up, I want to quickly compare this perspective with what we've seen previously when looking at models with more than one factor in a general linear model. The analyses we can conduct from this perspective, where we have both factors and continuous independent variables, are conducted in pretty much exactly the same way as the analyses we conducted before with one or more factors. They're very, very similar. I'm not going to say they're identical, but they're very similar. The assumptions are identical except for one addition: when we're conducting an analysis like the one we're describing in this video, we need to make sure that the range of our covariate sufficiently overlaps between the levels of our factor. In other words, the range of our x values has to overlap sufficiently between the levels of our factor. If our covariates do not overlap at all between the various levels of our factor, then we cannot conduct this kind of analysis. We can look at interactions between a factor and a covariate, just like we were able to look at interactions between factors. If we find an interaction between a factor and a covariate, this implies that the relationship, or the slope, of the covariate differs among the levels of the factor we're looking at. And the converse is just as true.
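That overlap assumption can be checked directly from the data before fitting anything. Here is a rough diagnostic sketch in Python with numpy; `covariate_overlap` is a hypothetical helper I've written for illustration, not a standard function:

```python
import numpy as np

def covariate_overlap(x, groups):
    """Fraction of the total covariate range shared by all factor levels.

    A rough diagnostic (an assumed helper, not a standard function): if the
    shared range is empty, comparing factor levels "after accounting for
    the covariate" would rely entirely on extrapolation.
    """
    levels = np.unique(groups)
    lo = max(x[groups == k].min() for k in levels)  # highest group minimum
    hi = min(x[groups == k].max() for k in levels)  # lowest group maximum
    if hi <= lo:
        return 0.0                                  # ranges do not overlap
    return (hi - lo) / (x.max() - x.min())

rng = np.random.default_rng(2)
g = np.repeat([0, 1], 30)
# Two groups whose covariate ranges overlap substantially:
x_ok = np.concatenate([rng.uniform(0, 10, 30), rng.uniform(2, 12, 30)])
# Two groups whose covariate ranges do not overlap at all:
x_bad = np.concatenate([rng.uniform(0, 4, 30), rng.uniform(8, 12, 30)])

print(covariate_overlap(x_ok, g))   # substantial shared range
print(covariate_overlap(x_bad, g))  # 0.0: this analysis is not appropriate
```

There is no universal cutoff for "sufficient" overlap; plotting the covariate by factor level, as the video recommends for everything else, is the most honest check.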
We can also say that if we have an interaction, that means the difference between the levels of a factor will depend on the value of the covariate we're considering. And we saw that a slide or two earlier. Finally, very much like the situation we looked at in previous videos with models with more than one factor: if we have a significant interaction, in this case between a covariate and a factor, then we do not try to interpret the main effects of the model. Instead, we focus on understanding the interaction. That is exactly what we saw previously with our models with multiple factors. And we'll end this video there. I hope this video has been helpful, and thank you very much.