Okay, in this video, I'm going to walk through how to conduct a power analysis for a one-factor general linear model where we have more than two treatments. Before the power analysis itself, I want to show you something useful that can help you plan for it. To conduct a power analysis, we need to have a sense of the value of our effect sizes; we're going to make assumptions about those as we go. But we also need information about our expected within-group variation, or residual variance, so I want to show you how you can obtain that value if you have some pilot data. Everything in this video is imaginary: I'm going to use a dataset that's already in R as a toy example of how to get this estimate of the residual variance. So the code I'm about to show you here has nothing to do with the code for actually conducting the power analysis later on.

Let's start by obtaining our data. Again, this is just a convenient dataset that's useful as a toy example of how to get the residual variance. Here's the top of our dataset: we have two variables, weight and group. This dataset, PlantGrowth, includes three separate treatments, which is similar to the situation we're going to imagine in our power analysis later on; that's one of the reasons I chose it for this demonstration. What we'd like to do is obtain an overall estimate of the variance, or the standard deviation, of the residuals within our treatments. The way we can get that is by fitting these data with the model we would use to analyze them. So let's imagine we're using these data as a pilot experiment. I've called the output of our analysis pilot.lm, using the lm function. We first specify our dependent variable, which is weight, then our independent variable, which is group, and then we tell R where to find these data, and we can run that. We're not going to go through the whole process of analyzing the data, because that's not really the point; the goal here is specifically to show you how to get the estimate of the residual variation. If you run summary on our output, you'll see the term "residual standard error", with a value of 0.6234. This is an unfortunate term, because this value of 0.6234 is not actually a standard error: it's a measure of the amount of variation in our residuals, expressed as a standard deviation. And this is exactly the value we would use in simulations to conduct a power analysis. So that's where you can obtain the number we need as a description of the variation in our residuals, the within-group variation. If you want, there's also a function called sigma that will pull this value out directly, and it gives it to us with more significant digits. So that's the first thing I want to show you: if you had some pilot data and you wanted to use those pilot data to estimate the within-group variation, this is how you would do it.
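In R, those steps might look like the following minimal sketch, using the built-in PlantGrowth dataset; the name pilot.lm follows the naming used in the video.

```r
# Load the built-in PlantGrowth data: 30 plants in three groups
# (ctrl, trt1, trt2), with 'weight' as the measured variable
data(PlantGrowth)
head(PlantGrowth)

# Fit the same model we would use to analyse the pilot data
pilot.lm <- lm(weight ~ group, data = PlantGrowth)

# The "Residual standard error" in this summary (0.6234) is really the
# residual standard deviation: the within-group variation we need
summary(pilot.lm)

# sigma() pulls that value out directly, with more significant digits
sigma(pilot.lm)
```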
Now, with this estimate of the within-group variation in hand, we're going to start setting up our simulations. The first thing we'll do is make note of this value: I'm just going to copy it, and we're going to make an object called sd.within and assign it that value. This object is where we're storing the information about the within-group variation as a standard deviation.

Now we're going to start actually conducting our simulations. Before I do that, I just want to say that much of what we're going to do in this video follows the same procedure we used for a simulation approach to a power analysis for t-tests. Because I've already made that other video showing a similar approach, I'm not going to go into the full explanation of all the code in the same way, since that would be redundant. Instead, I'm going to focus on explaining the issues that will be new to you, just to make this video a little more streamlined.

So we need to create a dataset. Let's imagine that our experiment has three groups. We'll call one group the control, and we want to decide on the mean of the control group. Let's say the mean is 10; that is, this is what we expect the mean value to be based on prior information. Then let's say we have two other treatments. For treatment 1, we want to specify its mean value. Let's imagine that we expect there to be an effect size of half a unit, and that for this treatment the effect makes the mean smaller, so we'll say 9.5. Then let's say we have a second treatment where the mean is going to be half a unit higher than the control, 10.5. So both of these means, for treatment 1 and treatment 2, are half a unit different from the mean of the control, but one's lower and one's higher. This is all the information we need to describe the data: we have our mean values and we have a standard deviation, and that's all we need in order to pull random numbers out of a normal distribution. However, if we're going to pull numbers out of a normal distribution, we need to know how many numbers to pull out. Let's use a familiar term and call this value n per group, our sample size within each group, and let's make it 10.

Now let's make a vector that will contain the labels for our different treatments; again, I've explained how to do this in the previous video on t-tests. We're going to use the rep function (with a lowercase c for the combine function; I originally typed a capital C, and I also forgot to define n per group at first, which is why my first attempt didn't run). We repeat the name of the first group, "control", for as many times as our sample size for that treatment group, which is our n per group, so "control" gets repeated 10 times. Then we do the same for the other two treatment labels. If we look at treatments, we get 30 entries, ten for each of our treatments.
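A sketch of that setup in R; the object names sd.within, n.per.group, and the group means are my rendering of the names described above:

```r
# Within-group standard deviation, copied from the pilot analysis
sd.within <- 0.6233746

# Expected group means: the two treatments sit half a unit below and
# above the control
control.mean <- 10
trt1.mean    <- 9.5
trt2.mean    <- 10.5

# Sample size within each group
n.per.group <- 10

# Labels for the treatments: "control" repeated 10 times, then "trt1",
# then "trt2", giving 30 entries in total
treatments <- c(rep("control", n.per.group),
                rep("trt1",    n.per.group),
                rep("trt2",    n.per.group))
treatments
```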
So that's part of our dataset. We now want to generate the data for each of these groups. Let's keep it generic and call our output y.var, for "y variable". I'm going to create a vector, and we're going to use the rnorm function to pull random numbers out of a normal distribution, since we're imagining our data are normally distributed. We need to say how many data points to pull out: we want n per group data points in each of our groups. Let's pull the data for the control group first, because that's what we listed first in our vector of treatments. So we want ten data points that come from a normal distribution with the mean of the control group and the standard deviation that's common to all of our groups.

When I first ran this, it complained because I hadn't yet defined the control mean and the within-group standard deviation, so I had to give it those values. I should admit I'm doing something fairly poor practice with all this copying and pasting: it can lead you into trouble, because it can cause you to not read everything you've written, and I made exactly that kind of mistake just now. But there's an example of the 10 random numbers we get for the control group. We then repeat that, being very careful with our pasting, specifying the other mean values. All I've done here is use exactly the same code, just changing the mean value for treatment 1 and for treatment 2. Notice that when we set up our treatments vector, we specified the control first, then treatment 1, then treatment 2. To be consistent, we're generating 10 data points from the control group first, then treatment 1, then treatment 2, so these two vectors are consistent with one another. Let's run this to make sure we haven't done anything silly, and there we go: we have our 30 data points.

Now we're going to create a data frame, which we'll call sim.data, just by listing treatments and y.var. Let's look at that: that's what our first randomly simulated dataset looks like. Now, as part of our simulations, we want to analyze these data, so we're going to use the lm function, and I'll call its output sim.lm. When we run lm, we first specify the dependent variable, y.var, then a tilde, then the independent variable, treatments, and then data = sim.data to tell R where to find these data, since we put them in this data frame. Strictly speaking, we didn't actually have to create this data frame: we could have left it out, and R would have known the data in these vectors already.
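Here's what that simulation and model fit might look like, assuming the objects defined in the setup sketch above:

```r
# Draw n.per.group values per group from normal distributions sharing
# sd.within; the order (control, trt1, trt2) matches 'treatments'
y.var <- c(rnorm(n.per.group, mean = control.mean, sd = sd.within),
           rnorm(n.per.group, mean = trt1.mean,    sd = sd.within),
           rnorm(n.per.group, mean = trt2.mean,    sd = sd.within))

# Bundle the labels and simulated responses into a data frame, then
# analyse this one simulated dataset as we would analyse real data
sim.data <- data.frame(treatments, y.var)
sim.lm   <- lm(y.var ~ treatments, data = sim.data)
```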
We could have left out that line of code, because if we just gave R the formula and didn't say data = sim.data, it would still know what data were in these columns. So you might ask: well then, why did you make this data frame? To be honest, I just did it to be consistent with how you're likely used to running linear models, to match your previous experience.

Okay, so that's how we analyze our data. Now we want to obtain a p-value. We're not going to do anything about checking the assumptions, because we've created the data in a way that will meet them: we've drawn the data from a normal distribution, we know the data are randomly sampled, we know they're independent, and we know the variance within the groups is equal. There's no point in checking the assumptions when we've created the data to meet them. So all we're really interested in at this point is obtaining the p-value, and we can obtain it by providing the output from our linear model to the anova function.

Let's run that. You can see we have lots of output here, but we only want one bit of it. How can we get it? We can go into the guts of this output using square brackets, just like we use square brackets to go into the guts of a data frame. We're going to look at the first row of this output, so we say 1 and then a comma; remember, the number before the comma refers to rows. If we just look at the first row of output, you can see we get only that row, with five bits of output in it. Let's see what happens if we put 1 after the comma as well: looking in this row at the first column, we get a 2, the degrees of freedom, because that's the information in the first column. If we want the p-value, we count across to the fifth column, so we specify 5 after the comma, and there we go: you can see this value matches the p-value in the full output.

Now we want to save this p-value, so we'll create a new object, which we'll call p. We're going to go through a slightly different process for counting the number of times we get small p-values in this video compared to the t-test video. In the t-test video, we used a counter to count the number of times a p-value was less than 0.05. We could do that here too, but for the sake of variety I'm going to show you an alternative method. We're going to create a vector where we store all of our p-values from this output; we'll call it anova.vector.p. I'll start by creating an empty vector; if we look at its contents, there's nothing in it. We then want to take this empty vector and add the p-value to it, and the way we can do that is with the append function, which allows R to take one thing and stick something else on the end of it. I'll write this and then explain it.
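Sketched in code, that extraction and storage step might look like this, using the anova.vector.p name described above:

```r
# Row 1 of the ANOVA table is the 'treatments' line; column 5 is Pr(>F)
anova(sim.lm)
p <- anova(sim.lm)[1, 5]

# Start with an empty vector, then stick each new p-value on the end
anova.vector.p <- vector()
anova.vector.p <- append(anova.vector.p, p)
anova.vector.p
```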
So what we're doing here is taking our original vector, which is called anova.vector.p, and adding onto it the p-value that we just extracted from our output, and then saving the result back under the original name anova.vector.p. What this code essentially allows us to do is take our original vector and tack on the new p-value, adding an additional value to the vector. If you run this (after making sure R knows what p is) and look at the contents of anova.vector.p, you can see we've now added our p-value. Excellent. This part of our analysis allows us to keep track of all the p-values from our overall linear model analysis. If you were to conduct a power analysis using some other software, for example G*Power, this is the p-value that G*Power would be focusing its power analysis on.

We're going to do something extra. Often when we analyze data from a general linear model, we're not only interested in this overall p-value; we're also interested in evidence for differences among specific groups, and we obtain those estimates using specific forms of contrasts, that is, specific ways of comparing values among our various groups. There are lots of ways of doing that. For example, in the chick weight data, we used pairwise comparisons, where we compared the mean of one group to the means of all the other groups, for all possible comparisons. That's a common approach, but there are many others. In this video, I'm just going to show you how we can take this power analysis further by focusing on two specific comparisons between our groups. Remember we had a control group, a treatment 1 group, and a treatment 2 group. Let's imagine that we're specifically interested in comparing each of our two treatment groups to the control group. Presumably, we would only do that if we had good evidence that there was an effect at the level of the whole model. So we're going to write some code that only performs those pairwise comparisons if the overall p-value is sufficiently small.

So we're going to include an if statement: if p is less than 0.05. Now, I have other videos explaining why it's not wise to interpret data purely in terms of statistical significance, so why am I doing this? My interpretation is that if we have a p-value that gives us at least moderate evidence for an effect, then we go on to do these pairwise comparisons. We don't have to do this: we could leave the if statement out and do the pairwise comparisons even if the overall p-value ends up being large, and that's totally legitimate. In fact, the power that we obtain at the very end of this exercise really doesn't take this part into account. I know that sounds vague; I'll try to remember to clarify what I just said at the end of the video. For now, let's just recognize that we're only going to perform these pairwise comparisons if the p-value from our overall linear model is sufficiently small. If it is, then we want to make comparisons among our various groups, and we're going to use a function called pairwise.t.test.
And to do this, we just list the data from our two vectors. First we give our dependent variable, y.var, and then we give our independent variable, treatments. Lastly, we can specify whether or not we want the p-values adjusted in some way, and I'm going to say "none".

Let's stop and talk about this for a moment, because this option opens up a whole can of worms that I don't really want to get into in this video, though I explain what it means in other videos. For example, in our analysis of the chick weight data, you might remember that when we performed post-hoc tests on our overall general linear model, we used a Tukey's test. What the Tukey's test does is perform comparisons among the various means of our factor, but it adjusts the p-values in such a way as to maintain an overall probability of a Type I error of 5%. Here, I'm telling R not to adjust the p-values. I was about to say I'm doing that for simplicity, but I take that back: really I'm doing it to streamline the video as much as I can. There are a number of different methods you could choose for adjusting the p-values, and I don't want to go down that route in this video, because that's not really its point. So to circumvent that bigger conversation, I'm just saying that we're not going to worry about adjusting the p-values.

I will also say, though, that whether or not to adjust p-values at all is a wide area of discussion within statistics. There are different philosophies on whether we should adjust our p-values for multiple comparisons: some people say yes, some say no, depending on different perspectives on the process of analyzing data. And if someone believes you should adjust p-values, that doesn't necessarily hold for all possible experiments. For example, whether it's important to adjust p-values can depend on whether the comparisons and contrasts you're making are a priori or not; in other words, whether these are comparisons you specifically planned in advance. It can also depend on whether the comparisons are what's called orthogonal (a word I always struggle to spell). That's another topic we're not going to go into; I'm just giving you a couple of buzzwords. Whether the comparisons we're making are a priori, and whether they're orthogonal, also plays into whether we'd want to adjust our p-values. The point I'm trying to make is that there are many other considerations behind this decision, and we're not going to talk about them in this video. So we're just going to circumvent all that discussion by saying p.adjust.method = "none" and noting that this is something we could talk about at other times. At this point, I'm going to leave it up to you to decide whether or not you want to adjust your p-values and how you would do it. So let's just run this and see what we get.
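Putting the if statement and the pairwise test together, a sketch might look like this; the name t.out matches the output object introduced just below:

```r
# Only run the pairwise comparisons when the overall model gives at
# least moderate evidence for an effect
if (p < 0.05) {
  # Unadjusted pairwise t-tests among the three groups; whether and how
  # to adjust these p-values is the separate decision discussed above
  t.out <- pairwise.t.test(y.var, treatments, p.adjust.method = "none")
}
```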
So what we get here is a series of p-values for all the different combinations. This first p-value, 0.9029 in this run, is for the comparison between treatment 1 and the control; the next is for the comparison of treatment 2 versus the control; and the last is for the comparison between treatment 2 and treatment 1. A little while ago, before my digression about p-value adjustment, I said we were going to imagine a situation where we're specifically focused on two comparisons: treatment 1 versus the control and treatment 2 versus the control. So we want a way to pull out those two p-values and not the third.

How can we do that? Well, let's save this output in something we'll call t.out, and run that. Then, if you use a dollar sign on t.out, you can pull the p-values out of this output: there are our three p-values. We can then go into the guts of this using the square brackets again. If we just say [1], we get the p-value for the comparison between treatment 1 and the control group. So let's use this code and save that value in an object we're calling p1vc, which represents the p-value for treatment 1 versus the control. Then we're going to save this p-value in a vector, like we did above for the overall p-values for the whole general linear model. (I've put the "p" at the start of the name rather than the number, because I can't remember whether R objects to names that begin with a number, and rather than waste a moment on a possible mistake, I'll play it safe.) So we've got a vector where we're going to save all of our p-values for the comparison of treatment 1 versus the control, and while we're at it, we'll create another vector for treatment 2 versus the control.

Now we append this p-value, just like we did above. Run that, and you see we get our p-value in our vector. That's our first p-value. For our second, we're looking to store the p-value of 0.001, because that was the p-value for the comparison between treatment 2 and the control. So let's just change the value in the square brackets, where again we're pulling the p-value out of the output from our pairwise t-test. If we pull out the second value, we get the p-value that we want. You can guess that if we pull out the third one, we get nothing, because that's the empty cell, and the fourth value is 0.002. So going back up to the output, these p-values are indexed 1, 2, 3, 4, and we want p-values 1 and 2. Now I'm being very careful here as I copy this code, double-checking that I've changed everything so I pull out the appropriate p-value. So we've got that saved in p2vc, and the same append step adds it to the vector p2vc.vector. That all looks good: if we run that code and look at p2vc.vector, there we go. And that, everyone, is essentially how we obtain our p-values for one iteration of our simulations. What we want now is to automate these simulations to run them many times.
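A sketch of pulling out and storing the two p-values of interest; the names p1vc and p2vc follow the video, and the vector indexing matches the layout of the p-value matrix described above:

```r
# $p.value is a small matrix; indexed as a vector, [1] is trt1 vs
# control, [2] is trt2 vs control, [3] is the empty cell (NA), and
# [4] is trt2 vs trt1
t.out$p.value

p1vc <- t.out$p.value[1]   # treatment 1 versus control
p2vc <- t.out$p.value[2]   # treatment 2 versus control

# One running vector per comparison, appended to exactly as above
p1vc.vector <- vector()
p2vc.vector <- vector()
p1vc.vector <- append(p1vc.vector, p1vc)
p2vc.vector <- append(p2vc.vector, p2vc)
```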
So to do that, we're just going to take this code and put it into a for loop, just like we did for the t-test. I'm going to say for(i in 1:n.sims). I haven't created n.sims yet; let's say we're going to do this 10,000 times. We want to run this code 10,000 times, so I'll take all of it, cut it out, and paste it into the loop. (On my first attempt I forgot the "i in" part of the for statement, which was a pretty basic mistake; let's try again.) While that's running, let's start writing the code to calculate our power.

We'll start with calculating power based on the overall p-value from the whole linear model; we stored those p-values in anova.vector.p. And the loop is done. What we want to do is count the number of times that our overall p-values were less than 0.05. Again, I want to remind you that comparing things strictly to 0.05 is not always wise. I'm doing this for two reasons. One is just to stick with the general convention for power analyses, so that I'm teaching you to run the analysis the same way traditional power analyses are run. But also, we can interpret this as asking how often we get evidence for an effect that is at least moderate in strength.

First, we ask how many times the p-value was less than 0.05, using the function which. This determines which entries in the vector are less than 0.05. Then we count those instances using the length function, and we'll call the result overall.small.p. This gives us the overall number of p-values that were less than 0.05 for our main p-value, the output of our linear model; that is, we're counting the number of times the p-values in our simulations were less than 0.05. Let's run that and look at the value: overall.small.p is 8,678. To calculate our power, we take this number and divide it by the number of simulations we ran, which gives a power of about 86.78%.

So, to recap: we counted the number of times we got a p-value less than 0.05 from our overall general linear model and divided that by the number of simulations we ran. That gave us our overall power to detect an effect, or to obtain at least moderate evidence for an effect, based on the overall p-value from the general linear model. Now, we said we want to go beyond that: we also want to determine our power for the specific comparisons, comparing the mean of treatment 1 versus the control and the mean of treatment 2 versus the control. We'll go through the same process for these. I'm just going to write comparable code, calling the counts overall.1vc and overall.2vc. These values will contain the number of times that we got a small p-value; sorry, I hadn't quite finished writing this code.
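Assembled into one place, the full loop plus the overall power calculation might look like this; it's a consolidated sketch of the pieces shown above, using the same assumed object names:

```r
n.sims <- 10000

# Empty vectors to collect the p-values across all simulations
anova.vector.p <- vector()
p1vc.vector    <- vector()
p2vc.vector    <- vector()

for (i in 1:n.sims) {
  # Simulate one dataset and analyse it
  y.var <- c(rnorm(n.per.group, mean = control.mean, sd = sd.within),
             rnorm(n.per.group, mean = trt1.mean,    sd = sd.within),
             rnorm(n.per.group, mean = trt2.mean,    sd = sd.within))
  sim.data <- data.frame(treatments, y.var)
  sim.lm   <- lm(y.var ~ treatments, data = sim.data)

  # Store the overall p-value from the linear model
  p <- anova(sim.lm)[1, 5]
  anova.vector.p <- append(anova.vector.p, p)

  # Pairwise comparisons only when the overall evidence is moderate
  if (p < 0.05) {
    t.out <- pairwise.t.test(y.var, treatments, p.adjust.method = "none")
    p1vc.vector <- append(p1vc.vector, t.out$p.value[1])
    p2vc.vector <- append(p2vc.vector, t.out$p.value[2])
  }
}

# Count overall p-values below 0.05 and convert the count to a power
overall.small.p <- length(which(anova.vector.p < 0.05))
overall.small.p / n.sims   # about 0.87 in the run shown in the video
```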
So what I'm doing here is taking the whole vector of p-values from every comparison we made between treatment 1 and the control, comparing each of them to 0.05, and asking how many of those stored p-values are less than 0.05. We store that count in this object, and then we do the same thing for the comparison between treatment 2 and the control. Now we can do the same thing as we did above and divide each count by n.sims.

Now, I said earlier that we didn't actually need to include the if statement checking whether the overall p-value is less than 0.05. Why is that? It's because down here, when we calculate our power, we are dividing the number of small p-values from the pairwise analysis by the overall number of simulations. We are not dividing by the number of times we got a small p-value from our overall general linear model; we're taking the number of small p-values from our pairwise comparisons and comparing that to the total number of simulations we created. By doing that, we're effectively ignoring the fact that we only made these comparisons when the p-value for the overall general linear model was small: the denominator does not account for that, because it includes all of the simulations, not only those where the overall model gave a small p-value. That's why we didn't need the if statement.

Alright, let's look at these results, because this is interesting, and I think it's a good point to end on. We now have three power measurements. The first gives us our power for getting moderate evidence for an effect at the level of the whole general linear model: when we pull out the p-value from the entire model, about 87% of our simulations gave p-values less than 0.05. So here we would say that for the overall analysis we had about 87% power. The other two values represent the power based on the specific pairwise comparisons that we made, and you can see that at this level our power is much lower. Our power to obtain at least moderate evidence for a difference between treatment 1 and the control is only about 39%, and our power to detect a difference between treatment 2 and the control is about 40%. These should basically be the same number, because when we set up our experiment, the effect sizes for treatment 1 versus the control and treatment 2 versus the control were the same: both are 0.5, just one negative and one positive. So we really expect these two values to be the same; they differ only because of the stochasticity associated with these simulations, the randomness. Basically, what we find here is that for our specific comparisons, we have a power of around 40%.
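And the corresponding power calculations for the two specific comparisons, continuing the sketch above:

```r
# Same counting for the two specific comparisons; note the denominator
# is n.sims (all simulations), not just those where the overall p-value
# was small
overall.1vc <- length(which(p1vc.vector < 0.05))
overall.2vc <- length(which(p2vc.vector < 0.05))

overall.1vc / n.sims   # treatment 1 vs control: about 0.39 in the video
overall.2vc / n.sims   # treatment 2 vs control: about 0.40 in the video
```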
Whereas the power for getting evidence at the scale of the whole general linear model is very different, around 87%. This is really important, because it can determine how you design your experiment. If you want to design your experiment so that you obtain strong power at the overall level, for the p-value of your overall general linear model, then the experiment as we've set it out here, with a sample size of about 10 per group, would be great: we get about 87% power. If instead you want to design your experiment to have good, strong power for the pairwise comparisons, then you can see that this experimental design, with only 10 subjects per group, is not adequate, because we have relatively low power of only around 40%. At that level, we would definitely want to do something to design our experiment better, to change the experiment somehow to increase the power, which could include increasing our sample size.

So this distinction between the p-value at the overall level and the p-values at the level of the comparisons is a really important one. Here's what I wanted to say. If you have high power at the level of the overall general linear model but low power for the comparisons, and you design your experiment to have high power at the overall level, then you'll have high power to detect an effect at that overall level. However, you can expect that when you go on to perform your pairwise comparisons, you will be unlikely to obtain strong evidence for differences between groups, and that might be unsatisfying to you. So if you want to make sure that you have high power to detect differences using pairwise comparisons, you need to design your experiment in such a way that that level has good power.

One last point to end on. I ran a power analysis based on this experimental design in G*Power, and it told me that for this type of experiment, with these mean values, this standard deviation, and an n of 10 in each of our groups, the experiment would have 87% power. What does that mean? It means that if you perform a power analysis for an ANOVA, or a one-factor general linear model, G*Power will be performing that power analysis at the level of the overall p-value, not at the level of the pairwise comparisons. That's a really important thing to recognize. If you use G*Power or some other software, it's very likely that you will be designing your experiment to have strong power at the level of the overall model, and that power analysis will tell you nothing about your power to detect effects in terms of pairwise comparisons. That's something important to keep in mind.

That's as much as I wanted to say. It's been a relatively long video; apologies for that. I hope it's been helpful. I'll stop there and say: thank you very much.