<img alt="" src="https://secure.perk0mean.com/171547.png" style="display:none;">

[Video] Bayesian Methodology - How to Analyse Multiple Endpoints

In this Statistical Knowledge Share Video, our Principal Statistician, Sonia, presents an example of Bayesian methodology using simulated data and examines how to analyze multiple endpoints in clinical trials.


 
 
Bayesian Analysis Example - Multiple Endpoints from Quanticate

 

Video Transcript

Multiple Endpoints

Sonia: "Hi everyone. I hope you can hear me okay".

Sonia: "I'm going to talk to you today about Bayesian methodology. Some of you may recall last year I did a presentation on some Bayesian analysis that was used for a respiratory study. This is the follow up on that".

Sonia: "I'm going to do a recap of the methodology that we talked about last year and then talk a bit about how to analyze two end points together, which we had to do for this particular study".

Study Background

Sonia: "So in the example study, we started off looking at one end point, which was the predicted FEV1 (forced expiratory volume). We wanted that to be high, because that will indicate a better lung function. There were 2 parallel groups, for active and placebo, and they had repeated doses for dose week 1, 4, 8, 12, and baseline was a covariant in the study. There were 2 centres. The primary end point was change from baseline in predicted FEV1 at week 12".

Sonia: "One of the things that the client was interested in was the probability of there being a treatment effect of more than 10%".                         

ANCOVA Model Approach

Sonia: "To start off we used a simple ANCOVA (analysis of covariance)  model to look at change from baseline at week 12. So we've got a proc mixed with treatment and center as categorical variables, change from baseline, looking at treatment, adjusting for center, and this is the baseline for covariant".

Sonia: "And we wanted to get the LS (least squares) means out. You can see the result. So the active was way higher than placebo, with a difference of 9.88. And if you remember that probability we were interested in, which is calculated here. So the probability of this treatment difference of being more than 10 was just less than half".

Simple Bayesian Model

Sonia: "Then we did the same thing, but using proc MCMC. So I will just go through this code. I know a lot of people haven't used MCMC very much. It's quite an unstructured proc. There's no structure. You kind of put in the model yourself. It's quite easy to get wrong, because it will quite often run something, which is just nonsense. So you have to look at the results quite carefully".

Sonia: "The first bit of the code has got the third data set that we're using, selecting just the week 12 visit. This is the number of iterations that we use, and the data set we want the results to come out of. Then we set up the model parameters. So B0 is the intercept, and B1 is the slope or the treatment coefficient and Sigma is the variance".

Sonia: "Then define priors for these parameters. So where I've put C:, that's going to cover any of the parameters that start with a B, just like any intercept. This is a non-informative prior, so it's a normal distribution with a mean of zero and a large variance".

Sonia: "Sigma has a different shape distribution, so Igamma, again, this is non-informative".

Sonia: "Then we define the models. This is your Y=MX+C. So we've got the intercept and the slope multiplied by the treatment variable, where treatment is a one and a zero, or one and a two".

Sonia: "So the change from baseline is going to be normally distributed and this is the mean. And that's the basic model".

Add Adjustment for Baseline

Sonia: "If we want to add any adjustments or baseline, we need to add another parameter. This is a baseline parameter being added here. We don't need to add in the prior, because it's already covered by this B: and then in the model we've got a baseline parameter multiplied by baseline variables".

Add Adjustment for Center (2 Centers)

Sonia: "Now we're adding in center. So for center, I've got like a dummy variable, so one and zero depending on which of the two centers its is. I've got the parameter center coefficients, the prior for the center, and then adding it in here. So this will switch on and off depending on whether it's center 1 or center 2. If you have more centers, you need to add in an upper class variable for each of these centers, so it can get quite messy, which is why we've only got two in this example".

Get Some Output

Sonia: "That doesn't give you any output. So you need to add in what you want to see out. So if you're using 9.4, you need to add statistics equal to all, otherwise it won't give you anything".

Sonia: "Then you need to specify which parameters you want to monitor. This is going to give you results and diagnostic. So I put all of them in, but if you have a model that's got lots and lots of parameters, then this output can get really very large and it makes things very slow and clunky, in which case you want to think about which parameters you really want to monitor".

Sonia: "The middle bit of the code hasn't changed. Then this bit here is really useful. After you've specified your model, you can specify anything you want. So here, it's calculating equivalent to your LS means, so active will be the intercept multiplied by the treatment coefficient times the value for active, and then center coefficient multiplied by half, so we are assuming half in each center".

Sonia: "I've not got baseline in here, because baseline is being centered. So the mean baseline is zero, but otherwise you would need to add that in as well. Same for placebos, and the treatment difference in the centers and the intercept cancel out. The probability that that difference is more than 10, but you can add as many things in here as you want to".

Sonia: "Whatever you define here, you need to add up in this monitor, otherwise it just won't show you anything for it".

Output from SAS

Sonia: "I've got here the results of the ANCOVA that I showed before. This is the same equivalent results from the Bayesian. So as you can see, they're not identical. But they're quite close".

Two Endpoints

Sonia: "Now I'm moving on to new material, so looking at two end points together. We've got the predicted FEV, same as before, and then CRP ( C-reactive protein), in this case a lower result is better, although it can get too low".

Sonia: "If you have co-primary end points, with a traditional frequentist approach it can make things quite tricky; if both endpoints are equally important, you need to think about splitting Alpha between the two endpoints and basically running two separate models. It's quite frustrating if one of them is significant and the other one isn't".

Sonia: "With the Bayesian approach, you can put both of those end points into the same model and analyze them together".

Two Endpoints Decision Criteria 

Sonia: "Then we have decision criteria based on the two end points, which I'll come to later on, which were specified in the analysis plan. At the end of the study, decisions were made based on those criteria".

Sonia: "These are the criteria. The first step was to look as before at the FEV being less than 10. and if the probability of that was pretty high, then that was unsuccessful, because we want to remember the difference being more than ten".

Sonia: "Now if the probability of that FEV was more than 10, and the probability of the CRP difference is less than minus one, if the probability of that was more than 0.25, then that was a really good clear success. So both of the end points showing us that they are doing what we want them to do. That was a robust success".

Sonia: "If we had a pretty high probability, so 0.35 or above, of there being a difference in FEV of more than 10, but not looking at the CRPs, then it was a success".

Two Endpoints Results

Sonia: "The MCMC allows this analysis to be done together, and then you can get quite a funky plot coming out, which I think looks like a fried egg. So here on this scale, the Y axis, we've got the FEV. So if you imagine it in 3D, you've got a normal distribution along here. So this is the peak, and then you've got the tails either side. Then the same, in fact here. If you can picture it, it's like the sideways view of this, if it was raised up".

Sonia: "Then along here you've got the same thing with CRP. So the most likely outcome you're in this quadrant here. Ideally we want it to be here, because that means that the FEV is more than 10, and the CRP is less than minus 0.1. We go through each a step at a time; so the probability of seeing FEV being less than 10, so that's in this section here, is less than 0.75, because it was actually 0.1712. So therefore, the first step is not potentially unsuccessful".

Sonia: "The second step is to say what is the probability of it being more than 10, and also the CRP being less than minus 0.01. So that's in this section here. And the probability of that is 0.2962, so more than 0.25. So therefore in this example of simulated data, the result is a robust success. All good".

Sonia: "But I think although the  analysis is quite complicated, if you imagine you're trying to describe the results to a study team clinician, this is quite a nice representation of the results. Takes a bit of time to get your head around, but seeing the two end points together I think is quite good and helpful".

Two Endpoints Conclusions

Sonia: "So the results meet the criteria for a robust success and the joint modelling approach and resulting Bayesian posterior probabilities makes it easier to interpret results".

 

Bayesian Methodology 

Sonia: "So the methodology... I won't get too heavily into the stats, but we're assuming that the two endpoints follow a multi-variant normal distribution. So this is a bit like the diagram of the posteriors really".

Sonia: "The way that we modelled it, each treatment arm had a separate variance covariance matrix. No intercept terms were added. The baseline was added as a continuous covariate and it was centered. As I mentioned before, the mean was zero which just . made it a little bit easier to code"

Priors

Sonia: "Non informative priors were utilized for each of the model parameters. Placebo, active, and baseline parameters had a multivariate normal prior distribution, so mean is zero and large variants. So quite similar to the single endpoint".

Sonia: "This is where it gets a bit ugly, but you go through the code a little bit of how to set up the multi-variate normal priors, because it's not completely straightforward".

Sonia: "So this could be like the mean, and it had two elements, so one for each of the end points. Then we set up a zero matrix, which is just going to be a temporary matrix, which has zero for the two elements".

Sonia: "Sigma is going to take the form of the two by two matrix. So this is the variance, covariance matrix, because it makes it easier to code, we've got a temporary one as well".

Sonia: "Fillmatrix. What this does is fills all the matrix of the temporary matrix with a value that you're specifying, so we can get a 2 by 2 matrix, with all four cells having this, which is going to be the variance, a really big number".

Sonia: "Then if you remember from your college days, the identity matrix , we are calling that sigma, so that gives you the diagonals of ones and zeros".

Sonia: "What this does is it multiplies the temporary matrix, so the ones that got this value for each of the cells with your identity matrix. And it's going to replace sigma zero with the result. So it gives you this basically. This is going to be your prior for the placebo. So the mean of zero, and that's your sigma. It's quite a lot of code to produce something quite simple".

Sonia: "The variance covariance matrices for placebo and active, will use this inverse Wishart distribution for the prior. So this again is noninformative but it is a little bit more tricky to find a noninformative for this distribution, but we need to use this inverse Wishart, because it makes the maths easier, it makes the posterior probabilities come out as normals, which is what we need".

Sonia: "There is a dependence between the variance and the correlation. So if the correlation is exaggerated, then the variance is going to be higher. So you need to watch out for this".

Plot of Data and Posterior

Sonia: "In this case, this is our posterior, and that is our data. There is quite a good match between the two of them, they line, which indicates that their prior is noninformative. But in some cases, if the variance is quite large, you might find that you need you need to look at the prior and change it, because it is not truly non‑informative".

SAS Code

Sonia: "This is the joint code. I'm not going to go through every line of it, because it is quite long. I mentioned in the slide that we were fitting separate variance/ covariance matrix for the two different treatments. To do that, we had to split up the data sets so that we had a different variable for the placebo and the active. Basically, if the treatment is this then we'll create two different variables, so one for active and one for placebo rather than having them in the same variable, but just with a treatment variable to indicate which was which".

Sonia: "The other thing is, I am only modelling week 12, but if you do repeated measures, you need a wide structure data set, so with all the time point going along, whereas for something like proc mix you use a vertical structure".

Sonia: "We've got MCMC code here. As you can see, we've got all of the parameters we want to monitor. This is setting up all the different arrays. So here are the outcomes, so the data that we are modelling".

Sonia: "So, data 1 is your placebo data for FEV and CRP and data 2 is your active for FEV and CRP. We have got two baselines, so again, because we've got two end points we have 2 baselines, FEV and CRP. This is just a linear predictor to the baseline. I will show you where that comes in later".

Sonia: "We are going to calculate the treatment differences, so we need a parameter for treatment difference. So again, there are two. We need an array for this, whereas in the previous code we did not need these array. This is going to give us the probability for the different quadrants,  remember from that plot, we had probability for each quadrant".

Sonia: "This is going to be the LS mean for treatment and active and baseline covariate, sigma for placebo inactive".

Sonia: "This part here is setting up the priors. This is the code that I showed you before with the zero matrix and the identity matrix. This is setting up the prior for the sigmas".

Sonia: "We have got priors for sigma. We have got priors for the means. The parameters here you can put starting values, and I have found that sometimes these are quite important. If you leave it blank it will set the starting value of zero. If you put in a value, it will use the value that you give. If you give it a silly value, you can get some really crazy answers. It shouldn't really make a difference, but I found that it did made a difference. That's something to watch if you get numbers that don't seem very sensible".

Sonia: "We've got priors for the treatments, priors for the sigmas, and baseline. Then this is the model statement. We've got it in a DO Loop, because of the two end points. It's got baseline and then the treatment differences, the treatment for placebo and active. I have not put centre in here, because I was trying to keep it simple".

Sonia: "Then this is similar to what we had before. So placebo is going to be normally distributed around this, and then the same for active".

Sonia: "This is the data that is coming out, and then here are all the calculations that we want to see. So I have got treatment difference, and then here those probabilities. So the probability of both things being successful, both failing".

Sonia: "Underneath that we have just got a proc mix, just a sense check. That is really useful, because as I said before, you can have MCMC run, and it all looks like it's working but because you set up the model statement wrong you get crazy answers. So it's really good to have something just as a double check that you've got things roughly as you were expecting".

Diagnostics and Model Fit

Sonia: "It is really important to look at the diagnostics, For the study that we used this methodology for we got some really crazy diagnostics out. So it is really important to check them. I'll show you some examples in a few minutes".

Sonia: "It is also quite useful to plot the posterior distribution for the key parameters, so that you have got an idea of whether the prior is informative or not. And we reported the posterior median, standard deviation, and the credible intervals (which are like confidence intervals)".

Convergence Diagnostics

Sonia: "One statistic that came out as a diagnostic, which is really useful, is the ratio of the MCMC to the standard deviation. This is basically showing you how much of the error is due to variability in the data, and how much is to do with the model simulation error. It should be ideally less than 0.01, although if that is really difficult to achieve, then 0.05 is reasonable".

Sonia: "To make it smaller, you increase your number of iterations and increase the burn in. So the burn in is how many iterations you want to discard at the start".

Sonia: "This here, this Geweke diagnostic compares the start of the chain with the end. So if you've got a large z-score indicates poor convergence, because it means that the start of the chain in the end of the chain are quite different".

Sonia: "Here's an example of really disastrous diagnostic. We've got a quite a small number of iterations here and no burn in. This as you can see, is bouncing along all over the place. and it looks a bit like a mess. The MCMC to SD ratio here is very high. Either you have not got enough runs or there is something really wrong in your model".

Sonia: "You can see that increasing the number of iterations tenfold is major and look a lot more respectable. It is not perfect, but we had something that looked like that and we thought it was passable. It depends on your data, how many data points you've got and how nice is looks".

Sonia: "Here it is better still. They've got 50,000. This has come right down to 0.02. The burn in on this one is not really very clear. Sometimes you get something really different happening at the start of the chain before it settles down into this pattern, in which case you would want to increase your burn in to get rid of those where it's sort of doing something funky at the start".

Sonia: "Then here, it looks really quite a good “hairy caterpillar”. Got very small ratio here and its autocorrelation drops down straightaway, this is a very healthy looking diagnostic.Having said that, you get this, it still doesn't mean your estimates make sense. You need to check those events and proc mix as well".

Outputs from Separate ANCOVA Models

Sonia: "Now, this is the outcome I did for the ANCOVA. You can see a treatment difference of 11 and then here, minus 0.06. We didn't get a brilliant match actually from the MCMC. It's quite different result, and we didn't have time to look at why that was but because we used the simulated data, I didn't worry about that".

Sonia: "One thing you might want to look at is what would happen if you change your priors. Would it change the outcome of the study? Beause if so, there something may be wrong with the priors".

Sonia: "We didn't really look at it in too much detail, because for the study, basically the results were disaster and it showed the study didn't work, so we didn't spend a lot of time. And the proc mix showed the same, so we didn't spend a lot of time worrying about the priors. That is something to be careful of if you are going to use the results".

Conclusions

Sonia: "I found this methodology really complicated to use, very time consuming, it's also really difficult to QC, because you don't get exact match, the QC and the production side- you are not going to get exactly the same numbers. So you have to go through manually and check whether you think the numbers are within an acceptable range".

Sonia: "But the plus side of it is you get this really nice way of interpreting the results, which is easier for a clinical team to understand and easier for them to make decisions. You need to be careful about choosing priors. We had to tweak the model quite a lot to get it to work, if you writing a RAP you need to leave a little bit of flexibility in the RAP to say different elements in the model are going be investigated so that you are not tied down too much".

Sonia: "Very, very time consuming. Some of these models took four or five hours to run. It's quite an undertaking from that point of view".

Sonia: "That was all. Does anyone have any questions?"


At Quanticate our Statistical Consultants are experienced in clinical study designs and have delivered multiple trial analyses using Bayesian methods. For more information, Submit a Request for Information and a member of our team will be in touch with you shortly.
