Hi, this is Kay Dickersin. We're beginning Section D, Bias in the Analysis. Bias in the analysis is the third of the types of meta-bias that we're confronted with when we do a systematic review and meta-analysis. You may see in the literature, and hear about, a concern with the type of statistical model that's used for the meta-analysis. The two typical methods are random effects and fixed effects models. Now, can this choice of model influence your overall estimate? Yes, it can in certain instances, and I'm going to show you a result where it does. But I think the thinking now is: let's not get too hung up on random versus fixed effects. Most people start out their careers in systematic reviews thinking that the random effects model makes much more sense, and over time I've heard a lot of people say that they changed their preference for which model they'll use. So I think the bottom line is that there's no right or wrong here necessarily, although you might want to consider different models for different situations. Try to understand what the models are, so that you can engage in conversations about them, but we aren't going to come down hard and fast saying you should use this model or that one. So, on the question of whether the choice of model can affect your estimate: there was a study, as I mentioned, by Jose Villar, published in Statistics in Medicine. They looked at 84 Cochrane meta-analyses from the Cochrane Pregnancy and Childbirth Group, and they compared the meta-analyses that used the fixed and random effects models, according to whether they had a significant amount of statistical heterogeneity or not. Let's look at a slide that goes through what they found. It's a little bit confusing, so let's take it slowly. What Jose Villar and his colleagues showed is that sometimes there is a difference between using a random effects and a fixed effects model, and it can influence an overall estimate.
While I and others, nowadays 13 years later, tend to minimize the importance of this choice, that is, which model you use, they showed something that gives us pause and makes us think about which model we want to use. It's worth thinking about, that's for sure. So here what we see are two two-by-two tables. In the top one, the meta-analyses were those that had significant statistical heterogeneity among the studies that were included, and there were 21 such meta-analyses. In table number two, there were 63 meta-analyses that did not have statistical heterogeneity. That is, we're more confident that those studies tend to have results that are similar. Now let's just look for a minute at the 21 meta-analyses with statistical heterogeneity. What they did in these 21 meta-analyses is run the analysis two different ways: first using the fixed effects model, and second using the random effects model. If you look along the top, you'll see whether the fixed effects model gave statistically significant results, that is, whether the results excluded unity. So "yes" means statistically significant. And with the random effects model, down the side, excluding unity likewise corresponds to statistical significance; they're saying it very carefully. When the results are statistically significant regardless of whether the random effects or fixed effects model is used, the two models agree, they found pretty much the same thing in terms of significance. Likewise, when neither the fixed effects model nor the random effects model excludes unity, that is, neither result is statistically significant, the models also agree. So with the 21 meta-analyses with statistical heterogeneity, only 5 times out of 21 is there discordance between the fixed effects and the random effects model. That's 5 out of 21, or about 25% of the time. When we look at the 63 meta-analyses without statistical heterogeneity,
we see that 4 out of 63 times the cells are discordant, about six percent of the time, a much lower proportion than when there is statistical heterogeneity. And so the take-home message here is that when you have statistical heterogeneity, you may want to use the random effects model. It will give you a more conservative answer: it may not exclude unity in cases where a fixed effects model would have given you a statistically significant result. When you don't have statistical heterogeneity in your meta-analysis, then it probably matters very little which model you choose. So a random effects model doesn't fix the heterogeneity problem, but because it makes those confidence intervals wider and excludes unity less often, it may be a more conservative approach when you have statistical heterogeneity in your meta-analysis. So it turns out that meta-bias is not always analyzed in the systematic reviews and meta-analyses that are in the literature, and sometimes this is related to bias in the analysis, and sometimes it's related to selection bias and information bias. This is an example of an examination of systematic reviews in urology. This is an example of an assessment of systematic reviews in pediatric oncology. And this is a blow-up of a systematic review in pediatric oncology. You'll often see the term transparency, meaning: can we tell what was done in a systematic review? As you will find out, or have found out, in this course, we often can't tell what was done in the individual studies that are included in our systematic reviews. Well, it turns out we often can't tell what was done in the systematic reviews and meta-analyses themselves either. So, this is a figure that comes from that urologic systematic review quality paper that I showed a few slides ago.
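To make the fixed versus random effects distinction concrete, here is a minimal sketch in Python using hypothetical data. It pools per-study effects two ways: the inverse-variance fixed effects estimate, and a random effects estimate using the DerSimonian-Laird estimator of between-study variance, one common approach (the lecture does not name the specific estimator the Villar reviews used). It also computes Cochran's Q, the usual test statistic for statistical heterogeneity. The study effects and variances below are made up for illustration.

```python
import math

def meta_analyze(effects, variances):
    """Pool per-study effect estimates (e.g. log odds ratios) two ways.

    Returns (fixed_est, fixed_var, random_est, random_var, Q, tau2).
    """
    w = [1.0 / v for v in variances]              # inverse-variance weights
    sw = sum(w)
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sw

    # Cochran's Q: a large Q relative to k - 1 degrees of freedom
    # signals statistical heterogeneity among the studies
    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))
    df = len(effects) - 1

    # DerSimonian-Laird estimate of the between-study variance tau^2
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)

    # Random-effects weights add tau^2 to each study's variance,
    # which flattens the weights and widens the pooled interval
    ws = [1.0 / (v + tau2) for v in variances]
    random_est = sum(wi * ei for wi, ei in zip(ws, effects)) / sum(ws)
    return fixed, 1.0 / sw, random_est, 1.0 / sum(ws), q, tau2

def ci95(est, var):
    """95% confidence interval for a pooled estimate."""
    half = 1.96 * math.sqrt(var)
    return est - half, est + half

# Hypothetical log odds ratios and variances from five heterogeneous trials
effects = [0.10, 0.45, -0.20, 0.60, 0.05]
variances = [0.04, 0.06, 0.05, 0.08, 0.03]
fe, fe_var, re, re_var, q, tau2 = meta_analyze(effects, variances)
# When tau2 > 0, re_var > fe_var: the random effects interval is wider,
# so it excludes unity less often, the more conservative behavior
# described in the lecture.
```

On the log odds ratio scale, "excluding unity" means the interval excludes zero; exponentiating the bounds returns you to the odds ratio scale. This is only a sketch under assumed data; real meta-analysis software (for example RevMan, or the R metafor package) offers additional estimators and corrections.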
And what you can see is that, looking at 57 systematic reviews in urology, about 40% did duplicate screening and data extraction, close to 50% did a comprehensive literature search, and only about 30% published their eligibility criteria. In terms of analysis methods, about 60% published what was done. Only about 18%, it looks like, assessed whether there was a likelihood of publication bias in the way they searched for relevant studies. And so you can see that the quality of systematic reviews is not only not high, but the chances of meta-bias are very high indeed. Now, much of what I've been showing you is related to clinical trials, which is a problem we have throughout this course: a lot of the studies of how best to do a systematic review are done on clinical trial information. But we do have some information about systematic reviews of prognostic questions. They weren't any better at defining outcomes, at looking at confounders, at describing the analysis, at doing an appropriate analysis, and so forth. And this is just a quick snapshot to reassure you that even though we're looking, for the most part, at studies that examined clinical trials, when people have looked at observational studies addressing other types of questions than intervention effectiveness, we're finding the same thing: basically, the majority of systematic reviews, similar to the majority of individual studies, are not well done, and we need to improve. Part of that is the education you're getting in this class, and part of it is that journal editors and reviewers need to know a little bit more before they review and pass on these articles for publication. So let's talk for a minute about a situation that comes up now and then, and that a number of people are concerned about. I'm less concerned about it in the majority of cases, but I know that it does bother people that sometimes systematic reviews on the same topic don't agree.
They contain a different number of studies, they used different methods of analysis, or their searches were different. What does this mean? How could there be two answers to the same question? The majority of the time, it's the scientific process: a slightly different question is being asked. And if the systematic review is transparently reported, then the reader can go back and see whether they might prefer it done a different way, and either add studies or redo it so that the question they are interested in is addressed, and so forth. But this example on this slide was one that I found quite intriguing, because it shows a number of things. On the left side, what we have is a meta-analysis done by the Cochrane group on the efficacy of H. pylori eradication in non-ulcer dyspepsia. On the right side, we have the same general topic, that is, the eradication of H. pylori in non-ulcer dyspepsia, but this systematic review was done by the AHRQ Evidence-based Practice Center. So, look at these two meta-analyses; they're very different, just looking at them generally. The Cochrane review has many more studies than the EPC review. What's the overlap? It looks to me like, in general, the same studies are there, but the EPC review includes some data that aren't included over on the Cochrane side. So what's going on? If you look at the bottom line, the Cochrane review favors eradication of H. pylori. Well, the EPC review also favors it, but the result is not statistically significant, so in fact the proper interpretation is that it does not favor eradication. There's no evidence favoring H. pylori eradication; it doesn't disfavor it, there's just no evidence favoring it. So what's going on here? Well, it turns out that the closing dates of the searches for trials included in the two systematic reviews are different.
The Cochrane search terminated in May of 2000, while the Annals of Internal Medicine publication, the EPC review, curtailed its search in December of 1999, about six months earlier. That later search resulted in three more trials, reported as abstracts and a paper. Now, the abstracts didn't always report the outcomes just the way the systematic reviewers were looking for them. As a matter of fact, one of the abstracts in the Annals, or EPC, review reported a different outcome, or endpoint, than the one the EPC had designated as the one of interest for the systematic review. When this happened for the Cochrane review, they contacted the author and obtained data for the exact outcome they were interested in. When it happened for the EPC reviewers, they decided that the outcome that was reported was close enough to the one they were looking for, and used those data. And this made a big difference. So, it turns out there were a number of things going on: the searches were different, the studies that were excluded and included were different, and this endpoint handling also made a difference. The Cochrane authors called the authors of the individual study where the endpoint was not reported in the form the Cochrane authors wanted, and they got the actual data, which were unpublished. Bottom line: reviews can differ in their findings in a legitimate way; it can be perfectly legitimate. One isn't right and one isn't wrong. Part of it is differences within the scientific process, and you just have to read them very carefully to see how it matters to the question you are asking as a reader. But there can also be differences in the methods used that matter; in this case, doing the search closer to the time of publication, and finding out what the results for that outcome really were, was important to the findings of the review.
So now I have finished talking about biases in systematic reviews and meta-analyses, or meta-bias, and I'm going to finish up by talking about reporting transparently.