Okay, so the question is how I got into research synthesis.

And there are two things that brought me into this field.

The first is that I used to work in a hospital,

and I was involved in clinical trials.

And I realized, at some point,

that most of what we were doing was silly in the sense that basically,

we were trying to see whether or not the results were statistically significant.

And we would make decisions about drugs and about

treatments on the basis of whether or not the P-value was significant.

And in fact, that really doesn't tell us very much at all.

If the P-value was statistically significant,

then the only thing that we really know is that

the treatment has some effect greater than zero.

But we don't really know whether that effect is of

any substantive or clinical importance.

In meta-analysis, people were focusing on

the size of the effect, which is really what we care about.

When we ask if something is effective,

what we really have in mind is whether or not it has a clinically important effect.

And meta-analysis, by focusing on the size of the effect,

actually addresses the thing that we care about.

And that we thought we were addressing with P-values but in fact, we weren't.
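The point above can be made concrete with a small sketch (hypothetical numbers, not from any real trial): a very precise study can yield a statistically significant P-value for an effect that is far too small to matter clinically.

```python
import math

def two_sided_p(effect, se):
    """Two-sided p-value for a normal test statistic z = effect / se."""
    z = abs(effect / se)
    # Standard normal CDF via the error function (standard library only).
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Hypothetical trial: a tiny effect measured very precisely
# (e.g. a huge sample size drives the standard error down).
effect, se = 0.01, 0.004
p = two_sided_p(effect, se)
print(f"effect = {effect}, p = {p:.4f}")  # significant, yet trivially small
```

The P-value clears the conventional 0.05 threshold, but the effect size itself is the quantity that tells you whether the treatment matters.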

I'm very concerned about the fact that people misunderstand statistics for heterogeneity,

because when people ask whether or not the effects are heterogeneous,

what they really have in mind is how much does the effect size vary from study to study.

Is it the case that the treatment has pretty much the same effect in all populations,

or does the effect vary moderately,

or is it the case that the size of

the treatment effect varies widely, where it has only a trivial impact in some cases,

and a moderate impact in other cases,

and a major impact in other cases?

People seem to think that that's addressed by the test of significance for heterogeneity,

or that it's addressed by I-squared.

And in fact, it is not: I-squared

does not tell you how much the effect size varies.

That's a very common mistake,

but it's a mistake nevertheless.
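A small sketch makes the mistake visible (hypothetical effect sizes and variances, chosen for illustration): two meta-analyses can have exactly the same I-squared while the actual variation in true effects, estimated here by the DerSimonian-Laird tau-squared, differs a hundredfold. I-squared is a proportion of observed variance, not an absolute amount of variation.

```python
def dl_meta(effects, variances):
    """Cochran's Q, I-squared, and the DerSimonian-Laird tau-squared."""
    w = [1 / v for v in variances]
    mean = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - mean) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q)           # proportion of variance beyond chance
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)          # absolute between-study variance
    return i2, tau2

# Hypothetical meta-analysis A: small effects, small within-study variances.
i2_a, tau2_a = dl_meta([0.1, 0.2, 0.3, 0.4, 0.5], [0.01] * 5)
# Hypothetical meta-analysis B: everything scaled up 10x (variances 100x).
i2_b, tau2_b = dl_meta([1.0, 2.0, 3.0, 4.0, 5.0], [1.0] * 5)

print(i2_a, i2_b)      # identical I-squared...
print(tau2_a, tau2_b)  # ...but the between-study variance differs 100-fold
```

Knowing I-squared alone, you cannot say whether the true effects range from trivial to substantial or are all nearly identical.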

And there are statistics that will tell you that.

For example, one of them is the prediction interval, with which you can

actually say that in some populations,

the effect is trivial, and in some,

it's moderate, and in some, it's substantial.
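As a minimal sketch of that statistic, assuming SciPy is available and using hypothetical summary numbers (mean effect, its standard error, tau-squared, and study count are all made up): the prediction interval combines the between-study variance with the uncertainty in the mean, using a t-distribution with k - 2 degrees of freedom.

```python
import math
from scipy import stats

def prediction_interval(mu, se_mu, tau2, k, level=0.95):
    """Prediction interval for the true effect in a new population,
    using a t critical value with k - 2 degrees of freedom."""
    t_crit = stats.t.ppf(1 - (1 - level) / 2, df=k - 2)
    margin = t_crit * math.sqrt(tau2 + se_mu ** 2)
    return mu - margin, mu + margin

# Hypothetical random-effects summary: mean effect 0.5 (SE 0.1),
# tau^2 = 0.04, across k = 10 studies.
lo, hi = prediction_interval(mu=0.5, se_mu=0.1, tau2=0.04, k=10)
print(f"95% PI: ({lo:.3f}, {hi:.3f})")
```

Here the mean effect looks moderate, yet the interval spans effects from roughly zero to substantial, which is exactly the trivial/moderate/substantial distinction described above, and something the confidence interval alone would hide.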

But we have to make sure that we use the right statistics and interpret them correctly.