Hi. This video is on sensitivity analysis.

The primary objective is to understand the concept of sensitivity analysis.

In addition, we will look at one particular way that you could go about

carrying out a sensitivity analysis in practice.

So we'll begin the discussion of sensitivity analysis with the idea of hidden bias.

So recall that when we're matching

the main goal is to achieve balance on observed covariates.

These are the covariates that you selected ahead of time, the ones you decided you needed to control for to achieve ignorability; that is, the set of covariates you believe is sufficient to control for confounding.

But these are all observed covariates.

So overt bias would occur if there was imbalance on these observed covariates.

In other words, if we did not fully control for these variables. But we would have some hope of identifying this kind of overt bias.

So after you match,

you can check balance,

and if you saw imbalance we could identify that and try to rectify it.

So overt bias would occur if you had imbalance but carried out your outcome analysis anyway. Still, this is something that you could possibly identify.

However, matching is not guaranteed to result in balance on variables that we did not match on.

So there's going to be some unobserved variables,

variables not captured in your data set,

or maybe variables that you chose to not match on,

and those could be imbalanced.

So this is different than randomized trials,

where randomized trials should achieve balance both on observed and unobserved variables,

because treatment is truly randomized, like flipping a coin, so treatment assignment shouldn't depend on anything.

So, in a randomized trial you should have balance on all the variables.

In a matched analysis we really can only expect to achieve balance on observed variables.

So hidden bias would occur if there is imbalance on

unobserved variables and these unobserved variables are actually confounders.

So these variables are important.

We don't have balance on them because we didn't match on them,

but we can't see that, because we don't have those variables.

So this would be known as hidden bias and this would be

a situation where the ignorability assumption is violated.

You could also think of this as that there's unmeasured confounding in your analysis.

So what is the main idea of sensitivity analysis?

We want to think about this question: if there is hidden bias, how severe would it have to be before our conclusions changed?

So what do we mean by conclusions changing?

Well, for example, suppose you have a statistically significant result under the assumption that there's no hidden bias. How much hidden bias would there have to be before we would no longer have a significant result?

Another example would be a change in the direction of effect.

So maybe we see a positive treatment effect.

Well, how much bias would there have to be before the sign actually changed, so that we would see a different direction of the treatment effect?

So this is a very useful kind of concept because we typically

believe that there is likely to be some degree of unmeasured confounding.

Hopefully a small amount of unmeasured confounding, but it's likely that there's some,

and so we want to know, well,

are the conclusions we're making sensitive to just minor violations of our key assumption, or only to severe violations?

In order to discuss sensitivity analysis in more detail,

it will be useful to introduce some notation.

So let πj be the probability that person j receives the treatment.

Similarly, πk is the probability that person k receives the treatment.

So π here is a probability and then it's indexed by a particular person identifier.

So let's imagine that person j and person k are

perfectly matched on their observed covariates,

in other words, Xj is equal to Xk.

So the set of covariates for person j are exactly the same as person k. So then,

if πj = πk,

then there's no hidden bias.

So what we're saying here is that we have a matched pair,

we've matched person j and k. Their observed covariates agree.

Now if the probability of receiving treatment is actually the same,

then there couldn't be any hidden bias,

there couldn't be any variables that we're not capturing,

that are affecting the treatment decision.

So there are no unobserved variables affecting the treatment assignment probability.

To think further about sensitivity analysis let's consider the following inequality.

Again, πj is the probability that person j receives the treatment and πk is the probability of treatment for person k. And we'll assume that we've matched person j to person k on their observed covariates.

So person j and person k have the same observed covariates.

What we see then here is that in the numerator of this inequality,

we have the odds of treatment for person j.

Recall that an odds is just a probability divided by one minus the probability. So the numerator is just the odds of treatment for person j. In the denominator we have the odds of treatment for person k. And so this ratio is just an odds ratio.

So Γ here is an odds ratio, and if this odds ratio is equal to one (Γ=1), then there's no hidden bias. This is what we said on the previous slide: if πj and πk are equal, there's no hidden bias. If they're equal, the odds ratio would have to be one, so that would imply no hidden bias.
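Written out, the odds ratio just described looks like this. The definition matches what was said above; the two-sided bound below it is the standard way the sensitivity model is stated in Rosenbaum's work, where Γ is taken as a bound on the ratio rather than the ratio itself:

```latex
% Odds ratio of treatment for a matched pair (j, k) with X_j = X_k
\Gamma \;=\; \frac{\pi_j / (1 - \pi_j)}{\pi_k / (1 - \pi_k)}

% Rosenbaum's sensitivity model assumes this ratio is bounded for every pair:
\frac{1}{\Gamma} \;\le\; \frac{\pi_j (1 - \pi_k)}{\pi_k (1 - \pi_j)} \;\le\; \Gamma
```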

However, if this Γ>1,

we would have hidden bias.

So this would mean that person j was more likely to

receive treatment than person k. So even though they have the same observed covariates,

there's something about person j that made them more likely to receive treatment.

So you could think of that being the case

because of hidden variables, unobserved variables.

So Γ then can quantify the degree to which our assumption of no hidden bias is violated.

So if Γ=1 our assumption is fine,

if Γ is very close to one,

then our assumption is barely violated.

If Γ is much larger than one,

then our assumption is very violated.
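As a quick numerical sketch of this quantity (the probabilities below are hypothetical, chosen only for illustration):

```python
def odds(p):
    """Convert a probability to an odds."""
    return p / (1 - p)

def gamma(pi_j, pi_k):
    """Odds ratio of treatment for two subjects in a matched pair."""
    return odds(pi_j) / odds(pi_k)

# Equal treatment probabilities within the pair: no hidden bias.
print(gamma(0.3, 0.3))  # 1.0

# Person j is more likely to be treated despite identical observed
# covariates: some hidden bias (gamma > 1).
print(gamma(0.4, 0.3))
```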

So now we'll discuss how this idea of Γ representing hidden bias can be used in practice.

So, as an example, suppose that we have evidence of a treatment effect.

So, in other words, you carried out the kind of

analysis that we've discussed in previous videos, where you matched, made sure you had good balance, then carried out an outcome analysis, and imagine that you found evidence of a treatment effect.

Well, that evidence of a treatment effect was

under the assumption that there's no hidden bias.

In other words, that this Γ, this odds ratio, is equal to one. So your primary analysis would typically start by assuming that the assumption of no unmeasured confounding, or no hidden bias, is met.

Next, what we can do is then carry out the sensitivity analysis part.

And so in this case what we would do is we would increase Γ,

this odds ratio, until the evidence of a treatment effect went away.

In other words, until it's no longer statistically significant.

How you would actually redo the analysis at different values of Γ is really beyond the scope of this video.

But there is an R package that can do this,

there's a couple of them.

So I mentioned them down here, and there are also a lot of details on this in the Design of Observational Studies book by Paul Rosenbaum.

So what we're shooting for here is just the main concept.

Okay, so what we would do then is gradually increase Γ, and for each Γ that you choose, you reanalyze the data, assuming that that Γ is correct.
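To make the loop concrete, here is a minimal Python sketch of one classical version of this reanalysis, a Rosenbaum-style bound for the sign test on matched-pair outcomes; the data and significance cutoff are hypothetical, and the R packages mentioned on the slide implement much more than this. Under a given Γ, the worst-case probability that the treated unit has the better outcome in a pair is Γ/(1+Γ), which yields an upper bound on the one-sided p-value:

```python
from math import comb

def upper_bound_pvalue(wins, pairs, gamma):
    """Worst-case (upper-bound) one-sided p-value for the sign test on
    matched pairs, allowing hidden bias up to odds ratio `gamma`.
    Under gamma, the chance the treated unit 'wins' a pair is at most
    p_plus = gamma / (1 + gamma), so the bound is a binomial tail."""
    p_plus = gamma / (1.0 + gamma)
    return sum(comb(pairs, t) * p_plus**t * (1 - p_plus)**(pairs - t)
               for t in range(wins, pairs + 1))

def sensitivity_analysis(wins, pairs, gammas, alpha=0.05):
    """Reanalyze the data at each gamma and report when significance is lost."""
    for g in gammas:
        p = upper_bound_pvalue(wins, pairs, g)
        verdict = "significant" if p < alpha else "NOT significant"
        print(f"gamma = {g:4.2f}: worst-case p = {p:.4f} ({verdict})")

# Hypothetical data: the treated unit had the better outcome in 70 of
# 100 matched pairs.
sensitivity_analysis(wins=70, pairs=100, gammas=[1.0, 1.25, 1.5, 1.75, 2.0])
```

At Γ=1 this reproduces the ordinary sign-test p-value; as Γ grows, the worst-case p-value grows, and the Γ at which it crosses your significance level measures how much hidden bias the conclusion can tolerate.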

If you find that you don't have to increase Γ very much before your conclusions change, then we would say that inference is very sensitive to unmeasured confounding.

I'll give an example: let's say you start with Γ=1, where there is no hidden bias, and by the time you've increased it to 1.1 you've changed your conclusions.

So 1.1 is a fairly small odds ratio,

so that, what we would say then is

that inference is very sensitive to unmeasured confounding in that case.

So the probability of treatment for one subject relative to

another doesn't have to differ by very much in

a matched pair before our conclusion is changed.

So that would be very sensitive.

Whereas, if you don't see a difference in conclusions until, let's say,

this odds ratio, this Γ, is five,

then we would say it's not very sensitive to hidden bias.

So an odds ratio of five is quite large.

So, if you're comparing two people in

the same matched pair and you say that

one person's odds of treatment is five times higher than another,

that's much higher, right?

And if hidden bias needed to be that large before we changed our conclusion, we would say that our conclusions are not very sensitive to hidden bias.

So we would be more confident in our conclusions in that case.