A conceptual and interpretive public health approach to some of the most commonly used methods from basic statistics.

From the course by Johns Hopkins University

Statistical Reasoning for Public Health 1: Estimation, Inference, & Interpretation

180 ratings


From the lesson

Module 3B: Sampling Variability and Confidence Intervals

The concepts from the previous module (3A) will be extended to create 95% CIs for group comparison measures (mean differences, risk differences, etc.) based on the results from a single study.

- John McGready, PhD, MS, Associate Scientist, Biostatistics

Bloomberg School of Public Health

So, in this set of lectures we'll look at estimating confidence intervals for binary comparisons. I'm calling this part one because we're going to focus on the difference in proportions, also called the attributable risk or risk difference. In the next section we'll focus on the ratio-based comparisons, the relative risk and the odds ratio.

So upon completion of this lecture section you will be able to estimate and interpret a 95% confidence interval for a difference in proportions between two independent populations. Unlike the continuous situation, where we looked at both paired and unpaired studies, we're only going to focus on unpaired studies. There is a paired type of study design for binary outcomes, but it's rarely used, so we will not consider it in this course.

So let's look at our first example. You may recall our response-to-therapy results in a random sample of 1,000 HIV-positive patients from a citywide clinical population. The overall response in this group was 206 responders out of the 1,000, for an overall response percentage of about 21%.

As you may recall, when we broke this group out by CD4 count status at the start of therapy and looked at the proportion who responded, the proportion who responded in the sample of 503 persons whose CD4 count was less than 250 at the start of therapy was 25%, versus 16% who responded in the group whose CD4 counts were greater than or equal to 250 at the start of therapy.

So again, one of the summary measures we computed was the difference in proportions, also called the risk difference or the attributable risk. If we compare in the direction of the lower CD4 count group to the higher CD4 count group, taking the difference in that order, the 25% minus the 16% gives an absolute difference in proportions of 9%. In other words, a 9% greater response is estimated from these data.

That is, a greater response to therapy in the group that had CD4 counts of less than 250 at the start of therapy as compared to the group that had CD4 counts of greater than or equal to 250 at the start of therapy.

So how are we going to get a confidence interval for this? Well, we need to figure out how to estimate the standard error, and I'm just going to give you the answer to that: statisticians have figured this out. It will look very much analogous to what we did to estimate the standard error for a mean difference from two samples, a difference in sample means. So I am just going to write out the formula, and then we'll talk about it.

So, the estimated standard error of the difference in sample proportions, based on two samples of data, is given by this formula: the square root of the contribution from the first sample, p1-hat times (1 minus p1-hat), all divided by n1, plus the portion contributed by the second sample, p2-hat times (1 minus p2-hat), divided by the size of the second sample, n2. That is,

SE(p1-hat − p2-hat) = sqrt( p1-hat(1 − p1-hat)/n1 + p2-hat(1 − p2-hat)/n2 )

Indulge me for a minute.

Let me rewrite this in a way that's not very efficient, but it will illustrate something we saw with the difference in sample means before as well. I could rewrite the first piece as the square root of p1-hat(1 − p1-hat)/n1, squared. That seems redundant: take the square root and then square it, and you get back the unrooted piece. We could do the same for the second piece: the square root of p2-hat(1 − p2-hat)/n2, squared.

But if you notice, this shows that the first piece is just the standard error of the first sample proportion, p1-hat, squared; and the second piece, the uncertainty that comes from sampling variability in the second sample proportion, is the standard error of p2-hat, squared. So just like we saw with the difference in sample means, the standard error of the difference in sample proportions is an additive function of the uncertainty in each of the two samples:

SE(p1-hat − p2-hat) = sqrt( [SE(p1-hat)]² + [SE(p2-hat)]² )
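This additive structure can be sketched in a few lines of Python (a minimal illustration; the function name is my own):

```python
from math import sqrt

def se_prop_diff(p1_hat, n1, p2_hat, n2):
    """Estimated standard error of the difference in two independent
    sample proportions, p1_hat - p2_hat: the square root of the sum
    of the two squared individual standard errors."""
    se1_sq = p1_hat * (1 - p1_hat) / n1   # [SE(p1-hat)]^2
    se2_sq = p2_hat * (1 - p2_hat) / n2   # [SE(p2-hat)]^2
    return sqrt(se1_sq + se2_sq)
```

For example, `se_prop_diff(0.25, 503, 0.16, 497)` gives roughly 0.025, matching the hand calculation for the HIV therapy example.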

Now let's apply this to our data and actually use the information we have here. We have the sample proportions and the sample sizes, so let's estimate the standard error for the difference in proportions between the two samples, based on the sizes of the samples and the resulting proportions.

So if we were to do that, the standard error estimate for this difference, the proportion who responded in the group with starting CD4 counts less than 250 minus the proportion who responded in the group with starting CD4 counts greater than or equal to 250, we just plug into the formula. There were 503 persons in the group with CD4 counts less than 250 at the start of therapy, so we take that observed proportion of 0.25 times the proportion who didn't respond, 1 minus 0.25 or 0.75, and divide by 503. We do the same thing for the second group: 16% responded, ergo 84% did not; we take that product and divide by the sample size, which is 497. If you do this, and you can verify it if you're interested, it turns out to be about 0.025. So the standard error of the difference in these two sample proportions, based on our results and the sample sizes, is about 2.5%.

So in order to get a 95% confidence interval for the true population-level difference in proportions responding, in the populations from which these samples were taken, we take our observed difference of 0.09, or 9%, and add and subtract two estimated standard errors: 0.09 plus or minus 0.05 gives a confidence interval from 0.04 to 0.14, or 4% to 14%.

So how can we interpret this confidence interval, coupled with this result? Well, we could say we have a 9% estimated greater response to therapy on the absolute scale in the group whose CD4 counts were less than 250 at the start of therapy, as compared to the group that had CD4 counts of greater than or equal to 250. After accounting for the sampling variability, because again these results are based on an imperfect subsample of the larger citywide population (we only have 1,000 observations total from that population), this response could be anywhere between 4% and 14% greater for those with lower CD4 counts than for those with higher CD4 counts in the entire population.

Let's go look at our maternal-infant HIV transmission study that's been so seminal in this class and in public health. Recall, here are the results from the study placed in 2-by-2 format. Among infants born to HIV-positive mothers given AZT, the proportion who contracted HIV within 18 months after birth was 7%, compared to 22% among infants born to mothers given a placebo. So our risk difference is an absolute difference of negative 15%, indicating an absolute reduction of 15% in the risk of HIV transmission for mothers given AZT compared to those given placebo.

So how are we going to estimate a confidence interval for this? Well, we're going to do it exactly the same way: we're going to take our observed difference and add and subtract two estimated standard errors of that observed difference in proportions. And we're going to estimate its standard error just the same way we did in the last example.

So the estimated standard error of this difference in proportions equals the square root of: the 7% we saw who contracted HIV within 18 months after birth times the 93% who did not (in decimal form, of course), divided by the 180 infants in this group; plus the 22% we saw contracting HIV times the 78% who did not, divided by the 183 infants born to mothers not given AZT. When the dust settles, this is about equal to 0.036, or 3.6%. To get the 95% confidence interval, we take the negative 0.15, the negative 15% we observed, plus or minus 2 times 0.036. When the dust settles, we get something between negative 0.222 and negative 0.078, or negative 22.2% to negative 7.8%.
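The same short sketch, with this study's numbers plugged in (variable names are my own):

```python
from math import sqrt

# HIV transmission within 18 months of birth (values from the lecture)
p1, n1 = 0.07, 180   # infants born to mothers given AZT
p2, n2 = 0.22, 183   # infants born to mothers given placebo

se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)  # about 0.036
diff = p1 - p2                                      # -0.15
ci = (diff - 2 * se, diff + 2 * se)                 # roughly (-0.222, -0.078)
```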

It's just a coincidence that the endpoints of this confidence interval are very similar in magnitude to the two sample proportions for the individual groups.

So how are we going to interpret this, adding in the confidence interval, the uncertainty? We could say the proportion of infants who tested positive for HIV within 18 months of birth was 7% in the AZT group and 22% in the placebo group, an absolute decrease of 15% associated with AZT; and we could give the confidence interval as just another way to describe this. The study results estimate that the absolute decrease in the proportion of HIV-positive infants born to HIV-positive mothers associated with AZT could be as low as 8% and as high as 22%. So this puts uncertainty bounds on the result.

Another way to think about this: we could translate it into the number of cases of transmission potentially prevented by giving mothers AZT, scaled to a fixed number of mothers. It could be 1,000, 10,000, any number you want, but let's use 1,000. Suppose we treated 1,000 mothers with AZT; we'd expect to see 150 fewer transmissions, that's the 15% reduction, than if we were not giving them AZT. But after accounting for the uncertainty in our estimated efficacy, this could be anywhere from 80 fewer transmissions to 220 fewer transmissions. That's putting the confidence interval into a substantive context, by relating it to the potential benefit to a fixed number of women with HIV who are pregnant.
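Scaling the point estimate and the interval endpoints to a fixed group size is just multiplication; here is a sketch using the lecture's rounded values:

```python
n_mothers = 1_000

# Absolute reduction in transmission risk: point estimate and
# the lecture's rounded 95% CI endpoints
point, lower, upper = 0.15, 0.08, 0.22

prevented = point * n_mothers                             # 150 fewer transmissions
prevented_range = (lower * n_mothers, upper * n_mothers)  # 80 to 220 fewer
```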

Let's look at this study again: aspirin and cardiovascular disease, a large randomized trial where healthy women 45 years of age or older were randomized to receive 100 milligrams of aspirin on alternate days, or a placebo, and were followed for ten years for a first major cardiovascular event. What they found in the end was that the proportion of women who experienced a cardiovascular event within ten years among those given aspirin was on the order of 2.4%, compared with 2.6% among women given the placebo. So what we saw here is a risk difference of, in decimal form, 0.024 minus 0.026, or negative 0.002, negative 0.2% in percentage form: a 0.2% absolute reduction in the ten-year risk of cardiovascular disease for women on low-dose aspirin therapy compared to women not on low-dose aspirin therapy.

And as we've discussed, in a population of one hundred thousand women we would expect to see roughly 200 fewer cases of cardiovascular disease developing within ten years were the women given low-dose aspirin therapy. So, let's bring in the uncertainty.

I'm just going to cut to the chase, now that we've looked at computing some of this by hand. If you actually compute a confidence interval for this, it ranges from negative 0.005 to positive 0.0008. So notice this confidence interval includes 0; this result was not conclusive. It's not what we call statistically significant, and we'll define statistical significance more formally in Lecture 9. But in the end, they found no association between low-dose aspirin therapy and cardiovascular outcomes.

So, one way to incorporate the uncertainty and the confidence interval into a substantive interpretation would be to say something like this: in a group of 100,000 women, we would expect to see 200 fewer cases of cardiovascular disease developing within ten years if the women were given low-dose aspirin therapy. But after accounting for the uncertainty in our estimates, the association could range from 500 fewer cases to 80 more cases were the women given low-dose aspirin therapy relative to were they not. As such, after accounting for sampling variability, there is no demonstrated population-level association between low-dose aspirin therapy and cardiovascular disease: once we account for the uncertainty, we include no association, or zero, as a possibility.

Finally, let's look at the hormone replacement therapy and risk of coronary heart disease study, the trial that was famously stopped early in the early 2000s. Here are the results. In this large study the researchers were looking at the proportion of women developing coronary heart disease in the group randomized to receive hormone replacement therapy versus the group that got the placebo. They found that in the group randomized to receive hormone replacement therapy, the proportion developing coronary heart disease in the follow-up period was 0.019, or 1.9%, compared to a proportion of 0.015, or 1.5%, in the placebo group. The resulting risk difference was 0.019 minus 0.015, or 0.004; so the risk difference was 0.004, or 0.4%.

And if you actually computed the 95% confidence interval, you would get 0.00016 to 0.008 (and just for good practice, include the starting zero too).

So let's just interpret this, jumping right into the context of excess coronary heart disease events expected were women given hormone replacement therapy versus not; this is one of the main reasons the trial was stopped. To estimate the magnitude, if we looked at 100,000 women (we could use any number here), we'd expect roughly 400 excess cases of coronary heart disease were the women given hormone replacement therapy versus not. But accounting for the uncertainty in our estimate, this excess could be anywhere from 16 additional cases, that's just taking the 0.00016 absolute difference in proportions and prorating it to a group of 100,000, up to 800 excess cases. So there's a lot of uncertainty here, even though the trial was large. But part of the reasoning for stopping the trial was that even an excess of 16 cases per 100,000 women could have a substantial effect on the entire population of women who would potentially enroll in hormone replacement therapy.
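As before, prorating the risk difference and its interval to a fixed group size is a simple multiplication; a sketch with the lecture's values:

```python
n_women = 100_000

# Excess risk of coronary heart disease with hormone replacement therapy:
# point estimate and 95% CI from the lecture
rd = 0.004
ci = (0.00016, 0.008)

excess = rd * n_women                              # about 400 excess cases
excess_range = (ci[0] * n_women, ci[1] * n_women)  # 16 to 800 excess cases

# Unlike the aspirin result, this interval excludes zero
conclusive = ci[0] > 0 or ci[1] < 0
```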

So, in summary: computing confidence intervals for risk differences comparing two unpaired populations is very similar to computing confidence intervals for mean differences comparing two unpaired populations. The resulting confidence interval gives a range of possible values for the risk difference, the attributable risk, the difference in proportions, between the two populations from which the two samples being compared are taken.

And we saw, with randomized studies especially, that the resulting confidence interval can estimate a range for the absolute impact of an intervention or treatment on a group of known size; we showed that with the aspirin study, the HIV infant transmission study, and the hormone replacement therapy study. In the next section, we'll deal with the same comparisons on the ratio scale, accounting for uncertainty in relative risks and odds ratios.
