Learn fundamental concepts in data analysis and statistical inference, focusing on one and two independent samples.


A course from Johns Hopkins University

Mathematical Biostatistics Boot Camp 2

41 ratings


From this lesson

Techniques

This module is a bit of a hodgepodge of important techniques. It includes methods for discrete matched-pairs data as well as some classical non-parametric methods.

- Brian Caffo, PhD, Professor, Biostatistics

Bloomberg School of Public Health

Hi, my name is Brian Caffo, and this is Mathematical Biostatistics Boot Camp, lecture 12. We're going to be talking about nonparametric statistics.

Okay, in this video we're going to be talking about nonparametric tests. These include the sign test, which is useful for paired data and is closely related to McNemar's test; the signed rank test, which is also for paired data, along with Monte Carlo versions of that test; and then independent-group tests, in particular the Mann-Whitney test and Monte Carlo versions of it, and their relationship to permutation tests, which are very closely related.

Okay, so very briefly, we're going to be talking here about so-called non-parametric tests, but the very classical kind, and these are often called distribution free. That of course doesn't mean that they're assumption free. They do involve assumptions, for example sampling assumptions such as independent and identically distributed observations, but they require fewer assumptions than parametric methods.

They have a tendency to focus a little more on testing than on estimation, which is maybe a problem, but there are estimation techniques that follow from them. They also tend not to be very sensitive to outlying observations. And they're especially useful for data like ranks, when the data actually come in the form of ranks, because they often involve transforming data to ranks. They're not uniformly wonderful, because they do throw out some information, which is their problem; because of that, they may wind up being less powerful than their parametric counterparts when the parametric assumptions are true, of course. But for larger sample sizes, they become about as efficient as their parametric counterparts, so they are pretty good tests.

So here's some data from this wonderful book by Rice called Mathematical Statistics and Data Analysis. I highly recommend this book; I used the second edition, I don't know if that's what they're still on. Anyway, this data concerned 25 fish. They were taking mercury levels from the fish, in parts per million, at two locations on each fish. So for fish one, they had measurements of 0.32 and 0.39, and so on.

Okay, and then just for reference, I've added the difference between those two measurements, subtracting the SR measurement from the P measurement each time. And then I show the ranks and the signed ranks; I'll explain those in a minute, but this is the data we're going to use as motivating data. Here we want to test whether the mercury levels taken at location P differ from those at location SR. We're taking two measurements per fish, trying to control for the fish-to-fish variability by taking both measurements on each fish, so that each fish serves as its own control.

But we're going to be talking about non-parametric tests, so what we're interested in, or concerned about, is the validity of the assumptions that go into typical tests, such as normality.

Okay, so let's let Di be the difference for each fish; in this case, I'm subtracting P minus SR. Then let's let theta be the population median of the differences Di. And we want to test whether that median is zero versus the median being non-zero. Now, by the definition of the median, theta equals 0 if and only if the probability of a difference being greater than 0 is exactly 0.5. That's the definition of a median, and the same holds for being less than zero.

So, as a test statistic, why don't we just count the number of times the difference is bigger than zero? If that count is excessively large or excessively small, then that disputes the idea that the median is exactly zero. Just as an example, if all the differences were positive, then zero couldn't be the median, because you wouldn't expect a large sample where every single measurement was larger than the population median.

Okay.

Anyway, if we're assuming the fish pairs are IID, then each difference is a coin flip, with a 50% chance of being above the median and a 50% chance of being below the median. So X, the number of times the difference is larger than 0, is binomial with n = 25 and success probability p, where under the null hypothesis p = 0.5. So our sign test just tests whether p is 0.5 using this count X, and you can do an exact binomial test, like we've talked about before.

Okay, so let's go back to our example. Theta is the median of the differences P minus SR; our null hypothesis is that theta equals zero, versus the alternative that it's different from zero. The number of instances where the difference is bigger than zero, if you go back to the table, is 15 out of 25 fish. The binomial test is then the question of whether 15 positive instances out of 25 is large. Our expected number out of 25 is 12.5, so we don't know offhand whether 15 is excessively large. Well, in this case, it turns out that, no, 15 is not excessively large: it has about a 42% chance of happening under the null hypothesis, for a two-sided test.
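That exact two-sided p-value can be computed directly from binomial probabilities; in R this would be binom.test(15, 25). As a minimal sketch, here is a Python version using only the standard library, assuming the usual symmetric two-sided convention for a fair-coin null:

```python
from math import comb

def sign_test_p(x, n):
    """Exact two-sided sign test p-value for x of n differences above the
    hypothesized median, under X ~ Binomial(n, 0.5).
    By symmetry of Binomial(n, 0.5), this is 2 * P(X >= x) when x > n / 2."""
    upper = sum(comb(n, k) for k in range(x, n + 1)) / 2 ** n
    return min(1.0, 2 * upper)

# 15 of the 25 fish had a positive difference
p = sign_test_p(15, 25)
print(round(p, 4))  # 0.4244, the "about 42%" chance under the null
```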

And again, we could have used a large-sample test. I don't know why, because we can do an exact test in this case, but we could have. In R that's prop.test, and you get a chi-squared statistic of 0.64 and a p-value that's quite similar, 0.42.
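The 0.64 statistic can be reproduced by hand. R's prop.test defaults to a one-sample proportion test with Yates' continuity correction, which is the form assumed in this Python sketch (the p-value comes from the chi-squared(1) tail via the standard normal CDF):

```python
from math import erf, sqrt

def prop_chisq(x, n, p0=0.5):
    """One-sample proportion test with Yates' continuity correction,
    intended to match R's prop.test(x, n) defaults."""
    chisq = (abs(x - n * p0) - 0.5) ** 2 / (n * p0 * (1 - p0))
    z = sqrt(chisq)
    # chi-squared(1) tail probability equals 2 * (1 - Phi(z))
    p = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
    return chisq, p

chisq, p = prop_chisq(15, 25)
print(round(chisq, 2), round(p, 2))  # 0.64 0.42
```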

At any rate, the idea is simply this: if you want to test whether levels at one location are higher than at the other, you just count the number of matched pairs where it's higher, and ask whether that count is excessively large relative to a coin flip for each pair. So that's the sign test.

And you might be wondering what's wrong with this, so let's discuss some potential problems with these tests. We aren't using very many assumptions: we're using that the fish pairs are IID, and that's about it. That's the only assumption we're using. But let's talk about what some of the problems may be.

Okay, so the biggest problem is of course that the magnitude of the differences is discarded, so the test is potentially not as powerful as you would hope. It would be different if, say, only half of the differences were positive but all the positive ones were much larger differences and all the negative ones were really small differences; that's a different situation from the differences being spread equally above and below 0, yet the sign test can't tell them apart.

But then the other thing I would mention is that there's nothing specific about 0. You could test any median, theta equal to theta 0, by counting the number of times the difference is bigger than that specific value. What's interesting about that, and we won't talk about it at length, is that since you can do this for any value of theta, you can find the values of theta for which you fail to reject and the values of theta for which you reject, say by a grid search. And if you can do that, you can invert the test and get a confidence interval for the median. So this is kind of an interesting, very highly non-parametric way to get a confidence interval for the median of a set of data.
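The inversion can be sketched as follows. The differences below are made-up placeholder values, since the full fish table isn't reproduced here: for each candidate theta on a grid, we run the exact sign test of median = theta and keep the values we fail to reject at the 5% level; the kept values form the confidence interval.

```python
from math import comb

def sign_test_p(x, n):
    """Exact two-sided sign test p-value for x of n differences above the
    hypothesized median (symmetric Binomial(n, 0.5) null)."""
    tail = sum(comb(n, k) for k in range(max(x, n - x), n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical paired differences (placeholders, not the lecture's fish data)
d = [-0.07, 0.12, 0.05, -0.02, 0.09, 0.03, -0.04, 0.11, 0.01, 0.06]

# Grid of candidate medians; keep each theta the sign test fails to reject
grid = [i / 1000 for i in range(-200, 201)]
kept = [t for t in grid
        if sign_test_p(sum(di > t for di in d), len(d)) > 0.05]
print(min(kept), max(kept))  # endpoints of the interval for the median
```

Because the sign test statistic only changes as theta crosses an observed difference, the resulting interval endpoints always sit at (or just past) order statistics of the data.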