So traditional brain mapping approaches treat the brain as the outcome,
and the task or behavioral variables as predictors at each voxel.
So they test at each voxel whether there's some non-zero effect, but
they can't actually tell what that effect is.
It's just a hypothesis test.
And the brain is always the outcome.
Well, this means that every voxel is a separate,
independent outcome in a mass univariate analysis.
There's no model of, and no accounting for, interactions between,
or even joint additive effects of,
different brain regions in representing or predicting outcomes.
So this is a major limitation.
And often what we'd like to do is use multiple voxels to explain an outcome,
because we know that regions work together.
So that's what multi-voxel pattern analysis does.
It assesses whether a pattern across voxels predicts a behavior or outcome.
So, MVPA is multivariate in brain space, unlike the standard brain mapping analysis.
And it's often but not always univariate in the task or
behavior outcome space, whatever you're trying to predict.
So there are some exceptions, but most analyses are univariate in outcome space.
We're trying to predict one outcome.
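To make that concrete, here's a minimal sketch of a single-outcome MVPA decoder, assuming scikit-learn and purely synthetic data as a stand-in for a real trials-by-voxels matrix; all names and numbers here are illustrative, not the course's actual analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 500
y = rng.integers(0, 2, n_trials)            # one binary outcome per trial
X = rng.normal(size=(n_trials, n_voxels))   # trials x voxels activation matrix
X[y == 1, :50] += 0.5                       # weak distributed signal in 50 voxels

# Multivariate in brain space (all voxels enter jointly),
# univariate in outcome space (a single label per trial).
clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```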
And this kind of analysis, I would argue,
is better suited for mapping brain structure to function.
Why? Because we can develop
models of interactions among brain regions that characterize or
predict a category of mental events.
It turns out that I don't know of any single voxel in the brain that's really
sensitive and specific to any particular category of mental events.
So that approach is really limited.
And this one is better suited to this mapping procedure.
So there are two main types of multivoxel pattern analysis.
And the first type we'll call local multivariate analysis.
So this means identification of patterns in one or more local regions,
like a region of interest.
Then we look at that region and ask: does V1 predict the outcome?
Does V2 predict the outcome? This also includes
so-called searchlight analyses, which take a local region as a moving window
and run the predictive analysis everywhere in the brain.
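Here's a hedged sketch of that searchlight idea on a synthetic 3-D volume, assuming only NumPy and scikit-learn; in practice you'd use masked fMRI data (and perhaps a library such as nilearn), but this toy version shows the moving-window logic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
shape = (8, 8, 8)                         # toy brain grid
n_trials = 60
y = rng.integers(0, 2, n_trials)          # outcome to decode
vol = rng.normal(size=(n_trials,) + shape)
vol[y == 1, 2:5, 2:5, 2:5] += 0.7         # signal in one local cluster

radius = 1
acc_map = np.zeros(shape)
coords = np.array(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"))
for center in np.ndindex(shape):
    # all voxels within the searchlight sphere around this center
    dist2 = sum((coords[d] - center[d]) ** 2 for d in range(3))
    mask = dist2 <= radius ** 2
    X = vol[:, mask]                      # trials x (voxels in the sphere)
    acc_map[center] = cross_val_score(
        LogisticRegression(max_iter=1000), X, y, cv=3).mean()

print("peak searchlight accuracy:", acc_map.max().round(2))
```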
The second type is what I'll call an integrated model.
So in this case, we're going to use voxels across the whole brain.
We can use any available information that I think is neuroscientifically valid,
and sometimes even multimodal data,
combining functional data with structural data (fMRI, EEG, PET, or
other kinds of data) into a single predictive model.
So this is really suited for asking,
how well can I predict this outcome using the available brain information?
And then I can refine the model, simplify it, and
ultimately work with it as a basis for a representation of the underlying process.
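As an illustration, here's a minimal sketch of such an integrated model, assuming scikit-learn; the fMRI and structural feature blocks below are synthetic stand-ins, and the feature counts and outcome are arbitrary choices for the example.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n_subj = 100
fmri = rng.normal(size=(n_subj, 2000))    # whole-brain functional features
struct = rng.normal(size=(n_subj, 300))   # e.g., gray-matter volumes
X = np.hstack([fmri, struct])             # one integrated feature matrix

# synthetic outcome that depends weakly on both modalities
y = fmri[:, :20].mean(axis=1) + 0.3 * struct[:, 0] \
    + rng.normal(scale=0.5, size=n_subj)

# penalized regression keeps the high-dimensional model tractable
model = make_pipeline(StandardScaler(),
                      RidgeCV(alphas=np.logspace(-2, 4, 20)))
r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
print(f"cross-validated R^2: {r2:.2f}")
```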
So why do we run MVPA, apart from the reasons I've already given you?
One reason is to identify patterns that are optimized to be predictive of
external outcomes.
That could be experience, performance, clinical status, et cetera.
Even if I don't know exactly what fMRI activity means in terms of
neural activation or inhibition, or exactly where the signal is coming from,
if I can predict an external outcome,
then my brain results have their own internal logic and validity.
It becomes its own viable level of analysis for predicting performance.
A second reason is to better characterize the mappings between mind and
brain, for the reasons I said before.
Because really I don't know of any single voxels that really encode categories of
mental events, and can really be used as indicators for them.
So MVPA is especially useful for capturing structure at finer
spatial scales than a typical voxel-wise analysis; we'll look at that.
And it's also really useful for capturing brain representations that are distributed
across voxels, or regions, or even networks that span multiple brain regions.
So let's look at the issue of spatial scale.
And as I said in the previous course, there's information at multiple spatial
scales going from large-scale networks, which have differences across different
types of psychological categories, to functional maps and regions,
at finer spatial scale, to functional columns, a still finer spatial scale.
And finally, cell assemblies at virtually the single neuron resolution.
So there's information about what kind of tasks and
outcome processes are happening at all of these scales.
We looked at some examples of sensory maps and head orientation maps,
all within a millimeter or two of cortex, among other examples.
Ocular dominance columns or orientation columns, again, have quite fine-grained
spatial structure, often smaller than the voxels themselves that we're assessing.
So just another note on why topographical maps form in the first place.
There are some properties of the brain's physical structure
that afford the development of
these topographically clustered regions of neurons.
One property is that neurons have short-range excitatory and
inhibitory connections.
What I've drawn here is two neurons with an excitatory axon collateral that
connects them, which is a pretty common structure in the brain.
That's going to produce similar representations and receptive fields for
neurons in similar locations in space.
And the second property is experience-dependent plasticity.
Neurons that fire together wire together, so to speak.
This is an example of a typical sort of
center-surround organization, where we have excitation
in a local region around a neuron and then inhibition farther out.
If you put those properties together with learning,
the fire-together-wire-together property, then you can
end up with networks that develop a spontaneous topographical organization.
One example of this is the Kohonen network, and what you see here is
an illustration of this model after training.
It starts with a random set of response properties across different units,
and it self-organizes into a smooth topographic map.
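For the curious, here's a bare-bones Kohonen self-organizing map in NumPy. This is a sketch under the assumptions just described (short-range cooperative updates plus Hebbian-style learning), not the exact model from the illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
grid = 10                                  # 10 x 10 sheet of units
W = rng.random((grid, grid, 2))            # random initial preferred stimuli
gy, gx = np.mgrid[0:grid, 0:grid]          # unit coordinates on the sheet

for t in range(3000):
    x = rng.random(2)                      # random 2-D input stimulus
    # winner: the unit whose weights best match the input
    d = ((W - x) ** 2).sum(axis=2)
    wy, wx = np.unravel_index(d.argmin(), d.shape)
    # neighborhood size and learning rate shrink over training
    sigma = 3.0 * np.exp(-t / 1000)
    lr = 0.3 * np.exp(-t / 1500)
    h = np.exp(-((gy - wy) ** 2 + (gx - wx) ** 2) / (2 * sigma ** 2))
    W += lr * h[:, :, None] * (x - W)      # pull winner and neighbors toward x

# neighboring units now prefer similar inputs: a topographic map has formed
print("mean distance between adjacent units' preferences:",
      np.linalg.norm(np.diff(W, axis=0), axis=2).mean().round(3))
```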
So, in the MVPA literature, why can it work,
why can it actually detect these finer spatial scale organizational structures?
And there's an idea called hyperacuity.
A typical voxel at 3T is about three by three by three millimeters.
That's pretty big, and each of those voxels might sample, on average,
say five and a half million neurons,
many of which do different things.
So the distribution of neuron types, if you will, within a voxel might be mixed,
but across voxels the precise mix varies.
So this simulation here illustrates this.
This is a homogeneous Poisson process.
It represents a random spatial distribution of neurons of two types,
coding for two different properties.
So if I could measure every neuron, it would be really easy to distinguish
when the organism is engaging in event type one versus two.
Just see which neurons are active.
But of course,
we sample them in voxel space, on this very coarse grid,
so we're getting an average.
And the idea here is this:
even if the neurons are distributed at random,
some voxels will by chance contain more neurons that respond to event one
than to event two, and other voxels the opposite.
And so, across the pattern of voxels, I can actually detect some differences.
Whether we can actually do this in any given case is an
empirical question.
But real neural structures are often organized like this.
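Here's a small simulation of that hyperacuity idea, assuming only NumPy: scatter two neuron types at random (a homogeneous Poisson process, as above) and count them within coarse voxels; by chance, each voxel's mix comes out a little different.

```python
import numpy as np

rng = np.random.default_rng(4)
n_neurons = 5000
pos = rng.random((n_neurons, 2))           # random positions in a unit square
kind = rng.integers(0, 2, n_neurons)       # type 1 vs type 2, intermixed

n_vox = 5                                  # coarse 5 x 5 voxel grid
ix = np.minimum((pos * n_vox).astype(int), n_vox - 1)
counts = np.zeros((2, n_vox, n_vox))
for k in (0, 1):
    np.add.at(counts[k], (ix[kind == k, 0], ix[kind == k, 1]), 1)

# fraction of type-1 neurons per voxel: not exactly 0.5 anywhere,
# so the voxel pattern carries information about the finer-scale mix
frac = counts[1] / counts.sum(axis=0)
print(np.round(frac, 2))
```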
This is from Daniel Salzman's lab and Joe Paton.
They're looking at electrophysiological recordings in the amygdala in monkeys,
and what they see is intermixed neurons that respond to positive and negative events.
The distribution doesn't seem to be clearly separable,
so it looks a lot like the homogeneous Poisson process that I
diagrammed here.
So here's an interesting property.
Even if the neurons are randomly intermixed, MVPA might still be able to
identify patterns that are distinctly and reliably associated with each type.
So here are two intermixed types of neurons.
And here's the point.
These neurons are active, and
what the BOLD signal is really picking up on is the local field potential.
So now we've smoothed these neurons into a local field potential representation,
here for event type one and event type two.
So now you see that topography is still a little bit different.
There's a lot of information there.
And then when we sample that in our discrete voxels, at that low resolution,
we can still see that there are different patterns of voxels across those two event
types, even though those two event types were randomly intermixed.
In fact, the voxel patterns are largely uncorrelated.
In this example, r = 0.15,
so not very highly correlated.
And it's possible to classify the type of event, even though the intermixing
occurs at a much finer spatial scale than the voxels themselves.
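And here's a sketch of that smoothing-and-sampling step, assuming NumPy and SciPy: each neuron type's activity is blurred into an LFP-like field, then block-averaged into coarse voxels. The two resulting voxel patterns typically come out only weakly correlated (the exact r will vary with the random seed).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(5)
grid, n_vox = 100, 5                       # fine neural grid, coarse voxel grid
field = np.zeros((2, grid, grid))
pos = rng.integers(0, grid, (4000, 2))     # randomly intermixed neurons
kind = rng.integers(0, 2, 4000)
for k in (0, 1):
    np.add.at(field[k], (pos[kind == k, 0], pos[kind == k, 1]), 1.0)

# smooth each event type's activity into an LFP-like field
lfp = gaussian_filter(field, sigma=(0, 4, 4))
# block-average the fine field into a 5 x 5 voxel grid
vox = lfp.reshape(2, n_vox, grid // n_vox, n_vox, grid // n_vox).mean(axis=(2, 4))

r = np.corrcoef(vox[0].ravel(), vox[1].ravel())[0, 1]
print(f"correlation between the two voxel patterns: r = {r:.2f}")
```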