This course covers the analysis of Functional Magnetic Resonance Imaging (fMRI) data. It is a continuation of the course “Principles of fMRI, Part 1”.


A course from Johns Hopkins University

Principles of fMRI 2



From this lesson

Week 3

This week we will focus on brain connectivity.

- Martin Lindquist, PhD, MSc, Professor, Biostatistics, Bloomberg School of Public Health | Johns Hopkins University

- Tor Wager, PhD, Department of Psychology and Neuroscience, The Institute of Cognitive Science | University of Colorado at Boulder

So, in many situations, we would like to create networks consisting of large numbers of non-overlapping brain regions.

So network analysis tries to characterize these networks using a small number of

meaningful summary measures.

You might take a big network and try to summarize it using a few summary measures

that we can compare across subjects or groups.

The hope is that comparing these network topologies between groups of subjects might reveal connectivity abnormalities related to neurological or psychiatric disorders.

Recall that networks can be represented using graphs, which are mathematical

structures used to model pair-wise relationships between variables.

They consist of a set of nodes, or vertices, V, and the corresponding links,

or edges, E, that connect pairs of vertices.

So a graph G = (V, E), the collection of nodes and links, may be defined as either undirected or directed with respect to how the edges connect one vertex to another.

In addition, the edges may be either binary, just 0 or 1, or

weighted, depending on the strength of the connection.

It's generally beneficial to represent a brain network using an n × n matrix, where n is the number of nodes.

Here the graph nodes are represented by columns and rows of the matrix.

And a link between two nodes, i and j, is represented by matrix element (i,j).

Here's an example of a network.

So this is a bi-directional network between three different nodes, A,

B, and C.

We can write the connection matrix as a 3 × 3 matrix, where pi_AB represents the strength of the relationship between A and B. This value appears in both the AB and BA elements because the link is bi-directional. Similarly, pi_BC is the strength of the link between B and C, and because it too is bi-directional, it appears in both the BC and CB elements.

An adjacency matrix is just a binary form of this, which takes values 1 if

there's a connection between two nodes, and 0 otherwise.

And here we have four 1s: one corresponding to BA, another to AB, one to CB, and one to BC.
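As a small sketch, the connection and adjacency matrices for this three-node example can be built as follows (the strengths pi_ab and pi_bc below are hypothetical values chosen purely for illustration):

```python
# Symmetric (bi-directional) connection matrix for nodes A, B, C.
# pi_ab and pi_bc are hypothetical illustration values.
pi_ab, pi_bc = 0.8, 0.5
conn = [[0.0,   pi_ab, 0.0],
        [pi_ab, 0.0,   pi_bc],
        [0.0,   pi_bc, 0.0]]

# Adjacency matrix: 1 wherever a connection exists, 0 otherwise.
adj = [[1 if w != 0 else 0 for w in row] for row in conn]
# adj contains exactly four 1s: the AB, BA, BC, and CB elements.
```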

So before doing network analysis, we have to construct the network and the basic

steps of network construction include first, defining the appropriate nodes.

Then, performing network estimation, that is, estimating the connection matrix between the different nodes; this can use correlations, partial correlations, and so on.

Thereafter one thresholds the resulting connection matrix

to get an adjacency matrix.

Once we have the adjacency matrix, we can create the networks, and

we can analyze it using network analysis.
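A minimal sketch of the thresholding step, assuming we already have an estimated connection matrix (the correlation values and the threshold tau below are hypothetical, and the function name is mine):

```python
def threshold(conn, tau):
    """Binarize a connection matrix: keep an edge where |strength| > tau."""
    n = len(conn)
    return [[1 if i != j and abs(conn[i][j]) > tau else 0
             for j in range(n)]
            for i in range(n)]

# Hypothetical 3-node correlation matrix, thresholded at tau = 0.3.
corr = [[1.0, 0.6, 0.1],
        [0.6, 1.0, 0.4],
        [0.1, 0.4, 1.0]]
adj = threshold(corr, 0.3)
```

The resulting adjacency matrix can then be fed into any of the network measures discussed below.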

So after constructing a network, we're going to want to quantify parameters

associated with network topology and efficiency.

So any given network measure may characterize one or

more aspects of global and local brain connectivity.

These include aspects of functional integration and segregation and

the ability to quantify the importance of individual brain regions and

test the resilience of the network to insults.

So we might want to look at whether or not a certain brain region is a hub of activation, and so on.

Here's some basic notation that we're going to be needing.

So, let's let a_ij represent whether there's a connection between nodes i and j: a_ij equals 0 if there's no connection, and a_ij equals 1 if there is a connection.

So the degree of a node is equal to the number of links connected with that node.

So let's look at this little cartoon image to the side here.

This is a little network.

Here, A has degree 3 because it's connected to three other nodes.

D also has degree 3 because it's connected to three other nodes, A, C, and E.

But B and E, for example,

only have degree 1 because they're only connected to a single node.

C, finally, is connected to two nodes so it has degree 2.
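The degrees in this cartoon example can be checked with a short sketch (the edge list below encodes the network just described: A–B, A–C, A–D, C–D, and D–E):

```python
# Edges of the example network described above.
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("C", "D"), ("D", "E")]

# Build a neighbor set for each node.
neighbors = {n: set() for n in "ABCDE"}
for u, v in edges:
    neighbors[u].add(v)
    neighbors[v].add(u)

# Degree of a node = number of links connected to it.
degree = {n: len(nbrs) for n, nbrs in neighbors.items()}
# A and D have degree 3, C has degree 2, B and E have degree 1.
```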

The next piece of notation is the number of triangles around node i,

this is defined as follows.

So here, A has one triangle because you'll see that we can go from A to D to C and

then back to A, so in three steps, we can return to A.

So this is basically showing a single triangle A, D, and C.

The shortest path length between nodes i and j is denoted d_ij.

So for example, the shortest path length between A and

E would be 2 because we always have to go through D to get to E,

and that's the shortest way to get from there.
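Both quantities can be computed directly from the neighbor sets. Here is a sketch, using the same edge list for the example network (function names are mine):

```python
from collections import deque
from itertools import combinations

edges = [("A", "B"), ("A", "C"), ("A", "D"), ("C", "D"), ("D", "E")]
neighbors = {n: set() for n in "ABCDE"}
for u, v in edges:
    neighbors[u].add(v)
    neighbors[v].add(u)

def triangles(node):
    """Number of triangles around a node: neighbor pairs that are linked."""
    return sum(1 for a, b in combinations(neighbors[node], 2)
               if b in neighbors[a])

def shortest_path_length(src, dst):
    """d_ij via breadth-first search over the unweighted graph."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            return dist[u]
        for v in neighbors[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
```

Here `triangles("A")` gives 1 (the A–D–C triangle) and `shortest_path_length("A", "E")` gives 2, matching the lecture.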

So now we're going to use these pieces of notation to define

different network properties.

The first one we're going to talk about is the clustering coefficient of the network.

So the clustering coefficient of the network is defined as follows, and

it depends on the number of triangles.

So Ci, in this case, represents the fraction of a node's neighbors that

are also neighbors of each other.

Basically what Ci measures is the number of triangles that you have,

normalized by the total number of possible triangles.

This is a measure of functional segregation in the brain and

represents the ability for specialized processing to occur

within densely interconnected groups of brain regions.

Another network metric that we often use is the characteristic path length of the network, where Li is the average distance between node i and all other nodes.

The characteristic path length is then just the average of Li over all nodes i.

This is a measure of functional integration in the brain, so

this represents the ability to rapidly combine specialized information from

distributed brain regions.

Here, let's look at our example network again,

and look at some of these network metrics.

So here's the network, and here's the adjacency matrix.

And we see it's either 1 or 0.

1 is if there's a link between the two nodes, and 0 if there's no link.

Here's an example of the degree distribution. As we saw before, 40% of the nodes, two out of the five, have degree 1; that's B and E.

They only have a single neighbor.

20%, or one out of the five, has degree 2, and that would be C in this case.

And then finally, two out of the five have degree 3, and that's A and D in this example.

If we look at the clustering coefficient, remember that for each node we can calculate it as the number of triangles divided by the total number of possible triangles.

So A's contribution to the clustering coefficient is 0.33.

There's one triangle, the ADC triangle, but there's three possible triangles.

One is the ADC triangle, but another would be the ABC triangle,

which doesn't exist in this case.

Or alternatively, the ABD triangle, which also doesn't exist.

So only one out of the possible three triangles exists,

so A contributes 0.33 there.

B has no triangles, and so it contributes 0.

C has a single triangle and only one possible triangle,

so that contributes 1, etc.

So then we take the average of those and

we get the clustering coefficient C, which is equal to 0.33.

And so this represents how clustered the network is.

If we look at the path length, here we can see what's the path length between B and

A, well it's 1.

What's the path length between C and A?

It's 1.

D and A?

It's 1, and E and A, it's 2.

And then we can do this for each of the different unique combinations of nodes,

and we can take the average of this.

And the average path length is 1.6.

So this tells us a little bit about how quickly we can move within the network.
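Putting the pieces together, both worked results for the example network (C = 0.33 and an average path length of 1.6) can be reproduced in a short sketch (same edge list as before; function names are mine):

```python
from collections import deque
from itertools import combinations

# Example network from the lecture: edges A-B, A-C, A-D, C-D, D-E.
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("C", "D"), ("D", "E")]
nodes = "ABCDE"
nbrs = {n: set() for n in nodes}
for u, v in edges:
    nbrs[u].add(v)
    nbrs[v].add(u)

def clustering(n):
    """Fraction of a node's neighbor pairs that are themselves connected."""
    k = len(nbrs[n])
    if k < 2:
        return 0.0  # nodes with degree 0 or 1 contribute nothing
    links = sum(1 for a, b in combinations(nbrs[n], 2) if b in nbrs[a])
    return links / (k * (k - 1) / 2)

def dist(src, dst):
    """Shortest path length via breadth-first search."""
    d = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            return d[u]
        for v in nbrs[u]:
            if v not in d:
                d[v] = d[u] + 1
                queue.append(v)

# Average clustering coefficient over the 5 nodes, and the average
# shortest path length over the 10 unique node pairs.
C = sum(clustering(n) for n in nodes) / len(nodes)
L = sum(dist(a, b) for a, b in combinations(nodes, 2)) / 10
```

With this network, C comes out to about 0.33 and L to 1.6, matching the values worked out above.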

There's lots of different network types, and

they have different clustering coefficients and different path lengths.

Here we see the example of a dense network.

A fragmented network with little subnetworks split up.

An ordered network, which has a very regular, fine-grained structure.

We have sparse networks, connected networks, and even random networks.

One particular type of network that we often talk about is the small-world network.

And so the brain is thought to optimize information transfer by maximizing functional segregation and integration while minimizing wiring cost.

And small-worldness is used to describe a design that enables both distributed processing and regional specificity.

Conceptually, a small-world network is more clustered than a random network,

while still having approximately the same characteristic path length.
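One commonly used small-worldness index, which the lecture does not define, so this is an added assumption, compares a network's clustering and path length to those of a matched random network; values above 1 are taken as evidence of small-world structure:

```python
def small_worldness(C, L, C_rand, L_rand):
    """Humphries & Gurney sigma index: (C / C_rand) / (L / L_rand).

    C, L are the clustering coefficient and characteristic path length
    of the network; C_rand, L_rand are the same quantities computed on
    a matched random network. sigma > 1 suggests small-worldness.
    """
    return (C / C_rand) / (L / L_rand)

# Hypothetical values: a network that is much more clustered than a
# matched random network, with a similar path length.
sigma = small_worldness(0.33, 1.6, 0.11, 1.5)
```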

So once we compute different graph properties,

we often want to interpret these graph properties.

And two common means of interpreting graph structures are by

comparing to a benchmark network: we might test whether the graph properties differ from, say, a random network.

Or by comparison to other real brain networks; maybe we want to compare network properties between networks computed from patients with schizophrenia and healthy controls.

So network analysis has received much attention

in the neuroimaging literature in the past couple of years.

And it's really one of the hottest areas of research at the moment.

There exist a number of metrics for

describing various characteristics of a network.

We've only touched upon a couple of them,

just to give you a flavor for the type of analysis that one might do.

And the hope is that this can shed light on how certain disorders affect the brain.

Okay, so that's the end of this module.

Here we talked about network analysis.

And in the next couple of modules,

we're going to start talking about effective connectivity.

Okay, I'll see you then.

Bye.