Welcome back. We're going to continue our discussion of meta-analysis and talk about how we can use meta-analysis to improve neuroimaging results and our inferences. So how can meta-analysis help? First, to recap, we talked about why meta-analysis is important in solving, or helping to eliminate, the crisis of replicability and interpretability, and in helping us build a cumulative science. Then we talked about some problems with brain mapping approaches: problems of definition and replication, and the need for exact replication. We talked about problems of diagnostic value, including the question of what the brain implies about categories of mental events. That breaks up into two parts. One is the probability of observing the brain marker given the psychological event, which is related to sensitivity, and especially to what we call forward inference; the problem is that we don't know how big the effects of the manipulations are, or what the sensitivity actually is. The other is the problem of specificity, which is related especially to reverse inference; the problem is that we don't know whether the observed patterns are specific enough to be useful as biomarkers, or to interpret mental events. This relates especially to the probability of observing a brain marker in the absence of the psychological category of interest.

So we can't replicate results across laboratories without establishing precise a priori hypotheses. But which hypotheses should we choose? One of the first studies I did was a study looking at the anterior cingulate cortex. We went to the literature, which was very small at the time, and said, let's come up with an anterior cingulate region of interest. But even then, there were many possible coordinates to choose from to make a sphere around. So what we're seeing here is that I can make a region-of-interest sphere around any number of different coordinates in the cingulate, which gives me huge flexibility. I have no idea which one I should choose, because the coordinates are all over the place. So this is a real problem: which do I choose? Meta-analysis can help us find a consensus solution, which is the average across them.

Prior findings reflect a mix of true and false positives. Not all of these coordinates are true positives, and not all of them are in the locations where they really should be; there's noise in the spatial process as well. Testing all of them creates a multiple comparisons problem, which costs me power and/or produces more false positives. It also inflates effect sizes due to voxel selection bias. But if I choose a priori regions or patterns from meta-analysis, that can solve both of these problems.

So let's look at the power problem, which underscores how important it is to choose a priori ROIs. Power is the proportion of truly activated voxels in which you expect to find an effect. Here are some power curves plotted as a function of the true effect size, in correlation units, for different sample sizes. These curves are for p < 0.001 uncorrected, which is the most common uncorrected threshold used in published studies. With an effect size of r = 0.5, which is a strong correlation, we need 50 or more subjects to achieve 80% power even at this uncorrected threshold. That's a lot. With an effect size of r = 0.3, which is very reasonable for many effects in the social sciences, I only have 40% power with 100 subjects. And in a typical scenario, with an effect size of 0.3 and only 30 subjects, power is only 5%. This is even without correcting for multiple comparisons; multiple comparisons correction makes it much worse.
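To give a feel for where numbers like these come from, here is a minimal sketch that approximates power for a simple correlation test at a two-tailed p < 0.001 threshold using the Fisher z transformation. This is an assumed calculation for illustration, not the exact method behind the lecture's curves (which are voxel-wise and may use a different power model), so the figures only approximately match the ones quoted above.

```python
# Approximate power for detecting a true correlation r with n subjects at an
# uncorrected two-tailed threshold of p < 0.001, using the Fisher z
# transformation (a standard large-sample approximation).
import numpy as np
from scipy import stats

def correlation_power(r, n, alpha=0.001):
    z_effect = np.arctanh(r)                 # Fisher z of the true correlation
    se = 1.0 / np.sqrt(n - 3)                # standard error of Fisher z
    z_crit = stats.norm.ppf(1 - alpha / 2)   # two-tailed critical value
    # Probability that the observed effect exceeds the critical value
    return 1 - stats.norm.cdf(z_crit - z_effect / se)

for r, n in [(0.5, 50), (0.3, 100), (0.3, 30)]:
    # Prints roughly 0.68, 0.40, and 0.05 under this approximation
    print(f"r = {r}, n = {n}: power ~ {correlation_power(r, n):.2f}")
```

With this simple approximation the r = 0.3 figures (about 40% and 5%) come out as quoted, while the r = 0.5, n = 50 case lands a bit below 80%; the exact value depends on the power model used.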
So power is abysmally low. But if we can come up with an a priori region, or pattern of interest, we can reduce the multiple comparisons problem or eliminate it entirely.

Here's a practical example from a study we published recently, a study of threat effects on working memory. In this study, judges evaluate each participant: the participants come in and give a speech in front of the judges, which is very stressful. Then they go into the scanner and do working memory tasks like this N-back task. So what happens? Social evaluative threat impairs people's working memory in this study. What you see in the dark bars is that accuracy is lower, and reaction times are higher, when people are in the threat condition.

Now let's look at the brain results. Here's N-back versus rest in our study: yellow is increases with working memory, blue is decreases. The maps look really nice, and the first useful thing we can do is compare them to the working memory map from Neurosynth, which summarizes previous studies. As you can see, it looks very similar, which gives us a lot of confidence that we have good results in this study.

Now we can take a pattern of interest. This generalizes the region-of-interest idea: for a region of interest, you simply average the signal across the voxels in the ROI. Here, we're going to take a pattern response instead. We multiply each individual subject's map by the Neurosynth working memory template, and we get one number that reflects the strength of activation in that particular a priori meta-analytic pattern. So this is a way of taking many regions where we expect to find results and doing one test on the average. It gives us a scalar measure of activation for each subject in the pattern of interest. It's similar to the correlation between the two patterns, but it also includes magnitude information, and it generalizes the region-of-interest approach.

When we do that, we can first look at activity in the working memory pattern. Do we get significant activation in our a priori pattern? For the control subjects we see that this is the case, and it looks very strong; in fact, 100% of the subjects show activation in the correct direction, which is great. Now we can compare that to the social threat, or stress, subjects, and this gives us an unbiased comparison. Here's what happens with social threat: there's reduced activation in this working memory-related pattern. So now I've taken whole-brain images and boiled them down to one test based on an a priori pattern. To recap: first, this serves as a positive control, since we have activation in this known frontoparietal network in 100% of the participants; and second, we can test the hypothesis about the effects of stress without a multiple comparisons problem, because we know where to look.
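Here is a minimal sketch of that pattern-of-interest computation, assuming the subject contrast maps have already been resampled to the same space as the Neurosynth template; the file names and the use of nibabel are illustrative assumptions, not the study's actual code.

```python
# Pattern-of-interest analysis: project each subject's contrast map onto an
# a priori meta-analytic template (here, a Neurosynth working memory map) to
# get one scalar "pattern response" per subject, then run a single test.
# File names are hypothetical placeholders.
from glob import glob
import numpy as np
import nibabel as nib
from scipy import stats

template = nib.load("neurosynth_working_memory.nii.gz").get_fdata().ravel()
in_pattern = np.isfinite(template) & (template != 0)   # voxels in the template

scores = []
for subject_file in sorted(glob("sub-*_nback_vs_rest.nii.gz")):
    subj = nib.load(subject_file).get_fdata().ravel()
    # Dot product with the template weights: like a spatial correlation,
    # but it also keeps magnitude information.
    scores.append(np.dot(subj[in_pattern], template[in_pattern]))

scores = np.array(scores)
t, p = stats.ttest_1samp(scores, 0)   # one test instead of thousands of voxels
print(f"Mean pattern response = {scores.mean():.2f}, t = {t:.2f}, p = {p:.4f}")
```

The same group comparison (control versus threat) then reduces to a two-sample test on these scalar pattern responses.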
So now let's look at the diagnostic value issue. Some of the big questions in our field, as I said before, boil down to looking at a brain map like this and making inferences: what task or state is this, and what outcome is it related to? What do these findings imply about the organization of mental functions? For psychology and behavioral health, these are difficult questions to answer, and we can really only answer them well by looking across many studies and using meta-analysis.

Here's the insula, for example. As we have seen before, many people have claimed that the insula is a marker for disgust. Here's one example, a paper describing a specific neural substrate for facial expressions of disgust, and that substrate is the insula. But of course, other studies have looked at this and found that the insula is not specifically involved in disgust processing. So we can use meta-analysis to get a consensus. What I'd like to know is the reverse inference: looking at that brain map, did my study induce disgust? The probability of finding insula activity given that I'm disgusted is the sensitivity. But now I'd like to make the reverse inference and evaluate the positive predictive value: what's the probability that the participants were experiencing disgust, given activity in this brain map? This requires knowing what the sensitivity and the specificity are. What I'd like to know is whether the insula is only activated by disgust. If it is, that is, if it's not activated by fear, attention, other emotions, and so on, then it's likely to have a high positive predictive value. The positive predictive value is high for disgust only if the insula is sensitive to disgust and not activated by other things as well. And this can be uniquely addressed by meta-analysis.

Here's an early meta-analysis of the insula, and what we see are activation points for a variety of emotions: fear and anxiety, aggression, guilt, happiness, sadness, and yes, disgust. But it's activated by all of these things in about equal proportions. So observing activity in the insula doesn't tell me anything, by itself, about which of these emotions I might be feeling, and if you look more broadly, it doesn't even tell me whether I'm feeling an emotion at all.

Now let's go back to the question of what task or state this is. I see the anterior cingulate, so maybe that's pain. Pain does activate the anterior cingulate, and there are nociceptive neurons in the anterior cingulate. But applying the same logic, we also find using meta-analysis that anterior cingulate activity, and anterior insula activity as well, are not specific for pain. What you're seeing in this plot is a summary of meta-analyses of different domains, each with at least 40 studies or so per task. They range from emotion, to cognitive inhibition and response inhibition, to long-term memory encoding and retrieval, pain, shifting attention among objects and attributes, and working memory, the basic maintenance of information in memory. As you can see, all of these types of studies activate the anterior cingulate and the anterior insula to a large degree.

However, we can also look for areas that are more specific. Here we're looking at the posterior insula, which is where the nociceptive afferents come in, and this is an area that should be pain specific. And indeed it is pain specific here: the base rate of activation for the other task domains is very low, while activation is very high for pain. So this allows us, across the set of studies, to actually calculate the positive predictive value: if I see S2 activity, what's the likelihood that I'm experiencing pain versus these other defined alternatives? Doing the calculation yields a value close to 0.9. So we say that, relative to these categories, if I get activation in S2 and the posterior insula, the chances are close to 90% that this is a pain task. And in fact, in this case, it is.
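As a minimal sketch of that reverse-inference calculation, here is the Bayesian arithmetic behind a positive predictive value. The activation rates and equal base rates below are made-up placeholders chosen only to illustrate the logic; the real meta-analytic values are what yield the figure of roughly 0.9 quoted above.

```python
# Reverse inference via Bayes' rule: positive predictive value of posterior
# insula / S2 activation for pain, relative to a defined set of alternative
# task categories. Activation rates are illustrative placeholders (high
# sensitivity for pain, low base rates elsewhere), not actual values.
activation_rate = {            # P(region activated | task category)
    "pain": 0.70,
    "emotion": 0.015,
    "response inhibition": 0.015,
    "long-term memory": 0.015,
    "attention shifting": 0.015,
    "working memory": 0.015,
}
prior = {task: 1.0 / len(activation_rate) for task in activation_rate}  # equal base rates

# P(activation), marginalized over all task categories
p_activation = sum(activation_rate[t] * prior[t] for t in activation_rate)

# P(pain | activation) = P(activation | pain) * P(pain) / P(activation)
ppv_pain = activation_rate["pain"] * prior["pain"] / p_activation
print(f"P(pain | activation) = {ppv_pain:.2f}")   # ~0.90 with these placeholder rates
```

The key point is that the PPV is high only when the region is both sensitive to pain and rarely activated by the alternative categories, which is exactly what the meta-analysis lets us check.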
Finally, we can look at the study map itself and apply the terms from Neurosynth to it. Just like the pattern-of-interest analysis, we take the dot product of the map that I'm feeding in, in this case my study map, with the maps for the top terms and topics in Neurosynth. Here the top terms are painful, somatosensory, motor, tapping, pain-related, head, sensory, articulatory movements, unpleasantness, and others. That can help constrain the space of my inferences. And the truth here is heat pain versus rest, as I said, so it works out. I showed you an example earlier where this kind of inference led to the wrong conclusion, so we have to be very careful about how we use this information, but it is information nonetheless.

That's the end of the module. I hope you enjoyed hearing about meta-analysis.