One Weird Trick to Set P-Values in AFNI

For those new to AFNI (and even those who have been using AFNI for a while, like I have), there is a way to set a specific uncorrected voxel-wise p-value in the AFNI GUI. Simply hover over the top or bottom of the p-value threshold slider (i.e., around the area marked "Z-t" or "p=...q=...") and press mouse button #3 (e.g., clicking the mouse wheel on a standard PC mouse). This will bring up a window where you can enter a specific p-value, which automatically sets the slider to correspond to that value. No more guessing, no more tears.

UPDATE: It turns out that you can use any mouse button, not just mouse button #3, to get that same pop-up window.



Resting State Analysis, Part VII: The End

For the last part of the analysis, we are going to simply take all of the z-maps we have just created, and enter them into a second-level analysis across all subjects. To do this, first I suggest creating separate directories for the Autism and Control subjects, based on the phenotypic data provided by the KKI on the ABIDE website; once you have those, run uber_ttest.py to begin loading Z-maps from each subject into the corresponding category of Autism or Control.
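If you prefer the command line, uber_ttest.py is essentially a front end that ultimately writes a group-level command such as 3dttest++ for you. As a minimal sketch of what that underlying command might look like (the directory layout and z-map prefixes here are my assumptions, so adjust them to match your own data), the command is built as a string and echoed first as a dry run, so you can eyeball the file patterns before actually running it:

```shell
# Hypothetical two-sample t-test on the subject-level z-maps, assuming the
# z-maps have been sorted into Autism/ and Control/ directories as described.
# Echoed as a dry run; paste the printed command (or use eval "$cmd") once
# the paths look right.
cmd="3dttest++ -prefix Autism_vs_Control \
     -setA Autism/Corr_subj*_Z+tlrc.HEAD \
     -setB Control/Corr_subj*_Z+tlrc.HEAD"
echo "$cmd"
```
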

Most of this is easier to see than to describe, so I've made a final tutorial video to show how this is all done. Hope it helps!




P.S. I've collated all of the videos from the past week and a half into a playlist, which should make sorting through everything slightly easier:

Resting State Analysis, Parts V and VI: Creating Correlation Maps and Z-Maps

Note: Refer to Example #9 of afni_proc.py for AFNI's most recent version of resting-state analysis.


Now that we've laid down some of the theory behind resting-state analyses, and have seen that it is nothing more than a glorified functional connectivity analysis, which in turn is nothing more than a glorified bivariate correlation, which in turn is something that I just made up, the time has now come to create the correlation maps and z-maps which we have so craved. I believe I have talked about correlation maps and their subsequent transmogrification into z-maps, but in the interest of redundancy* and also in the interest of showcasing a few ties that I picked up at Goodwill, I've created two more videos to show each of the steps in turn.

First, use 3dmaskave to extract the timecourse information from your ROI placed in the vmPFC:

3dmaskave -quiet -mask vmPFC+tlrc errts.{$subj}+tlrc > timeCourse.txt

This information is then used by 3dfim+ to generate a correlation map:

3dfim+ -input errts.{$subj}+tlrc -polort 0 -ideal_file timeCourse.txt -out Correlation -bucket vmPFC_Corr



Once those correlation maps are generated, use 3dcalc to convert them into z-maps (the expression below is just Fisher's r-to-z transform, i.e., the inverse hyperbolic tangent of the correlation coefficient):

3dcalc -a vmPFC_Corr+tlrc -expr 'log((1+a)/(1-a))/2' -prefix Corr_subj{$subj}_Z





N.B. In each of the above examples, {$subj} is a placeholder for the subject ID you are currently processing; with a few tweaks, you should be able to put this all into a script that automates these processes for each subject.
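To sketch what such a script might look like, here is a hedged example in bash: run_subject is a hypothetical helper of my own that just prints the three commands above for a given subject ID, so you can verify the filenames before swapping the echoes for the bare AFNI commands and looping over your subject list.

```shell
#!/bin/bash
# Hypothetical per-subject wrapper: prints the 3dmaskave/3dfim+/3dcalc calls
# for one subject ID. Remove the echoes (and wrap this in a loop over your
# subject list) once the paths match your directory layout.
run_subject() {
  local subj=$1
  echo "3dmaskave -quiet -mask vmPFC+tlrc errts.${subj}+tlrc > timeCourse.txt"
  echo "3dfim+ -input errts.${subj}+tlrc -polort 0 -ideal_file timeCourse.txt -out Correlation -bucket vmPFC_Corr"
  echo "3dcalc -a vmPFC_Corr+tlrc -expr 'log((1+a)/(1-a))/2' -prefix Corr_subj${subj}_Z"
}

run_subject 0050783   # example KKI subject ID
```
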

N.N.B. (I think that's how you do it): The original script that I uploaded had a couple of bugs; one of the placeholders should have been changed to a generic $subj variable, and the -giant_move option has been added to the align_epi_anat.py part of the script, since the anatomical and functional images actually start out quite far away from each other. If you haven't used it yet, downloading the new script should take care of those issues. Also, another hidden change I made was to increase the motion limit from 0.2mm to 0.3mm; too many subjects were getting thrown out, and even though a more rigorous analysis would leave the motion threshold at a more conservative 0.2mm, I've raised it for now, for pedagogical purposes.

N.N.N.B. Find out what "N.B." means.


*Sponsored by the United States Department of Redundancy Department

Resting-State Analysis Part IV: Generating a Seed Region for Resting-State Analysis

Part of the resting-state pipeline includes warping each individual anatomical image into a standardized space, so that results can be compared across subjects. This also means that we can place a seed voxel or seed region into one location of the brain, and it will be roughly in the same spot for every subject in our analysis.

To do this, we will focus on one of the core components of the so-called "resting-state network," which is a reliable pattern of connectivity observed when subjects are at rest. Several studies have revealed patterns of correlative activity between the ventromedial prefrontal cortex (vmPFC) and retrosplenial cortex, which is the network we will be focusing on for this tutorial series; our aim will be to compare correlations between these nodes across persons with autism and a control group without autism.

First, however, we will need to create and place the seed region appropriately. We can place a seed voxel in the vmPFC using the XYZ coordinates 0, -50, -5 (similar to MNI coordinates of 0, +50, -5), and a correlation coefficient will be estimated for every other voxel in the brain. The resulting correlation maps will be created for each subject, and then collapsed across groups and statistically compared against each other.

The procedure for generating an ROI is exactly the same as what was done in a previous post about 3dUndump; we simply put the coordinates into a text file, and tell 3dUndump how large a sphere we want to create around those coordinates.


echo "0, -50, -5" > tmp.txt
3dUndump -prefix vmPFC -master errts.0050783+tlrc -srad 5 -xyz tmp.txt


This will then create a sphere with a 5mm radius around those coordinates, and information about that time series can then be extracted and correlated with other time series in every other voxel in the brain.



Resting State Analysis, Part III: Automating Your Analysis

Once you've set up a resting-state analysis script, either through uber_subject.py or following example #9 in the afni_proc.py documentation, I highly recommend that you set up some sort of higher-level script to automate running that script in each subject's directory. This is especially useful in the dataset we are using, since each individual analysis doesn't necessarily take that long, but we have a large number of subjects.

To begin, navigate to the downloaded KKI directory and use the following command to list each directory without trailing slashes:

ls -d */ | cut -f1 -d'/' > subjList.txt

The redirect at the end of that command places all of the directory names into a text file (here, subjList.txt), which you can then edit at your leisure.

A for loop can then be used to do the analysis for each subject. (You can use any shell you want, but in this example I will use the t-shell.) Simply use the output of the text file as a variable, then use the for loop to execute the analysis for each subject, e.g.:

set subjects = `cat subjList.txt`
foreach subj ($subjects)
    cp RSproc.sh $subj/session_1
    cd $subj/session_1
    tcsh RSproc.sh $subj
    cd ../..
end

The RSproc.sh script, generated from the uber_subject.py interface used in the last tutorial, can be found here. Note that I use a motion cutoff threshold of 0.3mm, which is slightly different from the standard 0.2mm cutoff; feel free to alter this if you like.

This should take care of all of your analyses while you go do something else, such as reading a book or shopping for tupperware and nosehair trimmers.* Of course, you will want to examine the output of your commands for any errors, but this menial task can usually be delegated to one of your undergraduate RAs slated for resting-state data-summoning immolation.




*Or maybe that's just me.

Resting-State Functional Connectivity Analysis, Part II: Setting Up Your Analysis

Once you have downloaded the KKI dataset discussed in the last resting-state post, you have most of what you need, sacrificial undergraduate RA notwithstanding. Also, as I mentioned, we will be using AFNI for this, specifically AFNI's uber_subject.py script which includes an option for preprocessing and analyzing resting-state data.

First, type uber_subject.py from your terminal to open up the GUI (rhymes with "whey"), and select the resting-state option from the preprocessing selections. This will automatically fill in a series of preprocessing steps which our AFNI overlords, in their wisdom, have decided is best for resting-state analyses. Of course, you can change this, but that would be an unbelievably stupid decision, on par with doing something like asking out your girlfriend's sister.

Notice that with resting-state experiments, we avoid several of the annoying, persnickety details endemic to typical FMRI experiments, such as having to actually design an experiment. You simply place the subject inside the scanner, set the scanner to 350 degrees, and leave it for ten minutes. When you come back, you will have a slightly charred piece of carbon that used to be a human being. After framing someone else, such as your FMRI technician, you should then realize that you are simply not cut out for actually carrying out a resting-state scan, and download someone else's data instead from the Internet like I recommended in the first place.

Notice that much of the preprocessing and design-matrix setup is similar to a typical FMRI analysis. However, there is an important difference in the design matrix setup, because you do not have any tasks or events to model. Instead, the only things you need to model are potential sources of noise, which may include heart rate or respiration data, if you have it, and always motion data, since this can be an insidious confound in any FMRI analysis, but particularly in resting-state analyses.

The upshot of all this is that, whereas a traditional FMRI analysis saves the beta estimates and contrasts to a statistics dataset and everything that wasn't modeled to an error or residual dataset (usually labelled "errts+orig"), in a resting-state analysis we are interested in everything that wasn't explicitly modeled. In other words, we will want to focus on what gets output to the errts+orig dataset, since that contains the timecourses of voxel activity we are interested in. You can then place ROIs or other seed regions within that dataset, and generate correlation maps from those seed regions.



In the next chapter of the series, we will look more closely at converting these correlation values into z-maps for comparison across groups, as well as where to find more undergraduate RAs if the ones who were working in your lab have already been burnt as offerings to the FMRI gods.

Introduction to Resting-State Functional Connectivity Analysis: You Can Do It! (Maybe)

A while ago I promised to provide a series of tutorials on resting-state connectivity, and I'm happy to announce the first part after a brief delay of, give or take, fourteen months. But in my defense, I wanted to wait until I had everything well prepared and ready to go, by which I mean until I had a new suit, which, obviously, is a critical part of the education process.

For this series, in order to follow along I recommend downloading resting-state data from the ABIDE website, which collects resting-state data from autistic subjects and controls. The end result will be a simple comparison of resting-state networks across both populations, which, although interpreting such results carries a farrago of its own issues, should provide a solid platform from which to launch your own resting-state analyses. We will be using AFNI for this demonstration, although in the future I may include an identical analysis using FSL.

First, you need to jump through the usual administrative hoops in order to download the dataset. This involves a simple series of steps, such as registering as a member, filling out a form with your username and password, and immolating a virginal undergraduate RA. The resulting aroma will summon the resting-state data, which can then be downloaded to your computer.

First, register at the NITRC website: http://www.nitrc.org/account/register.php
Second, request access from the following link: http://fcon_1000.projects.nitrc.org/indi/req_access.html
Finally, go to the ABIDE website and download a resting-state dataset (I am using data from the Kennedy Krieger Institute): http://fcon_1000.projects.nitrc.org/indi/abide/



As for finding and sacrificing the virginal undergraduate RA, you're on your own.

How to Write Article Reviews (Kickbacks are Involved)

There comes a time when a graduate student is selected for the dubious honor of reviewing an article. This is the "peer-review process," where your academic equals, also known as "peers," or, less commonly, "fellow nerds," pass "judgment" on an "article" to determine if it is "suitable" for "publication." Of course, actually recommending an article for publication rarely "happens," and it is much more common for an article to be "rejected," similar to what happens every other area in your life, such as "work" or "dating."

Clearly, then, publishing articles is a formidable process, with failure all but certain. Because of this, instead of merely relying on the quality and scientific integrity of their work to make it publication-worthy, scientists sometimes resort to other tactics, by which I mean "bribes."

By bribes, I do not mean simply asking for money in exchange for recommending an article for publication. That would be dishonest. And also be careful not to confuse bribes with "extortion," which is also known in certain circles as "publication fees."

Instead, the transfer of money between article authors and reviewers is much more subtle. I should add that if you feel any qualms about asking for bribes, keep in mind that this entire process is implied and condoned, similar to insider trading, cheating at golf, or using your finger to scrape the last bits of Nutella from the bottom of the jar. Everybody does it.

In order to request a bribe through your journal review, however, you need to exercise the utmost caution and tact, carefully placing references to your account number, routing number, and the amount of money you want in such a way that they will be detected only by those who are "in the know." Use the following template for your own reviews.


Reviewer #1: This paper is an interesting, timely, important study on the effects of the default poop network. However, before I recommend it for publication, I have some major reservations, which in no way include asking for bribes.

Major issues:
1) The name of the second author, Betsy Honkswiggle, is identical to the name of a girl that I dated during my sophomore year in college, and I still have particularly bad memories about our breakup, which may or may not have involved a nasty custody struggle over a pet iguana. The negative associations are affecting my ability to objectively review this paper, and I recommend either immediate removal of the author from the paper, or that the author change her name to something more palatable, such as Harriet Beecher Stowe, or Pamela Anderson.
2) In figure 3557285492, the colors used to depict different effects could be changed to be more pleasing to the eye. Right now they are in blue and green, which is somewhat drab; try a different set of colors, such as fuchsia or hot lemon.

Minor issues:
1) I realize this may not entirely be the authors' fault, but I have been having some serious itching for the past couple of weeks, which I, for decorum purposes, won't go into more detail about where it is located. I've tried everything, from Tucks to half-and-half to primal scream therapy, but nothing seems to work. Do the authors have any recommendations for how to deal with this? Thanks!
2) When discussing the default poop network, please cite Dr. Will Brown's seminal 1994 paper. 
3) For that matter, please cite the other following papers, which are related to your article. Of course, you don't need to cite all of these, but if you didn't, it'd be a shame if somethin' were to, you know, happen to this nice little article of yours, capisci?
-Fensterwhacker et al, 2011
-Fensterwhacker & Brown, 2009
-Fensterwhacker et al, in press
-Fensterwhacker et al, submitted
-Fensterwhacker & Honkswiggle, in prep
-Fensterwhacker, Bickswatter, & LeQuint, I swear we're very seriously considering doing this study
4) The fact that my name is on all of the preceding citations is purely a coincidence.
5) Also, if you believe that, you are, with all due respect, dumber than tinfoil.
6) On page 12, line 20, "your" should be "you're."
7) On page 16, the authors report an effect that has a cluster size of 348 voxels. This seems a little off to me; for some reason, I think this should be something more like, let's say, 017000371. This must be true, because I am a reviewer.

Once you have addressed all of these concerns, I may allow you to do a second round of reviews, after which I may go and do something completely nuts-o, such as recommending a reject and resubmit.


God bless you all,
Except for the atheists,




Dwayne "Five Thousand Bucks" Fensterwhacker III, Esq.

A Computational Model of Arbitration between Model-Based and Model-Free Learning (Featuring Django Unchained!)

Decision-making has fascinated both neuroscientists and economists for decades; and in particular, what makes this such an intriguing topic isn't when people are making good decisions, but when they are screwing up in major ways. Although making terrible decisions doesn't necessarily bar you from having success - just look at our past six or seven presidents - alleviating terrible decisions can sometimes make your life easier, especially when it comes to avoiding decisions that could be bad for you, such as licking a steak knife.

A recent Neuron paper by Lee, Shimojo, and O'Doherty examined how the brain switches between relying on using habitual actions to make decisions, versus generating a cognitive model of what decisions might be associated with which outcomes, and making a decision based on your prediction about what should be most optimal, similar to making a decision-tree or flowchart outlining all the different possibilities associated with each action. These decision-making strategies are referred to as model-free and model-based decision systems, respectively; and reliance on only one system, especially in a context where that system might be inappropriate, would lead to inefficiencies and sometimes disastrous consequences, such as asking out your girlfriend's sister. O'Doherty, who seems to churn out high-impact papers with the effortlessness of a Pez dispenser, has been working on these and related problems for a while; and this most recent publication, to me, represents an important step forward in computational modeling and how such decision-making processes are reified in the brain.

Before discussing the paper, let me clarify a couple of important distinctions about the word "errors," particularly since one of the layers of the model discussed in the paper calculates different kinds of error. When computational modelers talk about errors, they can come in multiple forms. The most common description of an error, however, is some sort of discrepancy between what an organism is trying to do, or what an individual is expecting, and what that organism actually does or actually receives. Errors of commission, which simply mean screwing up or making an unintended mistake, have been extensively studied, especially in popular decision-making and reaction-time paradigms such as the Stroop task; but recently other forms of error have been defined, such as reward prediction error, which calculates the discrepancy between what was expected and what was actually received. The authors contrast this reward prediction error with a related concept called state prediction error, which is the discrepancy between an internal model of the environment and the actual state that someone is in. So, actions that are appropriate or likely to be rewarded in one state may no longer be valid once the state is detected to have shifted or somehow changed.
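As a toy illustration (mine, not the paper's model), a reward prediction error is just the difference between what you got and what you expected; a quick sketch with awk:

```shell
# Toy reward prediction error: delta = received - expected.
# (My illustration for intuition only, not the model from the paper.)
rpe() { awk -v r="$1" -v v="$2" 'BEGIN { printf "%.2f\n", r - v }'; }

rpe 1 0.5   # a reward arrived that you half-expected -> positive RPE (0.50)
rpe 0 0.5   # an expected reward never came -> negative RPE (-0.50)
```

A positive delta means things went better than predicted, a negative delta worse; learning models use this signal to nudge expectations toward reality.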

While this may sound like so much jargon and namby-pampy scientific argot, state prediction errors and reward prediction errors are actually all around us, if we have eyes to see. To take one example, near the end of Django Unchained, our protagonist, Django, has killed all of Calvin Candie's henchmen in a final climactic shootout in the CandyLand foyer. Stephen, thinking that Django has spent all six revolver rounds in the shootout - including a particularly sadistic dismemberment of Billy Crash - believes that he still has some options left open for dealing with Django, such as continuing to talk trash. However, when Django reveals that he has a second revolver, Stephen's internal model of his environment needs to update to take this new piece of information into account; actions that would have been plausible under the previous state he believed himself to be in are no longer viable.

A reward prediction error, on the other hand, can be observed in the second half of the scene, where Django lights the dynamite to demolish the CandyLand mansion. After walking some distance away from the manse, Django turns around to look at the explosion; clearly, he predicts the house to explode in an enormous fireball, and also predicts it to occur at a certain time. If the dynamite failed to go off, or went off far too early or too late, this would lead to a prediction error. This distinction between the binary occurrence or non-occurrence of an event, as well as its temporal aspect, has been detailed in a recent computational model of prediction and decision-making behavior by Alexander & Brown (2011), and also illustrates how a movie such as Django Unchained can not only provide wholesome entertainment for the whole family, but also serve as a teaching tool for learning models.


This brings us to the present paper, which attempted to locate where in the brain such an arbitration process is done in order to select a model-based or model-free decision system. A model-free system, as described above, takes the lesser amount of cognitive effort and control, since using habitual or "cached" behaviors to guide decisions is relatively quick and automatic; model-based systems, on the other hand, require more cognitive control and mapping out prospective outcomes associated with each decision, but can be more useful than reflexive behaviors when more reflection is appropriate.

The task required participants to make either a left or right button press, which would make a new icon appear on the screen, and after a few button presses, a coin would appear. However, the coin was only rewarding in certain circumstances; in one condition, or "state," only certain colors of coins would be accepted and turned into rewards, while in the other condition, any type of coin would be rewarding. This was designed to favor either model-free or model-based control in certain situations, and also to compare how an arbitration model would correlate with behavior that either is more flexible under model-based conditions, or more fixed under model-free conditions, using a dynamical threshold to shift behavior from model-based to model-free systems over time. The arbitration model also computes the reliability of the model-based and model-free systems to determine which should be implemented, which is affected by prediction errors on previous trials.

Figure 2 from Lee et al showing how prediction errors are computed and then used to calculate the reliability of either a model-based or model-free system, which in turn affects the probability of implementing either system.

The authors then regressed the computational signals against the FMRI data, in order to see where such computational signals would load onto observed brain activity during trials requiring either more or less model-based or model-free strategies. The reliability signals from the model-free and model-based systems were found to load on the inferior lateral PFC (ilPFC) and right frontopolar cortex (FPC), suggesting that these two cortical regions might be involved in the arbitration process to decide which system to implement, with the more reliable system being weighted more.

Figure 4, ibid, with panel A depicting orthogonal reliability signals for both model-based and model-free systems in bilateral ilPFC. Panel B shows a region of rostral anterior cingulate cortex associated with the difference in reliability between the two systems, and both the ilPFC and right FPC correlated with the highest reliability index for a particular trial for whichever system was implemented during that trial.

Next, a psychophysiological interaction (PPI) analysis was conducted to see whether signals in specific cortical or subcortical regions modulated the activity of model-free or model-based signals. This revealed that when the probability of a model-free state was high, there was a corresponding negative correlation between both the ilPFC and right FPC and regions of the putamen also observed to encode model-free signals; significantly, no effects were found for the reverse condition, when the probability of model-based activity was high, suggesting that the arbitrator functions primarily by affecting the model-free system.

In total, these results suggest that reliability signals for different decision systems are modulated by activity in the frontocortical regions, and that signals for the model-based and model-free systems themselves are encoded by several different cortical regions, including the orbital PFC for model-based system activity, and supplementary motor area and dorsolateral PFC for model-free activity. In addition, the ventromedial PFC appears to encode a weighted signal of both model-based and model-free signals, tying together how subcortical and value-computing structures may influence the decision to either implement a model-based or model-free system, incorporating reliability information from frontopolar regions about which system should be used. Which, on the face of it, can be particularly useful when dealing with revolver-wielding, dynamite-planting psychopaths.


Link to paper

Saving Cluster-Corrected Images in SPM

Once you've loaded up a contrast in SPM and thresholded it at a particular correction threshold, such as cluster corrected or FWE corrected, you can then simply save that image into a .hdr/.img file for use in another viewer, such as MRIcron. Just push the save button in the results window, and there you have it. This image can also be used for other purposes, such as doing an ROI analysis within only those voxels passing a corrected threshold for a given contrast.

Cluster-corrected image in the SPM interface

Same image, but saved out into .hdr/.img format and viewed in MRIcron.

In case it isn't clear how to do this from the above, I've also made an instructional video about how to do it, which - if you can believe it - was done in only one take. I have my little victories.