Resting State Analysis, Part VII: The End

For the last part of the analysis, we are going to take all of the z-maps we have just created and enter them into a second-level analysis across all subjects. To do this, I first suggest creating separate directories for the Autism and Control subjects, based on the phenotypic data provided by the KKI on the ABIDE website; once you have those, you can begin loading the Z-maps from each subject into the corresponding Autism or Control category.
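As a sketch of that sorting step, the group assignment can be read straight from the phenotypic CSV. Everything here is an assumption to be adapted to your setup: the file name, the column layout (subject ID in column 1, DX group in column 2, with 1 = Autism and 2 = Control, as in the ABIDE phenotypic files), and the directory names.

```shell
# sort_group: print "Autism" or "Control" for a given subject ID, based on
# a two-column CSV (subject ID, DX group; 1 = Autism). The file name and
# column layout are assumptions -- check them against your phenotypic file.
sort_group() {
    subj=$1; pheno=$2
    awk -F, -v s="$subj" '$1 == s { print ($2 == 1 ? "Autism" : "Control") }' "$pheno"
}

# Usage sketch (hypothetical file layout; run after the z-maps exist):
#   mkdir -p Autism Control
#   for subj in $(cat subjList.txt); do
#       cp Corr_subj${subj}_Z+tlrc.* "$(sort_group $subj phenotypic_KKI.csv)"/
#   done
# A two-sample second-level test could then be run with AFNI's 3dttest++:
#   3dttest++ -prefix Autism_vs_Control \
#       -setA Autism/Corr_subj*_Z+tlrc.HEAD -setB Control/Corr_subj*_Z+tlrc.HEAD
```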

Most of this is easier to see than to describe, so I've made a final tutorial video to show how this is all done. Hope it helps!

P.S. I've collated all of the videos from the past week and a half into a playlist, which should make sorting through everything slightly easier:

Resting State Analysis, Parts V and VI: Creating Correlation Maps and Z-Maps

Note: Refer to Example #9 of the afni_proc.py help for AFNI's most recent version of resting-state analysis.

Now that we've laid down some of the theory behind resting-state analyses, and have seen that it is nothing more than a glorified functional connectivity analysis, which in turn is nothing more than a glorified bivariate correlation, which in turn is something that I just made up, the time has now come to create the correlation maps and z-maps which we have so craved. I believe I have talked about correlation maps and their subsequent transmogrification into z-maps, but in the interest of redundancy* and also in the interest of showcasing a few ties that I picked up at Goodwill, I've created two more videos to show each of the steps in turn.

First, use 3dmaskave to extract the timecourse information from your ROI placed in the vmPFC:

3dmaskave -quiet -mask vmPFC+tlrc errts.{$subj}+tlrc > timeCourse.txt

This information is then used by 3dfim+ to generate a correlation map:

3dfim+ -input errts.{$subj}+tlrc -polort 0 -ideal_file timeCourse.txt -out Correlation -bucket vmPFC_Corr

Once those correlation maps are generated, use 3dcalc to convert them into z-maps:

3dcalc -a vmPFC_Corr+tlrc -expr 'log((1+a)/(1-a))/2' -prefix Corr_subj{$subj}_Z
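As a quick sanity check on that 3dcalc expression: it is the Fisher r-to-z transform, z = log((1+r)/(1-r))/2, which you can evaluate with plain awk (no AFNI needed). The `fisher_z` helper below is just for illustration; for r = 0.5 it should print 0.5493, matching atanh(0.5).

```shell
# fisher_z: apply the same r-to-z transform used in the 3dcalc command
# (illustrative helper, not part of the AFNI pipeline)
fisher_z() {
    awk -v r="$1" 'BEGIN { printf "%.4f\n", log((1 + r) / (1 - r)) / 2 }'
}

fisher_z 0.5   # prints 0.5493
```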

N.B. In each of the above examples, {$subj} is a placeholder for the subject ID you are currently processing; with a few tweaks, you should be able to put this all into a script that automates these processes for each subject.
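One way to sketch that automation: a helper that prints the three commands for each subject listed in subjList.txt, as a dry run you can inspect before piping it to a shell. This assumes the subject list file from the automation post and that the commands are run where each subject's errts dataset lives; `make_corr_commands` is a hypothetical name, not an AFNI program.

```shell
# make_corr_commands: for each subject ID in the list file $1, print the
# three commands above with the {$subj} placeholder filled in (dry run only).
make_corr_commands() {
    while read subj; do
        echo "3dmaskave -quiet -mask vmPFC+tlrc errts.${subj}+tlrc > timeCourse.txt"
        echo "3dfim+ -input errts.${subj}+tlrc -polort 0 -ideal_file timeCourse.txt -out Correlation -bucket vmPFC_Corr"
        echo "3dcalc -a vmPFC_Corr+tlrc -expr 'log((1+a)/(1-a))/2' -prefix Corr_subj${subj}_Z"
    done < "$1"
}

# Usage: make_corr_commands subjList.txt           # inspect the commands
#        make_corr_commands subjList.txt | tcsh    # actually run them
```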

N.N.B. (I think that's how you do it): The original script that I uploaded had a couple of bugs: one of the placeholders should have been changed to a generic $subj variable, and the -giant_move option has been added to the alignment part of the script, since the anatomical and functional images actually start out quite far away from each other. If you haven't used it yet, downloading the new script should take care of those issues. Another hidden change: I increased the motion limit from 0.2mm to 0.3mm. Too many subjects were getting thrown out, and even though a more rigorous analysis would leave the motion threshold at a more conservative 0.2mm, I've raised it for now for pedagogical purposes.

N.N.N.B. Find out what "N.B." means.

*Sponsored by the United States Department of Redundancy Department

Resting State Analysis, Part III: Automating Your Analysis

Once you've set up a resting-state analysis script, either through the graphical interface or by following example #9 in the afni_proc.py documentation, I highly recommend that you set up some sort of higher-level script to automate running it in each subject's directory. This is especially useful for the dataset we are using: each individual analysis doesn't necessarily take that long, but we have a large number of subjects.

To begin, navigate to the downloaded KKI directory and use the following command to list each directory without trailing slashes:

ls -d */ | cut -f1 -d'/' > subjList.txt

The redirect at the end of that command places the output into a text file, which you can then edit at your leisure; in the example above, the directory names all go into a file called subjList.txt.

A foreach loop can then be used to run the analysis for each subject. (You can use any shell you want; in this example I will use the t-shell, tcsh.) Simply read the contents of the text file into a variable, then loop over each subject, e.g.:

set subjects = (`cat subjList.txt`)
foreach subj ($subjects)
# RSproc.sh is a placeholder for whatever you named your analysis script
cp RSproc.sh $subj/session_1
cd $subj/session_1
tcsh RSproc.sh $subj
cd ../..
end

The script, generated from the interface used in the last tutorial, can be found here. Note that I use a motion cutoff threshold of 0.3mm, which is slightly more lenient than the standard 0.2mm cutoff; feel free to alter this if you like.
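Since any shell will do, here is the same idea sketched in plain Bourne shell, written as a dry run that prints each subject's commands rather than executing them (RSproc.sh is again a placeholder for your script's actual name):

```shell
# run_subjects: print the per-subject processing commands for the subject
# IDs listed in file $1 (pipe the output to sh to actually run them)
run_subjects() {
    while read subj; do
        echo "cp RSproc.sh $subj/session_1"
        echo "cd $subj/session_1 && tcsh RSproc.sh $subj && cd ../.."
    done < "$1"
}
```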

This should take care of all of your analyses while you go do something else, such as reading a book or shopping for tupperware and nosehair trimmers.* Of course, you will want to examine the output of your commands for any errors, but this menial task can usually be delegated to one of your undergraduate RAs slated for resting-state data summoning immolation.

*Or maybe that's just me.

Future Functional Connectivity Tutorials, and Other Updates

A few notes:

1) The previous functional connectivity posts and tutorials are cribbed from Gang Chen's homepage, which is available here. Kind of like the way I crib quotations and passages from authors that no one reads anymore, and then pass them off as my own style to boost my pathologically low self-esteem. Keep in mind that most of these demonstrations deal with a single subject and simplified situations that you probably will not encounter in your research. Given these contrived examples, most of the results generated in these demos are relatively meaningless; it's up to you to learn and understand the concepts, and then apply them to your own data and make your own inferences. My task which I am trying to achieve is, by the power of YouTube tutorials, to make you hear, to make you feel — it is, before all, to make you understand. That — and no more, and it is everything. (That was Conrad, by the way.)

2) A lot of you - I'm talking a LOT of you players - have been making requests for MELODIC tutorials and resting state analyses in FSL. All I can say is, we'll get there, in time. Before that, however, I believe AFNI is better suited for building up one's intuition, and so we will be working through a few more connectivity topics in AFNI - specifically, context-dependent correlations, beta series correlations, and resting state connectivity. After that we will again cover the same concepts, but applied in FSL - by which time, given my glacial pace, either FMRI will have become a passé technique or the Andromeda galaxy will have crashed into us.

3) Recently you may have noticed the "Donate" button on the right sidebar of the blog. This was done at the request of one reader who felt the powerful, irrational urge to loosen his purse-strings and give some alms out of the goodness of his heart, which is located somewhere way, way down there, somewhere nearabouts the cockles. Although I can't fully understand this behavior - even less than I can understand why there is someone who still has purse-strings, or what cockles are, exactly - nevertheless it helps satisfy my cupidity and strokes my ego. Furthermore, in addition to serving Mammon, these tokens of gratitude motivate me to regularly produce new material and, as a bonus, help me to continue procrastinating on my dissertation. Now that's what I call a win-win-win.

4) Also, at least one of you has mailed me a two-pack of Nutella. This has pleased me greatly. My brain needs hazelnut spread for fuel, and the more it has, the hotter and better it burns.

5) If everything goes according to plan, we should cover context-dependent correlations this weekend, beta series correlations next week, and resting-state connectivity the week after that.

Lunch in Paris, dinner in London, comrade.