Converting T-Maps to Z-Maps

Mankind craves unity - the peace that comes with knowing that everyone thinks and feels the same. Religious, political, and social endeavors have all been directed toward this same end: that all men have the same worldview, the same Weltanschauung. Petty squabbles about things such as guns and abortion matter little when compared to the aim of these architects. See, for example, the deep penetration into our bloodstream by words such as equality, lifestyle, value - words of tremendous import, triggering automatic and powerful reactions without our quite knowing why, and with only a dim awareness of where these words came from. That we use and respond to them constantly is one of the most astounding triumphs of modern times; that we could even judge whether this is a good or bad thing has already been rendered moot. Best not to try.

It is only fitting, therefore, that we as neuroimagers all "get on the same page" and learn "the right way to do things," and, when possible, make "air quotes." This is another way of saying that this blog is an undisguised attempt to dominate the thoughts and soul of every neuroimager - in short, to ensure unity. And I can think of no greater emblem of unity than the normal distribution, also known as the Z-distribution - the end, the omega, the seal of all distributions. The most vicious of arguments, the most controversial of ideas are quickly resolved by appeal to this monolith; it towers over all research questions like a baleful phallus.

There will be no end to bantering about whether to abolish the arbitrary p < 0.05 threshold, but the bantering will be just that. The standard exists for a reason - it is clear, simple, understood by nearly everyone involved, and is as good a standard as any. A multitude of standards, a deviation from what has become so steeped in tradition, would be chaos, mayhem, a catastrophe. Again, best not to try.

I wish to clear away your childish notions that the Z-distribution is unfair or silly. On the contrary, it will dominate your research life until the day you die. Best to get along with it. The following SPM code will allow you to do just that - convert any output to the normal distribution, so that your results can be understood by anyone. Even those who disagree, or wish to disagree, with the nature of this thing will be forced to accept it. A shared Weltanschauung is a powerful thing. The most powerful.


=============

The following Matlab snippet was created by my adviser, Josh Brown. I take no credit for it, but I use it frequently, and believe others will get some use out of it. The calculators in each of the major statistical packages - SPM, AFNI, FSL - all do the same thing, and this is merely one application of it. The more one gets used to applying these transformations to achieve a desired result, the more intuitive it becomes to work with the data at any stage - registration, normalization, statistics, all.


%
% Usage:  convert_spm_stat(conversion, infile, outfile, dof)
%
% This script uses a template .mat batch script object to
% convert an SPM (e.g. SPMT_0001.hdr,img) to a different statistical rep.
% (Requires matlab stats toolbox)
%
%  Args:
%  conversion -- one of 'TtoZ', 'ZtoT', '-log10PtoZ', 'Zto-log10P',
%               'PtoZ', 'ZtoP'
%  infile -- input file stem (may include full path)
%  outfile -- output file stem (may include full path)
%  dof -- degrees of freedom
%
% Created by:           Josh Brown 
% Modification date:    Aug. 3, 2007
% Modified: 8/21/2009 Adam Krawitz - Added '-log10PtoZ' and 'Zto-log10P'
% Modified: 2/10/2010 Adam Krawitz - Added 'PtoZ' and 'ZtoP'

function completed=convert_spm_stat(conversion, infile, outfile, dof)

old_dir = cd();

if strcmp(conversion,'TtoZ')
    expval = ['norminv(tcdf(i1,' num2str(dof) '),0,1)'];
elseif strcmp(conversion,'ZtoT')
    expval = ['tinv(normcdf(i1,0,1),' num2str(dof) ')'];
elseif strcmp(conversion,'-log10PtoZ')
    expval = 'norminv(1-10.^(-i1),0,1)';
elseif strcmp(conversion,'Zto-log10P')
    expval = '-log10(1-normcdf(i1,0,1))';
elseif strcmp(conversion,'PtoZ')
    expval = 'norminv(1-i1,0,1)';
elseif strcmp(conversion,'ZtoP')
    expval = '1-normcdf(i1,0,1)';
else
    disp(['Conversion "' conversion '" unrecognized']);
    return;
end
    
if isempty(outfile)
    outfile = [infile '_' conversion];
end

%%% Now load into template and run
jobs{1}.util{1}.imcalc.input{1}=[infile '.img,1'];
jobs{1}.util{1}.imcalc.output=[outfile '.img'];
jobs{1}.util{1}.imcalc.expression=expval;

% run it:
spm_jobman('run', jobs);

cd(old_dir)
disp(['Conversion ' conversion ' complete.']);
completed = 1;



Assuming you have a T-map generated by SPM, and 25 subjects that went into the analysis, a sample command might be:

convert_spm_stat('TtoZ', 'spmT_0001', 'spmZ_0001', 24)

Note that the last argument is the degrees of freedom - for a one-sample t-test, N-1, or 24 in this example.
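
If you want a quick sanity check on what the conversion is doing under the hood, the expressions above just push a t-value through its cumulative distribution function and pull the result back through the inverse normal CDF. A minimal sketch at the Matlab prompt (the t-value is hypothetical; requires the same Statistics Toolbox as the script):

t   = 3.2;                        %Hypothetical t-statistic at one voxel
dof = 24;                         %Degrees of freedom
z   = norminv(tcdf(t, dof), 0, 1) %The same expression imcalc applies voxelwise

Note that for very large t-values tcdf saturates at 1 in double precision, so extreme statistics can come back as Inf - something to keep in mind rather than a problem with the script.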


Slice Analysis of FMRI Data with SPM





Slice analysis is a simple procedure - first you take a jar of peanut butter and a jar of Nutella, and then use a spoon to take some Nutella and then use the same spoon to mix it with the peanut butter. Eat and repeat until you go into insulin shock, and then...

No, wait! I was describing my midnight snack. The actual slice analysis method, although less delicious, is infinitely more helpful in determining regional dissociations of activity, as well as avoiding diabetes. (Although who says they can't both be done at the same time?)

The first step is to extract contrast estimates for each slice from a region of interest (ROI, also pronounced "ROY") and then average across all the voxels in that slice for the subject. Of course, there is no way you would be able to do this step on your own, so we need to copy someone else's code from the Internet and adapt it to our needs; one of John Ashburner's code snippets (#23, found here) is a good template to start with. Here is my adaptation:



rootdir = '/data/drill/space10/PainStudy/fmri/'; %Change these to reflect your directory structure
glmdir = '/RESULTS/model_RTreg/'; %Path to SPM.mat and mask files

subjects = [202:209 211:215 217 219 220:222 224:227 229 230 232 233];
%subjects = 202:203;

Conditions.names = {'stroopSurpriseConStats', 'painSurpriseConStats'}; %Replace with your own conditions
Masks = {'stroopSurpriseMask.img', 'painSurpriseMask.img'}; %Replace with your own masks; should be the product of a binary ROI multiplied by your contrast of interest
Conditions.Contrasts = {'', ''};

ConStats = [];
Condition1 = [];
Condition2 = [];

for subj = subjects

    cd([rootdir num2str(subj) glmdir])
    outputPath = [rootdir num2str(subj) glmdir]; %Should contain both SPM.mat file and mask files

    for maskIdx = 1:length(Masks)

        P = [outputPath Masks{maskIdx}];

        V = spm_vol(P);

        tmp2 = [];

        [x,y,z] = ndgrid(1:V.dim(1),1:V.dim(2),0);
        for sliceIdx = 1:V.dim(3) %Renamed from i so it doesn't clobber the subject loop variable
            z   = z + 1;
            tmp = spm_sample_vol(V,x,y,z,0);
            msk = find(tmp~=0 & isfinite(tmp));
            if ~isempty(msk)
                tmp = tmp(msk);
                xyz1 = [x(msk)'; y(msk)'; z(msk)'; ones(1,length(msk))];
                xyzt = V.mat(1:3,:)*xyz1;
                for j = 1:length(tmp)
                    tmp2 = [tmp2; xyzt(1,j), xyzt(2,j), xyzt(3,j), tmp(j)];
                end
            end
        end

        xyzStats = sortrows(tmp2,2); %Sort relative to second column (Y column); 1 = X, 3 = Z
        minY = min(xyzStats(:,2));
        maxY = max(xyzStats(:,2));

        ConStats = [];

        for idx = minY:2:maxY %Increments of 2, since most images are warped to 2mm slices; change if your resolution differs
            sliceRows = find(xyzStats(:,2)==idx); %Renamed from x to avoid overwriting the ndgrid coordinates
            ConStats = [ConStats; mean(xyzStats(min(sliceRows):max(sliceRows),4))];
        end

        if maskIdx == 1
            Condition1 = [ConStats Condition1];
        elseif maskIdx == 2
            Condition2 = [ConStats Condition2];
        end

    end
end

Conditions.Contrasts{1} = Condition1;
Conditions.Contrasts{2} = Condition2;


This script assumes that there are only two conditions; more can be added, but care should be taken to reflect this, especially in the if/else statement near the end of the script. I could refine it to work with any number of conditions, but that would require effort and talent.

Once these contrasts are loaded into your structure, you can then put them in an Excel spreadsheet or any other program that will allow you to format and save the contrasts in a tab-delimited text format. The goal is to prepare them for analysis in R, where you can test for main effects and interactions across the ROI for your contrasts. In Excel, I like to format it in the following four-column format:


Subject Condition Position  Contrast
202 Stroop 0 -0.791985669
202 Stroop 2 -0.558366941
202 Stroop 4 -0.338829942
202 Pain 0 0.17158524
202 Pain 2 0.267789503
202 Pain 4 0.192473782
203 Stroop 0 0.596162455
203 Stroop 2 0.44917655
203 Stroop 4 0.410870348
203 Pain 0 0.722974284
203 Pain 2 0.871030304
203 Pain 4 1.045700207


And so on, depending on how many subjects, conditions, and slices you have. (Note here that I have position in millimeters from the origin in the y-direction; this will depend on your standardized space resolution, which in this case is 2mm per slice.)
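
If you would rather skip the Excel step entirely, the same four-column layout can be written straight from Matlab. This is only a sketch under a few assumptions: the Conditions structure and subjects vector from the script above, condition labels of my own choosing, and slices numbered from 0 in 2mm steps (real positions may start at a negative millimeter coordinate):

%Hypothetical export of the Conditions structure to a tab-delimited file
fid = fopen('SliceAnalysis.txt', 'w');
fprintf(fid, 'Subject\tCondition\tPosition\tContrast\n');
condNames = {'Stroop', 'Pain'}; %Assumed labels for the two contrasts
for condIdx = 1:length(Conditions.Contrasts)
    data = fliplr(Conditions.Contrasts{condIdx}); %The script prepends each subject's column, so flip back into subject order
    for subjIdx = 1:size(data, 2)
        for sliceIdx = 1:size(data, 1)
            fprintf(fid, '%d\t%s\t%d\t%f\n', subjects(subjIdx), ...
                condNames{condIdx}, (sliceIdx-1)*2, data(sliceIdx, subjIdx));
        end
    end
end
fclose(fid);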

Once you export that to a tab-delimited text file, you can then read it into R and analyze it with code like the following:

setwd("~/Desktop")
x = read.table("SliceAnalysis.txt", header=TRUE)
x$Subject <- as.factor(x$Subject)
aov.x = aov(Contrast~(Condition*Position)+Error(Subject/(Condition*Position)),x)
summary(aov.x)
interaction.plot(x$Position, x$Condition, x$Contrast)


This will output statistics for main effects and interactions, as well as plotting the contrasts against each other as a function of position.

That's it! Enjoy your slices, crack open some jars of sugary products, and have some wild times!






Automating SPM Contrasts

Manually typing in contrasts in SPM is a grueling process that can have a wide array of unpleasant side effects, including diplopia, lumbago, carpal tunnel syndrome, psychosis, violent auditory and visual hallucinations, hives, and dry mouth. These symptoms are only compounded by the number of regressors in your model, and the number of subjects in your study.

Fortunately, there is a simple way to automate all of this - provided that each subject has the same number of runs, and that the regressors in each run are structured the same way. If they are, the following approach will work.

First, open up SPM and click on the TASKS button in the upper right corner of the Graphics window. The button is marked "TASKS" in capital letters, because they really, really want you to use this thing, and mitigate all of the damage and harm in your life caused by doing things manually. You then select the Stats menu, then Contrast Manager. The options from there are straightforward, similar to what you would do when opening up the Results section from the GUI and typing in contrasts manually.

When specifying the contrast vector, take note of how many runs there are per subject. This is because we want the average parameter estimate for each regressor we are considering: one can imagine a scenario where one regressor occurs in every run but another occurs in only a subset of runs, and averaging over runs puts them on more or less equal footing. In addition, the average parameter or contrast estimate is easier to interpret when comparing across subjects.
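
To make that averaging concrete, here is a hypothetical sketch of what such a contrast vector looks like for a subject with three runs and the same two regressors (A and B) in each run; in a real design matrix you would also pad with zeros for motion regressors and session constants:

%Hypothetical A-minus-B contrast averaged over three runs
%(assumes regressors are ordered A, B within each run, with no nuisance columns)
nRuns  = 3;
runCon = [1 -1]; %A > B within a single run
conVec = repmat(runCon, 1, nRuns) / nRuns %[1 -1 1 -1 1 -1]/3, i.e., the average A-B estimate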

Once you have the settings to your satisfaction, save it out as a .mat file - for example, 'RunContrasts.mat'. This can then be loaded from the command line:

load('RunContrasts')

Which will put a structure called "jobs" in your workspace, which contains all of the code needed to run a first-level contrast. The only part of it we need to change when looping over subjects is the spmmat field, which can be done with code like the following:

subjList=[207 208]; %And so on, including however many subjects you want

for subj=subjList

    jobs{1}.stats{1}.con.spmmat =     {['/data/hammer/space4/MultiOutcome2/fmri/' num2str(subj) '/RESULTS/model_multiSess/SPM.mat']} %This could be modified so that the path is a variable reflecting where you put your SPM.mat file
    spm_jobman('run', jobs)

end

This is demonstrated in the following pair of videos; the first, showing the general setup, and the second showing the execution from the command line.





Running SPM First-Level Analyses from the Command Line

For most people, doing first-level analyses in SPM is tedious in the extreme; for other people, these analyses are manageable through command line scripting in Matlab. And there are still other people who don't even think about first-level analyses or SPM or Matlab or any of that. This last class of people are your college classmates who majored in something else, like economics, and are living significantly less stressful lives than you, making scads of money, and regularly eating out at restaurants where they serve shrimp the size of Mike Tyson's forearm.

Assuming that you belong to the first category, however, first-level analyses are invariably a crashing bore; and, worse, completely impractical for something like beta-series analysis, where a separate regressor is needed for individual events.

The following script will do all of that for you, although it makes several assumptions about your timing files and where everything is stored. First, your directory needs to be set up with a root directory containing all of the subjects, as well as a separate timing directory containing your onsets files; second, the timing files must be written in a four-column format of run, regressor, onset (in seconds), and duration. Once you have all of that, though, all it takes is a few simple modifications of the script to run your analysis.
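
For reference, a timing file in that format might look something like the following (the header row and all values are hypothetical; note that the script below skips one header line when it reads the file):

Run Condition Onset Duration
1 Imagine 10.0 4
1 Move 22.5 4
2 Imagine 8.0 4
2 Move 31.5 4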

As always, let me know if this actually helps, or whether there are sections of code that can be cleaned up. I will be making a modification tomorrow allowing for beta series analysis by converting all of the timing events into individual regressors, which should help things out.

Link to script: http://mypage.iu.edu/~ajahn/docs/Run_SPM_FirstLevel.m

% INPUT:
%  rootDir        - Path to where subjects are located
%  subjects       - Vector containing list of subjects (can concatenate with brackets)
%  spmDir         - Path to folder where you want to move the multiple conditions .mat
%                   files and where the output SPM file will go (directory is created if it doesn't exist)
%  timingDir      - Path to timing directory, relative to rootDir
%  timingSuffix   - Suffix appended to timing files
%  matPrefix      - Will be prefixed to output .mat files.
%  dataDir        - Directory storing functional runs.
%
% OUTPUT:
%  One multiple condition .mat file per run. Also specifies the GLM and runs
%  beta estimation.
%       
%
%       Note that this script assumes that your timing files are organized
%       with columns in the following order: Run, ConditionName, Onset (in
%       seconds), Duration (in seconds)
%
%       Ideally, this timing file should be written out from your
%       presentation software, e.g., E-Prime
%
%       Also note how the directory tree is structured in the following example. You may have to
%       change this script to reflect your directory structure.
%       
%       Assume that:
%         rootdir = '/server/studyDirectory/'
%         spmDir = '/modelOutput/'
%         timingDir = '/timings/onsets/'
%         subjects = [101]
%         timingSuffix = '_timing.txt'
%       1) Sample path to SPM directory:
%       '/server/studyDirectory/101/modelOutput/'
%       2) Sample path to timing file in the timing directory:
%       '/server/studyDirectory/timings/onsets/101_timing.txt'
%
% Andrew Jahn, Indiana University, June 2014
% andysbrainblog.blogspot.com

clear all

%%%----Things you will want to change for your study----%%%

rootDir = '/data/hammer/space5/ImagineMove/fmri/';
subjects = [214 215];
spmDir = '/RESULTS/testMultis/';
timingDir = 'fMRI_behav/matlab_input/';
timingSuffix = '_spm_mixed.txt';
matPrefix = 'testMulti';
dataDir = '/RESULTS/data/';

%Change these parameters to reflect study-specific information about number
%of scans and discarded acquisitions (disacqs) per run
funcs = {'swuadf0006.nii', 'swuadf0007.nii', 'swuadf0008.nii'}; %You can add more functional runs; just remember to separate them with a comma
numScans = [240 240 240]; %This is the number of scans that you acquire per run
disacqs = 2; %This is the number of scans you later discard during preprocessing
numScans = numScans-disacqs;
TR = 2; %Repetition time, in seconds

%Estimating the GLM can take some time, particularly if you have a lot of betas. If you just want to specify your
%design matrix so that you can assess it for singularities, turn this to 0.
%If you wish to do it later, estimating the GLM through the GUI is very
%quick.
ESTIMATE_GLM = 1;


%%%-----------------------------------------------------%%%



%For each subject, create timing files and jobs structure
for subject = subjects
    
    %See whether output directory exists; if it doesn't, create it
    outputDir = [rootDir num2str(subject) spmDir];
    
    if ~exist(outputDir, 'dir')
        mkdir(outputDir)
    end
     
    %---Navigate to timing directory and create .mat files for each run---%
    cd([rootDir timingDir]);
    
    fid = fopen([num2str(subject) timingSuffix], 'rt');
    T = textscan(fid, '%f %s %f %f', 'HeaderLines', 1); %Columns should be 1)Run, 2)Regressor Name, 3) Onset Time (in seconds, relative to start of each run), and 4)Duration, in seconds
    fclose(fid);
    
    clear runs nameList names onsets durations sizeOnsets %Remove these variables if left over from previous analysis
    
    runs = unique(T{1});

    %Begin creating jobs structure
    jobs{1}.stats{1}.fmri_spec.dir = cellstr(outputDir);
    jobs{1}.stats{1}.fmri_spec.timing.units = 'secs';
    jobs{1}.stats{1}.fmri_spec.timing.RT = TR;
    jobs{1}.stats{1}.fmri_spec.timing.fmri_t = 16;
    jobs{1}.stats{1}.fmri_spec.timing.fmri_t0 = 1;

    %Create multiple conditions .mat file for each run
    for runIdx = 1:size(runs, 1)
            nameList = unique(T{2});
            names = nameList';
            onsets = cell(1, size(nameList,1));
            durations = cell(1, size(nameList,1));
            sizeOnsets = size(T{3}, 1);
        for nameIdx = 1:size(nameList,1)
            for idx = 1:sizeOnsets
                if isequal(T{2}{idx}, nameList{nameIdx}) && T{1}(idx) == runIdx
                    onsets{nameIdx} = double([onsets{nameIdx} T{3}(idx)]);
                    durations{nameIdx} = double([durations{nameIdx} T{4}(idx)]);
                end
            end
            onsets{nameIdx} = (onsets{nameIdx} - (TR*disacqs)); %Adjust timing for discarded acquisitions
        end

        save ([matPrefix '_' num2str(subject) '_' num2str(runIdx)], 'names', 'onsets', 'durations')
        
        %Grab frames for each run using spm_select, and fill in session
        %information within jobs structure
        files = spm_select('ExtFPList', [rootDir num2str(subject) dataDir], ['^' funcs{runIdx}], 1:numScans(runIdx));

        jobs{1}.stats{1}.fmri_spec.sess(runIdx).scans = cellstr(files);
        jobs{1}.stats{1}.fmri_spec.sess(runIdx).cond = struct('name', {}, 'onset', {}, 'duration', {}, 'tmod', {}, 'pmod', {});
        jobs{1}.stats{1}.fmri_spec.sess(runIdx).multi = cellstr([outputDir matPrefix '_' num2str(subject) '_' num2str(runIdx) '.mat']);
        jobs{1}.stats{1}.fmri_spec.sess(runIdx).regress = struct('name', {}, 'val', {});
        jobs{1}.stats{1}.fmri_spec.sess(runIdx).multi_reg = {''};
        jobs{1}.stats{1}.fmri_spec.sess(runIdx).hpf = 128;
        
                
    end
    
    movefile([matPrefix '_' num2str(subject) '_*.mat'], outputDir)
    
    %Fill in the rest of the jobs fields
    jobs{1}.stats{1}.fmri_spec.fact = struct('name', {}, 'levels', {});
    jobs{1}.stats{1}.fmri_spec.bases.hrf = struct('derivs', [0 0]);
    jobs{1}.stats{1}.fmri_spec.volt = 1;
    jobs{1}.stats{1}.fmri_spec.global = 'None';
    jobs{1}.stats{1}.fmri_spec.mask = {''};
    jobs{1}.stats{1}.fmri_spec.cvi = 'AR(1)';
    
    %Navigate to output directory, specify and estimate GLM
    cd(outputDir);
    spm_jobman('run', jobs)
    
    if ESTIMATE_GLM == 1
        load SPM;
        spm_spm(SPM);
    end
    
end

%Uncomment the following line if you want to debug
%keyboard



How to Fake Data and Make Tons of Money, Part 2: CreateNIFTI.m

After you've been working in science long enough, you may start to discover that the results that you get aren't necessarily the ones that you want. This discrepancy is an abomination, and clearly must be eliminated. One way to do this - at least with FMRI data - is to manually read in a dataset, and overwrite existing values with new ones.

While you can overwrite values in any dataset, I find it helpful to first create a blank dataset that has the same dimensions and orientation as the other data you are working with - for example, by making a copy of an existing image and then switching all the values in that dataset to zero. If you are starting from ANALYZE files (i.e., .img/.hdr pairs), you will need to convert them to NIFTI before you can use the script; I use AFNI's 3dcopy and then 3dAFNItoNIFTI to do this.

Once you have copied your file, you can zero out the values by using the script createBlankNIFTI.m and then fill in new values using createNIFTI.m. I'm sure these can both be combined somehow in the future, but there isn't a terribly high demand for this capability yet, so I'll leave it as is.
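
The zeroing step itself only takes a few lines with SPM's image I/O functions; here is a minimal sketch of that createBlankNIFTI-style step, with hypothetical filenames:

%Zero out a copy of an existing NIFTI using SPM's image I/O (filenames are hypothetical)
V       = spm_vol('template.nii');   %Header of an image with the desired dimensions and orientation
Y       = spm_read_vols(V);          %Read the data just to get the array size
V.fname = 'blank.nii';               %Write to a new file rather than clobbering the original
spm_write_vol(V, zeros(size(Y)));    %Same header, all-zero values

From there, reading blank.nii back in, overwriting whichever voxels you like, and saving it again with spm_write_vol gets you the createNIFTI.m half of the process.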



Parameter Extraction with MarsBar

Marsbar, a region of interest (ROI) tool interfacing with SPM, is a swiss-army knife of programs for ROI manipulation and data extraction. The most commonly used features of Marsbar are 1) The creation of ROIs from spheres or boxes centered on specified coordinates, and 2) The extraction of parameter or contrast estimates from ROIs. The following video tutorial focuses on the latter, in which parameter estimates for each subject are dumped out from a defined ROI.

For example, say you have two ROIs placed in distinct locations, and you wish to extract parameter estimates for the contrast A-B from each of those ROIs. Marsbar can do this easily, even flippantly, such a saucy and irreverent child it is. After your ROIs have been created, simply specify the SPM design you wish to extract parameter estimates from. In the case of second-level analyses, the SPM.mat files generated by these analyses will contain a number of time points equal to the number of subjects that went into the analysis; where Marsbar comes in is to take the parameter estimates for each subject and average them over the entire ROI, generating a list of averaged parameter values, one per subject.

Once this is done, save the results to a .mat file, load the file into memory, and check the output of SPM.marsY.Y (as in, "Why, Black Dynamite? Why?").
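
In code, that last step is nothing more than loading the saved file and pulling out the field; a minimal sketch, with a hypothetical filename:

load('marsbarResults.mat');  %Hypothetical name for the saved MarsBar results; loads a structure named SPM
SPM.marsY.Y                  %One averaged parameter estimate per subject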


More deets can be found in the following tutorial; for a text-based walkthrough, complete with pictures, check out this link. I believe that both of these approaches are valid with both SPM5 and SPM8 distributions; if not, I apologize.

Unlike when Black Dynamite was denied the chance to apologize for the life he took so needlessly.




SPM Jobman


Now that we have created our own .mat files from the SPM GUI and seen how it can be written to the disk, altered, and reloaded back into SPM, the hour is at hand for using the command spm_jobman. This is a command for those eager to disenthrall themselves from the tyranny of graphical interfaces through batching SPM processes from the command line.

I first met spm_jobman - also known as Tim - a few weeks ago at a conference, when I was at the nadir of my sorrows, despairing over whether I would ever be able to run SPM commands without the GUI. Suddenly, like a judge divinely sent in answer to the lamentations of the oppressed, spm_jobman appeared by my side, trig and smartly dressed, and said he would be more than happy to help out; and from my first impression of his bearing and demeanor, I believed I was in the presence of an able and reliable ally. Anyone who has ever met spm_jobman, I believe, has felt the same thing. However, as I learned too late, far from being a delight, he is a charmless psychopath; and he continues to infect my dreams with nameless horrors and the unrelenting screams of the abattoir.

spm_jobman has three main options to choose from: interactive, serial, and run. After choosing one of these options, you enter your jobs structure as the second argument, which is automatically populated after loading the .mat file from the command line. Interactive will load the traditional GUI with the options filled in from the jobs structure, which you can then modify and execute as you please; serial will prompt you to fill in each field, with the defaults set to the values in the jobs structure; and run will execute the jobs structure without bringing up the GUI. For most purposes, if you decide to run spm_jobman at all, you will want the run option, as this allows you to loop processes over subjects without pause, freeing you up for more useful tasks, such as Googling the history of the lint roller.
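
Concretely, the three modes look like this from the Matlab prompt, using the RunContrasts.mat file from the contrast post as an example (you would pick one mode, not run all three in sequence):

load('RunContrasts.mat');        %Populates the "jobs" structure
spm_jobman('interactive', jobs); %Opens the GUI with fields pre-filled from jobs
spm_jobman('serial', jobs);      %Prompts for each field, defaulting to the saved values
spm_jobman('run', jobs);         %Executes without the GUI - the one you want for looping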

Saving .mat files from SPM is immensely helpful in understanding the relationship between the .mat files created by SPM, and what exactly goes into them; and this will in turn reinforce your understanding of and ability to manipulate Matlab structures. The following tutorials show how the .mat file is generated from the SPM interface, which can then be used as a template for spm_jobman. I've been working with SPM for years now, but found out about this only recently; and I hope that it helps ease the burden of your SPM endeavors.



Checking Image Registration

Visually checking your image registration - in other words, the overlap between images aligned in a common space - is one of the staples of any FMRI analysis pipeline. Sadly, although everyone says that you should do it, not many people go through the trouble of visually inspecting image overlap; even though, in my opinion, checking your image registration is one of the sexiest things you can do. "Sexier than brushing your teeth at least once a week?" Not quite, but we gettin' there!


Example of faulty image registration. Each set of orthogonal views represents a single subject's mask after first-level analysis. The set highlighted with the yellow rectangle has suffered - terribly - from an error in image registration during preprocessing, which should be investigated further before moving on to a higher-level analysis.
In order to increase your attractiveness, I've written up a brief script - possibly my masterpiece - which will allow you to easily check the registration of a specified image. For example, you may want to check the masks for a group of subjects to make sure that they overlap, as a single mask which is far different from the others will lead to a truncated group mask. While not necessarily unsexy, missteps like these will only lead to average attractiveness. In other words, there's not anything really wrong with you, and there might be plenty of people who would settle for you, but...meh.
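
For the script-averse, SPM also ships with spm_check_registration, which displays several images at once in orthogonal views; a minimal sketch along the same lines, with hypothetical paths and subject numbers:

%Display each subject's first-level mask in orthogonal views for inspection
rootDir  = '/data/studyDir/';    %Hypothetical study directory
subjects = [101 102 103];
imgs     = cell(length(subjects), 1);
for s = 1:length(subjects)
    imgs{s} = [rootDir num2str(subjects(s)) '/RESULTS/mask.img'];
end
spm_check_registration(char(imgs)); %Takes a character matrix of filenames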


More details, including a demonstration of the script in action, can be seen in the following video.


John's Gems #5: Mesh Plots of T-Maps

For those of us who wish to act like we know more than we actually do, John Ashburner has a webpage of useful code snippets that allow you to do cool - maybe even useful - operations on your data. By using these gems, others may start to think that you are also cool - and maybe even useful.

Here I merely replicate John's Gem #5: Mesh Plots of T-maps. (Because coming up with something original on my own would be, like, you know, difficult.) While I cannot immediately think of a practical utility for this function, it does provide an interesting visualization of t-statistics as viewed through a transverse slice of the brain, as shown in the following figures:

Left: Transverse slice of a T-map seen from above. Right: Same image viewed at an angle. Note that the "hotter" spots are raised above the yellow plane threshold representing a t-score of 0, while "colder" spots deflect below the plane.
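
I won't paste John's code here - it is short, and worth reading at the source - but the core idea can be sketched in a few lines: pull a single transverse slice out of a t-map and hand it to surf. The filename and slice choice below are hypothetical:

%Mesh plot of one transverse slice of a t-map (filename and slice are hypothetical)
V     = spm_vol('spmT_0001.img');
Y     = spm_read_vols(V);
slice = Y(:,:,round(V.dim(3)/2));  %Middle transverse slice
slice(~isfinite(slice)) = 0;       %Zero voxels outside the analysis mask
figure; surf(double(slice'));      %Transpose so x runs left-to-right
shading interp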

I recommend checking out more of the code snippets, as they provide insight into how the professional neuroimaging programmers approach problems. By immersing yourself within their thoughts, it will have a salutary effect on your thoughts as well, cleansing your muddled mind and training you to become focused, forceful, and direct. At least, that's what I think might happen.


Region of Interest Analysis


Before we get down to regions of interest, a few words about the recent heat wave: It's taken a toll. The past few days I've walked out the door and straight into a slow broil, that great yellowish orb pasted in the cloudless sky like a sticker, beating down waves of heat that sap all the energy out of your muscles. You try to get started with a couple miles down the country roads, crenelated spires of heat radiating from the blacktop, a thin rime of salt and dust coating every inch of your skin, and realize the only sound in this inferno is the soles of your shoes slapping the cracked asphalt. Out here, even the dogs have given up barking. Any grass unprotected by the shade of nearby trees has withered and died, entire lawns turned into fields of dry, yellow, lifeless straw. Flensed remains of dogs and cattle and unlucky travelers lie in the street, bones bleached by the sun, eyeless sockets gazing skyward like the expired votaries of some angry sun god.

In short, it's been pretty brutal.

Regions of Interest

Region of Interest (ROI) analysis in neuroimaging refers to selecting a cluster of voxels or a brain region a priori (or, also very common, a posteriori) when investigating a region for effects. This can be done either by creating a small search space (typically a sphere with a radius of N voxels), or by using anatomical atlases available through programs like SPM or downloadable from the web. ROI analysis has the advantage of mitigating the fiendish multiple comparisons problem: a search space of potentially hundreds of thousands of voxels is reduced to a smaller, more tractable area, thus relaxing overly stringent multiple comparisons correction thresholds. At first glance this makes sense, given that you may not be interested in a whole brain analysis (i.e., searching for activation in every single voxel in the entire volume); however, it can also be abused to carry out confirmatory analyses after you have already determined where a cluster of activation is.

Simple example: You carry out a whole brain analysis and find a cluster of fifty voxels extending over the superior frontal sulcus. This is not a large enough cluster extent to pass cluster correction at the whole brain level, but you go ahead anyway and perform an additional ROI analysis focused on the center of the cluster. There are no real safeguards against this practice, as it is impossible to know what the researcher had in mind when they conducted the test. For instance, what if an investigator happened to simply make a mistake and saw the results of a whole brain analysis before doing an ROI analysis? Pretend that he didn't see them? These are questions which may be addressed in a future post about a Bayesian approach to fMRI, but for now, be aware that there exists significant potential for misuse of this technique.

Additional Considerations


Non-Independence
Colloquially known as "double-dipping," non-independence has become an increasingly important issue over the years as ROI analyses have become more common (see Kriegeskorte et al., 2009, 2010). In order to avoid biasing an ROI toward certain regressors, it is essential that the ROI and the contrast of interest share no common regressors. Consider a hypothetical experiment with three regressors: A, B, and C. The contrast A-B is used to define an ROI, and the experimenter then decides to test the contrast A-C within this ROI. As this ROI is already biased toward voxels that are more active in response to regressor A, this is a biased contrast to conduct. This problem is not unique to fMRI data; it applies to any statistical comparison where bias is a potential issue.

Correction for Multiple ROI Analyses
Ideally, each ROI analysis should be treated as an independent test and corrected for multiple comparisons.  That is, assuming that an investigator is agnostic about where the activation is to be found, the alpha threshold for determining significance should be divided by the number of tests conducted (e.g., with four ROIs, 0.05/4 = 0.0125 per test).  While this is probably rarely done, it is good practice to have a priori hypotheses about where activation is to be found, in order to avoid these issues of multiple comparisons.

ROI Analysis in AFNI

Within AFNI, there exists a useful program called 3dUndump which requires x, y, and z coordinates (in millimeters), radius size of the sphere, and the master dataset where the sphere will be applied. A typical command looks like:

3dUndump -prefix (OutputDataset) -master (MasterDataset) -srad (Radius of Sphere, in mm) -xyz (X, Y, and Z coordinates of sphere)


One thing to keep in mind is the orientation of the master dataset. For example, the standard template that AFNI warps to has a positive-to-negative gradient going from posterior to anterior; in other words, values in the Y-direction will be negative when moving forward of the anterior commissure. Thus, it is important to note the space and orientation of the coordinates off of which you are basing your ROI, and make sure they match the orientation of the dataset you are applying the ROI to. In short, look at your data after you have generated the ROI to make sure that it looks reasonable.


The following is a short Python wrapper I made for 3dUndump. Those already familiar with using 3dUndump may not find much use in it, but for me, having an interactive prompt is useful:



#!/usr/bin/env python


import os


#Read in Raw user input, assign to variables
print("MakeSpheres.py")
print("Created by Andrew Jahn, Indiana University 03.14.2012")
prefix = raw_input("Please enter the output filename of the sphere: ")
master = raw_input("Please enter name of master dataset (e.g., anat_final.+tlrc): ")
rad = raw_input("Please enter radius of sphere (in mm): ")
xCoord = raw_input("Please enter x coordinate of sphere (MNI): ")
yCoord = raw_input("Please enter y coordinate of sphere (MNI): ")
zCoord = raw_input("Please enter z coordinate of sphere (MNI): ")


#Make string of coordinates (e.g., 0 36 12)
xyzString = xCoord + " " + yCoord + " " + zCoord
printXYZString = 'echo ' + xyzString + ' > sphere_' + rad + 'mm_'+xCoord+'_'+yCoord+'_'+zCoord+'.txt'
os.system(printXYZString) #prints xyzstring to filename given above


#Will need sphere file in this format for makeSpheres function
xyzFile = 'sphere_' + rad + 'mm_'+xCoord+'_'+yCoord+'_'+zCoord+'.txt'


def makeSpheres(prefix, master, rad, xyz):
    cmdString = '3dUndump -prefix ' + prefix + ' -master ' + master + ' -srad ' + rad + ' -xyz ' + xyz
    os.system(cmdString)
    return


makeSpheres(prefix=prefix, master=master, rad=rad, xyz=xyzFile)
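
Run from the terminal, a session might look like the following (the output filename is made up; the coordinates and radius match the example below):

$ python MakeSpheres.py
MakeSpheres.py
Created by Andrew Jahn, Indiana University 03.14.2012
Please enter the output filename of the sphere: roiSphere
Please enter name of master dataset (e.g., anat_final.+tlrc): anat_final.+tlrc
Please enter radius of sphere (in mm): 5
Please enter x coordinate of sphere (MNI): 0
Please enter y coordinate of sphere (MNI): 30
Please enter z coordinate of sphere (MNI): 20

This writes the coordinate file sphere_5mm_0_30_20.txt and then calls 3dUndump to build the ROI dataset.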




Which will generate something like this (Based on a 5mm sphere centered on coordinates 0, 30, 20):


Beta values, time course information, etc., can then be extracted from within this restricted region.

ROI Analysis in SPM (Functional ROIs)

This next example will focus on how to do ROI analysis in SPM through MarsBar, a toolbox available here if you don't already have it installed. In addition to walking through ROI analysis in SPM, this will also serve as a guide to creating functional ROIs. Functional ROIs are based on results from other contrasts or interactions, which ideally should be independent of the test to be investigated within that ROI; else, you run the risk of double-dipping (see the "Non-Independence" section above).

After installation, you should see Marsbar as an option in the SPM toolbox dropdown menu:

1. Extract ROIs

After installing Marsbar, select it from the toolbox dropdown menu.  After Marsbar boots up, click on the menu “ROI Definition”.  Select “Get SPM clusters”.


This will prompt the user to supply an SPM.mat file containing the contrast of interest.  Select the SPM.mat file that you want, and click “Done”.

Select the contrast of interest just as you would when visualizing any other contrast in SPM.  Select the appropriate contrast and the threshold criteria that you want.




When your SPM appears on the template brain, navigate to the cluster you wish to extract, and click “current cluster” from the menu, underneath “p-value”.  Highlight the cluster you want in the SPM Results table.  The highlighted cluster should turn red after you select it.



Navigate back to the SPM results menu, and click on “Write ROIs”.  This option will only be available if you have highlighted a cluster.  Click on “Write one cluster”.  You can enter your own description and label for the ROI; leaving these as the defaults is fine.  Select an appropriate name for your ROI, and save it to an appropriate folder.




2. Export ROI

Next, you will need to convert your ROI from a .mat file to an image file (e.g., .nii or .img format).  Select “ROI Definition” from the Marsbar menu and then “Export”.



You will then be presented with an option for where to save the ROI. 



Select the ROI you want to export, and click Done.  Select the directory where you wish to output the image.  You should now see the ROI you exported, with a .nii extension.

3. ROI Analysis

You can now use your saved image for an ROI analysis.  Boot up the Results menu as you would to normally look at a contrast, but when prompted for “ROI Analysis?”, select “Yes”. You can select your ROI from the “Saved File” option; alternatively, you can mask activity based on atlas parcellations by selecting the “Atlas” option.



The constriction of the search space means that fewer multiple comparisons need to be corrected for, which in turn increases the statistical power of your contrast.