DTI Analysis: Soup to Nuts Playlist




Instead of going through each DTI analysis step individually, I've collated everything into a YouTube playlist below. Just remember that we are using data from the FSL practical course here, and also remember that suing somebody for giving out bad advice, although it is admittedly an easy way to become fantastically wealthy, won't necessarily make you happier.

In any case, just to briefly go over the rest of the steps: After correcting for magnetic field distortions and eddy currents, tensors are fitted using the dtifit command (or simply by going through the FDT menu in the FSL interface). Once this has been done for each subject, a series of TBSS tools are used, each one prefixed by "tbss"; for example, tbss_1_preproc, tbss_2_reg, and so on. (You can find all of these in the $FSLDIR/bin directory, and if you have a good handle on Unix programming, you can inspect the code yourself.) After you have run all of those for your dataset, you set up the appropriate experimental design and contrasts, and use the randomise tool to perform permutation statistics on the skeletonised FA data.
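
Just as a sketch (not gospel), that pipeline in command form might look something like the following; all of the filenames, the design files, and the 500 permutations are placeholders you would swap out for your own:

# fit tensors for one subject (repeat per subject)
dtifit --data=data.nii.gz --mask=nodif_brain_mask.nii.gz --bvecs=bvecs --bvals=bvals --out=dti

# TBSS, run from a directory containing all subjects' FA images
tbss_1_preproc *.nii.gz
tbss_2_reg -T
tbss_3_postreg -S
tbss_4_prestats 0.2

# permutation statistics on the skeletonised data (design.mat/design.con
# made beforehand, e.g., with the Glm GUI or design_ttest2)
cd stats
randomise -i all_FA_skeletonised -o tbss_FA -m mean_FA_skeleton_mask -d design.mat -t design.con -n 500 --T2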

Keep in mind that this is just beginner's material; if you were to compare your DTI competence to dating, it would be like you were still in that awkward teenager phase, unable to talk to anybody or make eye contact.

However, much of this material has already been covered in other blogs and online resources, provided by several other highly talented scientists and researchers, and - as much as I constantly fantasize about establishing a monopoly over neuroimaging information - there is no practical way to destroy them all.

Therefore, instead of posting redundant information, I highly recommend checking out an ongoing step-by-step series on TORTOISE by Peter Molfese, whose results you can compare against what you get with FSL, and another blog dedicated to diffusion imaging, diffusion-imaging.com. The latter site covers virtually every piece of diffusion imaging software and pipeline out there, and is a good place to start.


Beta Series Analysis in SPM

The following is a script to run beta series analysis in SPM, which is conceptually identical to beta series analysis in AFNI, just made more complicated by the use of Matlab commands, many of which I would otherwise remain totally ignorant of.

Also, if you believe that I did this on my own, you are a scoundrel and a fool. Virtually all of the code is from my colleague Derek Nee, a postdoc who used to work in my lab and who remains my superior in all things, scientific and otherwise. He also once got me with the old "If you say gullible slowly, it sounds like orange" joke.

In any case, all I did was repackage it into a function to make it more accessible to people outside of our lab. You may see a couple of snippets of code that are useful beyond beta series analysis; for example, the spm_get_data command, which can be used for parameter extraction from the command line without using marsbar. Also note that the script assumes that each trial for a condition has been labeled separately with the regressor name followed by an underscore and then the occurrence number of the trial. So, for example, if I were looking at 29 instances of pressing the left button, and the regressor name was LeftButtonPress, I would have LeftButtonPress_1, LeftButtonPress_2, LeftButtonPress_3...LeftButtonPress_29.
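
As a quick illustration of that spm_get_data trick, here is a minimal sketch of pulling the average value of an image within a mask, no marsbar required; the filenames are hypothetical, and the mask is assumed to be on the same voxel grid as the image:

%Minimal sketch: average a beta image within an ROI
%(beta_0001.img and myROI.nii are hypothetical filenames)
V       = spm_vol('beta_0001.img');            %header of the image to sample
Y       = spm_read_vols(spm_vol('myROI.nii')); %load the ROI mask
[x,y,z] = ind2sub(size(Y),find(Y>0));          %voxel subscripts inside the ROI
vals    = spm_get_data(V,[x y z]');            %values at those voxels
roiMean = nanmean(vals)                        %mean across the ROI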

Obviously, this is very tedious to do by hand. The Multiple Conditions option through the SPM GUI for setting up 1st-level analyses is indispensable; but covering that is another tutorial, one that will probably cross over into our discussion of GLMs.
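
(For the curious, the Multiple Conditions file is just a .mat file containing cell arrays called names, onsets, and durations. A sketch of generating the trial-wise names above might look like this, where leftOnsets is a hypothetical vector of onset times in seconds:)

%Minimal sketch: trial-wise names/onsets/durations for a Multiple Conditions file
nTrials   = numel(leftOnsets);
names     = arrayfun(@(k) sprintf('LeftButtonPress_%d',k), 1:nTrials, 'UniformOutput', false);
onsets    = num2cell(leftOnsets);           %one onset per trial-wise regressor
durations = num2cell(zeros(1,nTrials));     %zero duration for event-related designs
save('multicond.mat','names','onsets','durations');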


The script:

function DoBetaSeries(rootdir, subjects, spmdir, seedroi, Conds, MapNames)
% function DoBetaSeries(rootdir, subjects, spmdir, seedroi, Conds, MapNames)
% INPUT:
%  rootdir  - Path to where subjects are stored
%  subjects - List of subjects (can concatenate with brackets)
%  spmdir   - Path to folder containing SPM file
%  seedroi  - Absolute path to file containing ROI in NIFTI format
%  Conds    - List of conditions for beta series analysis
%  MapNames - Output names for the correlation maps, one per Conds cell
%       
%   Example use:
%       DoBetaSeries('/data/study1/fmri/', [101 102 103],
%       'model/BetaSeriesDir/', '/data/study1/Masks/RightM1.nii',
%       {'TapLeft'}, {'TapLeft_RightM1'})
%
%       conditions to be averaged together should be placed together in a cell
%       separate correlation maps will be made for each cell
%       Conds = {'Incongruent' 'Congruent' {'Error' 'Late'}}; 
%
%       For MapNames, there should be one per Conds cell above
%       e.g., with the Conds above, MapNames = {'Incongruent_Map',
%       'Congruent_Map', 'Error_Late_Map'}
%
%       Once you have z-maps, these can be entered into a 2nd-level
%       1-sample t-test, or contrasts can be taken between z-maps and these
%       contrasts can be taken to a 1-sample t-test as well.
%
% 

if nargin < 6
    disp('Need rootdir, subjects, spmdir, seedroi, Conds, MapNames. See "help DoBetaSeries" for more information.')
    return
end


%Find XYZ coordinates of ROI
Y = spm_read_vols(spm_vol(seedroi),1);
indx = find(Y>0);
[x,y,z] = ind2sub(size(Y),indx);
XYZ = [x y z]';


%Find each occurrence of a trial for a given condition
%These will be stacked together in the Betas array
for i = 1:length(subjects)
    subj = num2str(subjects(i));
    disp(['Loading SPM for subject ' subj]);
    %Can change the following line of code to CD to the directory
    %containing your SPM file, if your directory structure is different
    cd([rootdir subj filesep spmdir]);
    load SPM;
    
    for cond = 1:length(Conds)
        Betas = [];
        currCond = Conds{cond};
        if ~iscell(currCond)
            currCond = {currCond};
        end
        for j = 1:length(SPM.Vbeta) 
            for k = 1:length(currCond)
                if ~isempty(strfind(SPM.Vbeta(j).descrip,[currCond{k} '_'])) 
                    Betas = strvcat(Betas,SPM.Vbeta(j).fname);
                end
            end
        end
              

        %Extract beta series time course from ROI
        %This will be correlated with every other voxel in the brain
        if ischar(Betas)
            P = spm_vol(Betas);
        end

        est = spm_get_data(P,XYZ);
        est = nanmean(est,2);

        
        
        %----Do beta series correlation between ROI and rest of brain---%

            MapName = MapNames{cond};
            disp(['Performing beta series correlation for ' MapName]);
            
            Vin = spm_vol(Betas);
            nimgo = size(Vin,1);
            nslices = Vin(1).dim(3);

            % create new header files
            Vout_r = Vin(1);   
            Vout_p = Vin(1);
            [pth,nm,xt] = fileparts(deblank(Vin(1).fname));
            Vout_r.fname = fullfile(pth,[MapNames{cond} '_r.img']);
            Vout_p.fname = fullfile(pth,[MapNames{cond} '_p.img']);

            Vout_r.descrip = ['correlation map'];
            Vout_p.descrip = ['p-value map'];

            Vout_r.dt(1) = 16;
            Vout_p.dt(1) = 16;

            Vout_r = spm_create_vol(Vout_r);
            Vout_p = spm_create_vol(Vout_p);

            % Set up large matrix for holding image info
            % Organization is time by voxels
            slices = zeros([Vin(1).dim(1:2) nimgo]);
            stack = zeros([nimgo Vin(1).dim(1)]);
            out_r = zeros(Vin(1).dim);
            out_p = zeros(Vin(1).dim);


            for i = 1:nslices
                B = spm_matrix([0 0 i]);
                %creates plane x time
                for j = 1:nimgo
                    slices(:,:,j) = spm_slice_vol(Vin(j),B,Vin(1).dim(1:2),1);
                end

                for j = 1:Vin(1).dim(2)
                    stack = reshape(slices(:,j,:),[Vin(1).dim(1) nimgo])';
                    [r p] = corr(stack,est);
                    out_r(:,j,i) = r;
                    out_p(:,j,i) = p;

                end

                Vout_r = spm_write_plane(Vout_r,out_r(:,:,i),i);
                Vout_p = spm_write_plane(Vout_p,out_p(:,:,i),i);

            end

                
            %Convert correlation maps to z-scores
            %NOTE: If there is an uneven number of trials across the
            %conditions you are comparing, leave out the "./(1/sqrt(n-3))"
            %term in the zdata variable assignment below, as it can bias
            %results towards conditions with more trials
            
                disp(['Converting correlation maps for subject ' subj ', condition ' MapNames{cond} ' to z-scores'])
                n = size(Betas,1);
                Q = MapNames{cond};
                Vin = spm_vol([Q '_r.img']);

                % create new header files
                Vout = Vin;   

                [pth,nm,xt] = fileparts(deblank(Vin(1).fname));
                Vout.fname = fullfile(pth,[Q '_z.img']);

                Vout.descrip = ['z map'];

                Vout = spm_create_vol(Vout);

                data = spm_read_vols(Vin);
                zdata = atanh(data)./(1/sqrt(n-3));

                spm_write_vol(Vout,zdata);

    end
end



This can either be copied and pasted into a script, or downloaded as a .m file here.




Creating Functional Connectivity Maps in AFNI (Second Step on the Road Towards Glory)

Having used 3dSynthesize to create a dataset with the effects of no interest, we are now ready to subtract those effects from the dataset that went into 3dDeconvolve; for those of you playing along with the AFNI_data6 tutorial data, this is the all_runs.FT+orig dataset.
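
(If you need a refresher, that earlier step looked something like the line below; the -cbucket and X-matrix filenames here are hypothetical, standing in for whatever you used in your 3dDeconvolve run, and '-select baseline' pulls out the drift and motion regressors, provided motion was entered as part of the baseline model:)

3dSynthesize -cbucket cbucket.FT+orig -matrix X.xmat.1D -select baseline -prefix effectsNoInterest

With that nuisance dataset in hand, we can use 3dcalc for the subtraction: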

3dcalc -a all_runs.FT+orig -b effectsNoInterest+orig -expr 'a-b' -prefix cleanData+orig

This will create a time series with all the drift correction and motion parameter schmutz subtracted out and cast into the pit. What is left over will be signal untainted by motion or drift effects, which we can then use to generate a seed time series.

Before that, however, there is the additional step of warping our data to a template space (such as MNI or Talairach). Assuming you have already warped your anatomical dataset to a template space either manually or using @auto_tlrc, and assuming that there is already good alignment between your anatomical and functional data, you can use the command adwarp:

adwarp -apar anat_final.FT+tlrc -dpar cleanData+orig -dxyz 3 -prefix EPI_subj_01

(Note: There are better ways to do warping, such as the warping option built into align_epi_anat.py (which calls upon @auto_tlrc), but for pedagogical purposes we will stick with adwarp.)

Once this is done, all we need to do is select a seed region for our functional connectivity analysis. This could be a single voxel, a region of interest that averages over an entire area of voxels, or a region selected based on a contrast or an anatomical landmark; it depends on your research question, or whether you want to mess around and do some exploratory analyses. For now, we will use a single voxel using XYZ coordinates centered on the left motor cortex (note that, in this coordinate system, left is positive and posterior is positive):

3dmaskdump -noijk -dbox 20 19 53 EPI_subj_01+tlrc > ideal_ts.1D

Because 3dmaskdump outputs a single row, we will need to transpose this into a column:

1dtranspose ideal_ts.1D LeftMotorIdeal.1D

After you have output your ideal timeseries, open up the AFNI viewer and make sure that what was output matches up with the actual timeseries. In this example, set EPI_subj_01+tlrc as an underlay and right click in the slices viewer to jump to XYZ coordinates of 20, 19, 53; open up the timeseries graph as well, and scroll to the first time point. Does the value there match up with the first entry of your ideal time file? If not, you may have mis-specified the voxel you wanted, or you are using a different XYZ grid than is displayed in the upper left corner of the AFNI viewer. If you run into this situation, consult your local AFNI guru; if you don't have one of those, well...
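
(A quick way to eyeball the extracted time series without leaving the terminal is AFNI's 1dplot:

1dplot LeftMotorIdeal.1D

If the shape matches what you see in the graph window, you're in business.)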

In any case, we now have all the ingredients for the functional connectivity analysis, and we're ready to pull the trigger. Assemble your materials using 3dfim+ (which, as far as I can understand, stands for "functional intensity map," or something like that):

3dfim+ -polort 3 -input EPI_subj_01+tlrc -ideal_file LeftMotorIdeal.1D -out Correlation -bucket Corr_subj01

You will now have a dataset, Corr_subj01+tlrc, which is a functional connectivity map between your seed region and the rest of the brain. Note that the threshold slider will now be in r values instead of beta coefficients or t-statistics; therefore, make sure the order of magnitude button (next to the '**' below the slider) is set to 0, to restrict the range to between -1 and +1.

The last step is to convert these correlation values into z-scores, to reduce skewness and make the distribution more normal. This is done through Fisher's r-to-z transformation, given by the formula z = (1/2) * ln((1+r)/(1-r)). Using 3dcalc, we can apply this transformation:

3dcalc -a Corr_subj01+tlrc -expr 'log((1+a)/(1-a))/2' -prefix Corr_subj01_Z+tlrc

Those z-maps can then be entered into a second-level design just like any other, using 3dttest++ or some other variation.
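
For example, a one-sample test across subjects might look something like this (the per-subject z-map names beyond subject 01 are hypothetical):

3dttest++ -setA Corr_subj01_Z+tlrc Corr_subj02_Z+tlrc Corr_subj03_Z+tlrc -prefix GroupConnectivity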





Sanity check: When viewing the correlation maps, jump to your seed region and observe the r-value. It should be exactly 1, which makes sense: this is the region we correlated with every other part of the brain, and any time series correlates perfectly with itself.

Like me.