Slice Timing Correction in SPM

I have posted a couple of new videos about slice-timing correction in SPM: one from the GUI, and the other from the command line. The command-line usage is more interesting and informative, especially if you aim to batch your preprocessing without the graphical user interface; that will be the goal of this series of tutorials.

And just imagine - no more mindless pointing and clicking. No more sore wrists and carpal tunnel syndrome from those long nights of copying and pasting onset times, one after the other, subject after subject, until your mind becomes so warped that you accidentally end up copying and pasting a particularly spicy, shockingly personal, yet oddly poetic missive sent to an ex-girlfriend after quaffing one too many Smirnoff Ices, which then ends up estimating a general linear model of your pathetic and utter wretchedness. Obviously, this analysis will go into the supplementary materials.

To avoid this, slice-timing correction can instead be called through the function spm_slice_timing, which requires the following arguments:

P - A list of files to slice-time correct (these can be selected using spm_select)
sliceOrder - The order in which the slices were acquired
refslice - The reference slice, i.e. the slice treated as time zero
timing - A two-element vector: timing(1) is the time between adjacent slices, and timing(2) is the time between the last slice of one volume and the first slice of the next

sliceOrder can be assigned with a MATLAB concatenation command. For example, if the slices were acquired in an interleaved order starting with slice 1, and there were 35 slices total, the slice order could be written like this:

sliceOrder = [1:2:35 2:2:35];

This returns the odd-numbered slices from 1 to 35 in steps of 2, concatenated with the even-numbered slices from 2 to 34 in steps of 2, so that every slice appears exactly once.
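For readers who don't think in MATLAB, the same interleaved ordering can be sketched in Python (purely illustrative; SPM itself expects the MATLAB vector above):

```python
# Interleaved acquisition for 35 slices: odd slices first, then even slices.
# Mirrors the MATLAB expression [1:2:35 2:2:35].
n_slices = 35
odds = list(range(1, n_slices + 1, 2))   # 1, 3, ..., 35
evens = list(range(2, n_slices + 1, 2))  # 2, 4, ..., 34
slice_order = odds + evens               # every slice appears exactly once
```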

The timing variable is easy to fill in once you have both the TR and the TA. TR is your repetition time - for example, 2 seconds between volumes. TA is defined as TR-(TR/(num. of slices)), which in this case would be 2-(2/35) ~ 1.94. This is the time at which the last slice begins to be acquired; and, since the first slice was acquired at time 0, the time between adjacent slices can be calculated as TA/(nSlices-1), e.g. 1.94/34 ~ 0.057 (not significant, but trending towards it). Likewise, the value for the second field can be calculated as TR-TA, which also equals about 0.057. If the variables TR and TA have already been assigned values, then the fields of the timing variable can be filled in:

timing(1) = TA/(nSlices-1);
timing(2) = TR-TA;
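As a sanity check on that arithmetic, here is the same calculation sketched in Python (illustration only; in SPM you would do this in MATLAB):

```python
# Timing arithmetic for TR = 2 s and 35 slices (values from the example above).
TR = 2.0
n_slices = 35
TA = TR - TR / n_slices            # onset of the last slice, ~1.943 s
timing = [TA / (n_slices - 1),     # time between adjacent slices, ~0.057 s
          TR - TA]                 # last slice to start of next volume, ~0.057 s
```

Note that both entries work out to exactly TR/nSlices (about 0.0571 s), which is a handy way to double-check the numbers.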

With this in hand, spm_slice_timing can be filled in as follows:

spm_slice_timing(spm_select('FPList', pwd, '^r01.nii'), [1:2:35 2:2:35], 1, timing)

Both versions of slice timing correction can be found in the following tutorials:

The GUI version of slice timing correction. This is for little kids and grandmas.

The command-line version of slice timing correction. Now we're talking; this is the real sh*t. Henceforth shall you be set on the path towards nerd glory, and your exploits shall be recorded in the blank verse epic, Childe Roland to the Nerd Tower Came.

Slice Timing Correction

fMRI suffers from the disease of temporal uncertainty. The BOLD response is sluggish and unreliable; cognitive processes are variable and are difficult to model; and each slice of a volume is acquired at a different time. This last symptom is addressed by slice-timing correction (STC), which attempts to shift the data acquired at each slice in order to align them at the same time point. Without it, all would be lost.

Figure stolen from Sladky et al (2011): Assuming that the BOLD response is roughly equivalent across slices, each successive slice samples a different timepoint. STC rectifies this by interpolating what the value at time zero would have been if all slices had been acquired simultaneously.

"Madness!" you cry; "How can we know what happened at a time point that was not directly measured? How can we know anything? Is what I perceive the same as what everybody else perceives?" A valid criticism, but one that has already been hunted down and crushed by temporal interpolation - the estimation of a timepoint by looking at its neighbors. "But how reliable is it? Will the timecourse not be smoothed by simply averaging the neighboring points?" Then use a higher-order interpolation, whelp, and be silent.
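To make the idea concrete, here is a minimal sketch of temporal interpolation using simple linear interpolation (SPM and AFNI offer higher-order methods such as sinc/Fourier interpolation). The signal, TR, and slice delay below are all made-up values for illustration:

```python
import math

# Hypothetical setup: one slice is acquired 0.5 s after the start of each 2 s TR.
TR, delay = 2.0, 0.5
signal = lambda t: math.sin(2 * math.pi * t / 20)  # idealized, smooth "BOLD" signal

# The values actually measured for this slice, at times t = i*TR + delay
acquired = [signal(i * TR + delay) for i in range(12)]

# Estimate what would have been measured at t = i*TR by linearly interpolating
# between the two neighboring samples. The target time sits a fraction
# (TR - delay)/TR of the way from sample i-1 to sample i.
frac = delay / TR
corrected = [acquired[0]] + [
    frac * acquired[i - 1] + (1 - frac) * acquired[i]
    for i in range(1, len(acquired))
]
```

Even this crude linear scheme recovers the time-zero values of a smooth signal quite closely; higher-order interpolation does better still, which is exactly the retort above about not simply averaging the neighboring points.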

The merits of STC have been debated, as well as when it should be used in the preprocessing stream. However, it is generally agreed that STC should be included in order to reduce estimation bias and increase sensitivity (Sladky et al, 2011; Calhoun et al, 2000; Henson et al, 1999), and that it should occur before volume coregistration or any other spatial interpolations of the data. For example, consider a dataset acquired at an angle from the AC/PC line (cf. Deichmann et al, 2004): If STC is performed after realigning the slices to be parallel to the AC/PC line, then the corresponding slices for each part of the brain are altered and temporal interpolation becomes meaningless; that way lies darkness and suffering.

If unnecessary interpolations offend your sensibilities, other options are available, such as incorporating temporal derivatives into your model or constructing regressors for each slice (Henson et al, 1999). However, standard STC is the most straightforward and lowest-maintenance of these options.

Slice-Timing Correction in AFNI is done through 3dTshift. Supply it with the following:

  1. The slice you wish to align to (usually either the first, middle, or last slice);
  2. The sequence in which the slices are acquired (ascending, descending, sequential, interleaved, etc.);
  3. Preferred interpolation method (the higher the order, the better, with Fourier being the Cadillac of interpolation methods); and
  4. Prefix for your output dataset.

Sample command:
3dTshift -tzero 0 -tpattern altplus -quintic -prefix tshift [[input dataset goes here]]

More details, along with an interactive example of how STC works, can be found in the following tutorial video.

FSL Tutorial 2: FEAT (Part 1)

A new tutorial about FEAT is now up; depending on how long it takes to get through all of the different tabs in the interface, this may be a three-part series. In any case, this will serve as a basic overview of the preprocessing steps in FEAT, most of which can be left at their defaults.

The next couple of tutorials will cover setting up models and timing files within FSL, which can be a little tricky. For those of you who have stuck with it from the beginning (and I have heard that there are a few of you out there: Hello), there will be some more useful features coming up, aside from reviewing the basics.

Eventually we will get around to batch scripting FEAT analyses, which can save you several hours of mindless pointing and clicking, and leave you with plenty of time to watch Starcraft 2 replays, or Breaking Bad, or whatever it is kids watch these days.