Today was a walkthrough of the AFNI interface, and a tutorial on how to view timeseries data and model fits. Daniel started off with a showcase of AFNI’s ability to graph the timeseries at each stage of preprocessing, and to show how the data change as a result of each step. For example, after scaling the raw MR signal to a percentage, the values at each TR in the timeseries graph begin to cluster around 100. This number is arbitrary, but it allows one to make inferences about percent signal change, as opposed to raw parameter estimates. Since this scaling is done separately for each voxel (as opposed to grand mean scaling in SPM, which divides every voxel’s value by the mean signal intensity across the entire brain), it becomes more reasonable to talk about percent signal change at each individual voxel.
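The scaling step itself boils down to two commands: compute each voxel’s mean over time, then divide by that mean and multiply by 100. A sketch, along the lines of what afni_proc.py generates (dataset names here are hypothetical):

```shell
# Hypothetical dataset names; this mirrors the per-run scaling block
# that afni_proc.py typically writes.

# Mean of each voxel's timeseries
3dTstat -prefix rm.mean_r01 pb03.subj.r01.blur+tlrc

# Scale each voxel to percent of its own mean, capped at 200;
# step(a)*step(b) zeroes out voxels with no signal
3dcalc -a pb03.subj.r01.blur+tlrc -b rm.mean_r01+tlrc \
       -expr 'min(200, a/b*100)*step(a)*step(b)'      \
       -prefix pb04.subj.r01.scale
```

After this, a beta of 0.5 for a voxel really does mean a 0.5% signal change relative to that voxel’s own baseline.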
Another cool feature is the ability to overlay the model fits produced by 3dDeconvolve on top of the raw timeseries. This is especially useful when browsing voxels across the brain to see how well each one correlates with the model (although this may be easier to see with a block design than with an event-related design). You can extract an ideal timeseries from the X matrix output by 3dDeconvolve with 1dcat [X matrix column] > [output 1D file], and then overlay one or more ideal timeseries by clicking on Graph -> 1dTrans -> Dataset#. (Insert picture of this here).
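Concretely, the extraction looks something like this, using AFNI’s sub-column selector to pull one regressor out of the design matrix (file name and column index are hypothetical):

```shell
# 3dDeconvolve writes its design matrix to the file given by -x1D
# (conventionally X.xmat.1D). The '[n]' selector pulls out column n;
# quote it so the shell doesn't try to expand the brackets.
1dcat X.xmat.1D'[6]' > ideal_task.1D

# Quick sanity check: plot the extracted ideal timeseries
1dplot ideal_task.1D
```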
One issue that came up when Ziad was presenting was the fact that using dmBLOCK as a basis function, to convolve each onset with a boxcar of its own duration, does not take individual scaling into account. That is, if one event lasts six seconds and another lasts ten seconds, they will be scaled by the same amount, although in principle their amplitudes should differ, since the BOLD response has not yet saturated at those durations. I asked if they would fix this, and they said that they would, soon. Fingers crossed!
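For reference, the way per-event durations get into dmBLOCK is through the timing file itself, with each entry written as onset:duration (the onsets and durations below are made up):

```shell
# One row per run; each event is onset:duration in seconds.
printf '10:6 45:10 80:6\n' > stim_dm.1D

# 3dDeconvolve would then take this file via the duration-modulated
# stim_times option, e.g.:
#   3dDeconvolve ... -stim_times_AM1 1 stim_dm.1D 'dmBLOCK' ...
cat stim_dm.1D
```

The scaling complaint above is about what happens after this point: the 6s and 10s events get boxcars of different widths, but the same peak amplitude.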
Outside of the lectures, Bob introduced me to AlphaSim’s successor, 3dClustSim. For those who haven’t used it, AlphaSim calculates how many contiguous voxels you need at a specified uncorrected threshold in order to pass a corrected cluster threshold. That is, AlphaSim runs several thousand simulations of white noise, and calculates the cluster extents that would appear by chance at different uncorrected thresholds. 3dClustSim does the same thing, except that it is much faster, and can calculate several different corrected thresholds simultaneously. The uber scripts call on 3dClustSim to make these calculations for the user, and then write this information into the header of the statistics datasets. You can see the corrected cluster thresholds for each cluster under the “Rpt” button of the statistics screen.
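A sketch of what a 3dClustSim call looks like, with a hypothetical group mask and smoothness estimates; the ability to pass several values to -pthr and -athr at once is exactly the "several thresholds simultaneously" feature mentioned above:

```shell
# Hypothetical mask and per-axis smoothness values.
# -pthr: uncorrected per-voxel p thresholds to simulate
# -athr: desired corrected (cluster-wise) alpha levels
3dClustSim -mask mask_group+tlrc        \
           -fwhmxyz 5.8 5.9 6.1         \
           -pthr 0.05 0.01 0.005 0.001  \
           -athr 0.05 0.01
```

The output is a table of minimum cluster sizes, one per combination of uncorrected threshold and corrected alpha.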
On a related note, 3dClustSim takes into account smoothing that was done by the scanner before any preprocessing. I had no idea this happened, but apparently the scanners are configured to introduce a small amount of smoothing (e.g., 2mm) into each image as it is output. Because of this, the uber scripts estimate the average smoothness across the entire subject in the x, y, and z directions, which are not always the same. So if you used a smoothing kernel of 4mm, your estimated smoothness may be closer to 6mm, and this is the full width at half maximum that should be used when calculating cluster correction levels in AlphaSim or 3dClustSim. Another tidbit I learned is that Gaussian Random Field theory (SPM’s method of calculating cluster correction) is “difficult” at smoothing kernels less than 10mm. I have no idea why, but Bob told me so, so I treat it as gospel. Also, by “difficult”, I mean that it has a hard time finding a true solution for the correct cluster correction level.
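The estimation itself is done with 3dFWHMx; a minimal sketch with hypothetical file names, run on the residual timeseries so that task-related structure does not inflate the estimate:

```shell
# Estimate the actual smoothness of the data in x, y, and z.
# errts is the residual dataset from 3dDeconvolve; the mask restricts
# the estimate to in-brain voxels.
3dFWHMx -mask mask_group+tlrc -input errts.subj+tlrc
```

The three numbers it prints are what get fed to 3dClustSim (via -fwhmxyz), rather than the nominal kernel you applied.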
I found out that, in order to smooth within a mask such as grey matter, AFNI has a tool named 3dBlurInMask for that purpose. It is called at the smoothing step, in place of 3dmerge or 3dBlurToFWHM, whichever you are using for smoothing. This sounds great in theory, since most of the time we are smoothing across white matter and a bunch of other crap from outside the brain which we don’t care about. At least, I don’t care about it. The only drawback is that it suffers from the same problem as conventional smoothing, i.e. that there is no assurance of good overlap between subjects, and the resulting group activation may not be where it was at the individual level. Still, I think it is worth trying out.
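A sketch of the call, with hypothetical dataset and mask names:

```shell
# Smooth only within the grey-matter mask; voxels outside the mask are
# left untouched, so nothing from white matter or outside the brain
# gets blurred into the signal of interest.
3dBlurInMask -input pb02.subj.r01.volreg+tlrc \
             -FWHM 4                          \
             -mask gm_mask+tlrc               \
             -prefix pb03.subj.r01.blur
```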
The last person I talked to was Gang Chen, the statistician. I asked him whether AFNI was going to implement a Bayesian inference application for parameter estimation anytime soon. He told me that such an approach is unfeasible at the voxel level, as calculating HDIs is extremely computationally intensive (just think of how many chains, samples, and thinning steps a single model requires, then multiply that by tens of thousands of individual tests). Although I had heard that FSL uses a Bayesian approach, it isn’t really Bayesian; it is essentially the same as what 3dMEMA does, which is to weight high-variability parameter estimates less than high-precision parameter estimates. Apparently a true-blue Bayesian approach can be done at the second level, but it can take up to several days. Still, it is something I would like to investigate more, comparing results from AFNI with FSL’s Bayesian method to see whether there is any meaningful difference between the two.