AFNI Bootcamp: Day 3

Today, Bob began with an explanation of how AFNI’s amplitude modulation (cf. SPM’s parametric modulation) differs from other packages’ approaches. For one, not only are the estimates for each parametric modulator computed, but so is the estimate of the beta itself. This leads to estimates of how much variance can be explained by the parametric modulators, above and beyond the beta itself. The beta estimates for those parametric modulators can then be carried to the second level, just like any other parameter estimate.

To give a concrete example, take an experiment that presents the subject with a gamble varying on three dimensions: probability of win, magnitude of win, and the variance of the gamble. Let us say that the onset of the gamble occurred 42 seconds into the run, and that the values on those dimensions were 0.7, 10, and 23. In this case, the onset time of the gamble would be parametrically modulated by these dimensions, and would be represented in a timing file as 42*0.7,10,23. [Insert AM_Example.jpg here]. Notice that the resulting parametric modulators are mean-centered across trials, which here results in negative values for probability and variance. The purpose of the amplitude modulation is to see what proportion of the variance in the BOLD response is due to these additional variables driving the amplitude of the BOLD signal; if a modulator is a particularly good fit, then the resulting t-statistic for its beta weight will be relatively high.
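To make the timing format and the mean-centering concrete, here is a minimal Python sketch. The onsets and modulator values are made-up examples; in AFNI the centering happens inside 3dDeconvolve when you use -stim_times_AM2, this just reproduces the arithmetic so you can see why some centered values come out negative.

```python
# Hypothetical gamble onsets (seconds) and per-trial modulators:
# (probability of win, magnitude of win, variance of the gamble)
onsets = [42.0, 96.0, 150.0]
mods = [
    (0.7, 10.0, 23.0),
    (0.3, 20.0, 40.0),
    (0.5, 15.0, 30.0),
]

# One timing-file entry per event: onset*mod1,mod2,mod3
lines = ["{}*{}".format(t, ",".join(str(m) for m in ms))
         for t, ms in zip(onsets, mods)]
print(lines[0])  # -> 42.0*0.7,10.0,23.0

# -stim_times_AM2 subtracts each modulator's mean across all trials,
# so any trial value below that mean becomes a negative regressor value.
n = len(mods)
means = [sum(col) / n for col in zip(*mods)]
centered = [[m - mu for m, mu in zip(ms, means)] for ms in mods]
print(centered[0])  # first trial: magnitude and variance fall below their means
```

With these toy numbers the first trial's magnitude (10 vs. a mean of 15) and variance (23 vs. a mean of 31) center to negative values, which is exactly the sign flip visible in the figure.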

Related to this, 3dREMLfit was mentioned yet again, as Bob pointed out how it takes into account both the beta estimate and the variance surrounding that estimate (i.e., the beta estimate’s associated t-statistic). A high beta estimate does not necessarily imply a high t-statistic, and vice versa, which is why it makes sense to carry this information to the group level. However, none of the AFNI developers that I talked to definitively stated whether 3dMEMA or the least-squares method was preferable; that is entirely up to the user. I examined this with my own data, looking at a contrast of two regressors at the second level using both OLSQ and 3dMEMA. As the resulting pictures show, both methods show patterns of activation in the rostral ACC (similar to what I was getting with SPM), although 3dMEMA produces an activation map that passes cluster correction, while OLSQ does not. Which should you use? I don’t know. I suppose you can try both, and whichever gives you the answer that you want, you should use. If you use 3dMEMA and it doesn’t give you the result that you want, you can just claim that it’s too unreliable to be used just yet, and so make yourself feel better about using a least-squares approach.
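To see why carrying each subject's variance to the group level can change the answer, here is a toy sketch contrasting an unweighted (OLSQ-style) group mean with an inverse-variance weighted mean, which is the core idea behind a mixed-effects meta-analysis like 3dMEMA (the real program also models between-subject variance; these numbers are invented for illustration).

```python
# Hypothetical per-subject beta estimates and their variances.
# The last subject is an outlier, but also a very noisy estimate.
betas     = [2.0, 1.8, 2.2, -4.0]
variances = [0.5, 0.4, 0.6, 25.0]

# OLSQ-style group mean: every subject counts equally,
# so the noisy outlier drags the estimate down.
ols_mean = sum(betas) / len(betas)

# Precision-weighted mean: subjects with tighter estimates
# (higher t-statistics, roughly) get more weight.
weights = [1.0 / v for v in variances]
wmean = sum(w * b for w, b in zip(weights, betas)) / sum(weights)

print(round(ols_mean, 2))  # pulled toward the outlier
print(round(wmean, 2))     # stays near the three reliable subjects
```

The unweighted mean lands at 0.5, while the precision-weighted mean stays near 1.9, close to the three reliable subjects. This is one plausible mechanism for why a 3dMEMA map might survive cluster correction where an OLSQ map does not, though of course the reverse can happen too.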

After a short break, Ziad discussed AFNI’s way of dealing with resting state data via a program called RETROICOR. I have no idea what that stands for, but it accounts for heart rate and respiration variability for each subject, which is critically important when interpreting a resting state dataset. Because the relationship between physiological noise, especially heart rate, and the BOLD signal is poorly understood, it is reasonable to covary this signal out, in order to be able to claim that what you are looking at reflects true differences in neural activation between conditions or groups or whatever you are investigating (although using the term “neural activation” is a bit of a stretch here). Apparently this is not done that often, and neither is accounting for motion, which can be a huge confound as well. All of the confounds listed above can produce small effect sizes biased in a certain direction, but do so consistently, leading to significant activation at the group level that has nothing to do with the participant’s true resting state (although again, the term “resting state” is a bit of a stretch, since you have no idea what the participant is doing, and there are no tasks to regress against to explain the timeseries data). In any case, this is an area I know very little about, but the potential pitfalls seem serious enough to warrant staying away from it unless I have a really good reason for doing it.
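For the record, RETROICOR stands for RETROspective Image CORrection (Glover et al., 2000), and its central trick is simple: model cardiac and respiratory noise as a low-order Fourier series of each physiological signal's phase at every TR, then regress those series out as nuisance covariates. The sketch below builds such regressors from toy phase values; in practice AFNI derives the phases from the recorded physio traces, and the resulting columns go into the model as nuisance regressors alongside the motion parameters.

```python
import math

def retroicor_regressors(phases, order=2):
    """One row per TR: [cos(m*phi), sin(m*phi)] for m = 1..order,
    i.e. a low-order Fourier expansion of the physiological phase."""
    return [[f(m * phi) for m in range(1, order + 1)
             for f in (math.cos, math.sin)]
            for phi in phases]

# Toy cardiac phase (radians) at each of five TRs
cardiac_phase = [0.0, 1.2, 2.9, 4.4, 5.8]
regs = retroicor_regressors(cardiac_phase, order=2)

# order=2 gives 4 nuisance columns: cos(phi), sin(phi), cos(2*phi), sin(2*phi)
print(len(regs), len(regs[0]))  # -> 5 4
```

The same expansion is applied to respiratory phase, so a typical second-order model contributes eight nuisance regressors per run.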