Introduction to Beta Series Analysis in AFNI

So far we have covered functional connectivity analysis with resting-state datasets. These analyses focus on the overall timecourse of BOLD activity for a person at rest, with the assumption that correlated random fluctuations in the BOLD response between regions imply some sort of communication happening between them. Whether or not this assumption is true, and whether or not flying lobsters from Mars descend to Earth every night and play Scrabble on top of the Chrysler Building, is still a matter of intense debate.

However, the majority of studies are not resting-state experiments, but instead have participants perform a wide variety of interesting tasks, such as estimating how much a luxury car costs while surrounded by blonde supermodels with cleavages so ample you could lose a backhoe in there. If the participant is close enough to the actual price without going over, then he -

No, wait! Sorry, I was describing The Price is Right. The actual things participants do in FMRI experiments are much more bizarre, involving tasks such as listening to whales mate with each other*, or learning associations between receiving painful electrical shocks and pictures of different-colored shades of earwax. And, worst of all, they aren't even surrounded by buxom supermodels while inside the scanner**.

In any case, these studies can attempt to ask the same question raised by resting-state experiments: whether there is any sort of functional correlation between different voxels within different conditions. However, traditional analyses that average beta estimates across all trials in a condition cannot answer this, since any variance within that condition is lost once all of the individual betas are averaged together.

Beta series analysis (Rissman, Gazzaley, & D'Esposito, 2004), on the other hand, is interested in the trial-by-trial variability for each condition, under the assumption that voxels which show a similar pattern of individual trial estimates over time are interacting with each other. This is the same concept that we used for correlating timecourses of BOLD activity while a participant was at rest; all we are doing now is applying it to statistical maps where the timecourse is instead a concatenated series of betas.

Figure 1 from Rissman et al (2004). The caption is mostly self-explanatory, but note that for the beta-series correlation, we are looking at the amplitude estimates, or peaks of each waveform for each condition in each trial. Therefore, we would start stringing together betas for each condition, and the resulting timecourse for, say, the Cue condition would be similar to drawing a line between the peaks of the first waveform in each trial (the greenish-looking ones).

The first step is to put each individual trial into your model, which, mercifully, is easy to do with AFNI: instead of the -stim_times option that one normally uses, use -stim_times_IM, which will generate a beta for each individual trial of that condition. A similar process can be done in SPM and FSL, but as far as I know, each trial has to be coded and entered separately, which can take a long time; there are ways to code around this, but they are more complicated.
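To make the switch concrete, here is a minimal sketch of what the relevant lines might look like; the input dataset, timing file, basis function, and labels are all placeholders for your own:

```shell
# Hypothetical 3dDeconvolve fragment using individual-modulation betas.
# Dataset, timing file, and basis function are placeholders.
3dDeconvolve -input run_data+tlrc                        \
    -polort 3                                            \
    -num_stimts 1                                        \
    -stim_times_IM 1 stimuli/LeftPress.txt 'GAM'         \
    -stim_label 1 LeftPress                              \
    -bucket stats_IM
```

The only real change from a standard model is -stim_times_IM in place of -stim_times; everything else stays as it would in your usual script.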



Assuming that you have run your 3dDeconvolve script with the -stim_times_IM option, however, you should now have each individual beta for that condition output into your statistics dataset. The last preparation step is to extract them with a backhoe, or - if you have somehow lost yours - with a tool such as 3dbucket, which can easily extract the necessary beta weights (here I am focusing on beta weights for trials where participants made a button press with their left hand; modify this to reflect whichever condition you are interested in):

3dbucket -prefix Left_Betas stats.s204+tlrc'[15..69(2)]'

As a reminder, the quotation marks and brackets mean to do a sub-brik selection; the ellipsis means to take the sub-briks between the boundaries specified; and the 2 in parentheses means to take every other sub-brik, since the beta weights are interleaved with T-statistics, which we want to skip.
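To preview where this is heading: once you also have a seed region's beta series in a one-column file, the correlation itself is a one-liner. The mask and output names below are hypothetical:

```shell
# Average the betas within a hypothetical seed mask into one column,
# then correlate that beta series with every voxel's beta series.
3dmaskave -quiet -mask Seed_mask+tlrc Left_Betas+tlrc > Seed_Betas.1D
3dTcorr1D -prefix Corr_Left Left_Betas+tlrc Seed_Betas.1D
```

The result is a volume of correlation coefficients, one per voxel, against the seed's beta series.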



Tomorrow we will finish up how to do this for a single subject. (It's not too late to turn back!)



*Fensterwhacker, D. T. (2011). A Whale of a Night: An Investigation into the Neural Correlates of Listening to Whales Mating. Journal of Mongolian Neuropsychiatry, 2, 113-120.

**Unless you participate in one of my studies, of course.

Psychophysiological Interactions Explained

That last tutorial on psychophysiological interactions - all thirteen minutes of it - may have been a little too much to digest in one sitting. However, you also don't understand what it's like to be in the zone, with the AFNI commands darting from my fingertips and the FMRI information tumbling uncontrollably from my lips; sometimes I get so excited that I accidentally spit a little when I talk, mispronounce words like "particularly," and am unable to string two coherent thoughts together. It's what my dating coach calls the vibe.

With that in mind, I'd like to provide some more supporting commentary. Think of it as supplementary material - that everybody reads!

Psychophysiological interactions (PPIs) are correlations that change depending on a given condition, just as in statistics. (The concept is identical, actually.) For example: Will a drug affect your nervous system differently from a placebo depending on whether you have been drinking or not? Does the effectiveness of Old Spice Nocturnal Creatures Scent compared to Dr. Will Brown's Nutty Time Special Attar (tm) depend on whether your haberdasher is Saks Fifth Avenue, or a flophouse? Does the transcendental experience of devouring jar after jar of Nutella depend on whether you are alone, or noshing on it with friends? (Answer: As long as there's enough to go around, let the good times roll, baby!)

So much for the interaction part, but what about this psychophysiological thing? The "psychophysiological" term is composed of two parts, namely - and I'm not trying to insult your intelligence here - a psychological component and a physiological component. The "psychological" part of a PPI refers to which condition your participant was exposed to; perhaps the condition where she was instructed to imagine a warm, fun, exciting environment, such as reading Andy's Brain Blog with all of her sorority sisters during a pajama party on a Saturday night and trying to figure out what, exactly, makes that guy tick. Every time she imagines this, we code it with a 1; every time she imagines a contrasting condition, such as reading some other blog, we code that with a -1. And everything else gets coded as a 0.

Here are pictures to make this more concrete:

Figure 1: Sorority girls reading Andy's Brain Blog (aka Condition 1)
Figure 2: AvsBcoding with 1's, -1's, and 0's
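The coding itself is trivial to script. Assuming you have a one-label-per-TR condition file (the labels and filename here are made up for illustration), a quick awk pass turns it into the 1/-1/0 vector:

```shell
# Turn a per-TR condition list into PPI coding: A -> 1, B -> -1, else 0.
# conditions.txt is a hypothetical one-label-per-TR file; here we fake one.
printf 'A\nA\nrest\nB\nB\nrest\n' > conditions.txt
awk '{ if ($1 == "A") print "1"; else if ($1 == "B") print "-1"; else print "0" }' \
    conditions.txt > AvsBcoding.1D
cat AvsBcoding.1D   # one value per TR: 1 1 0 -1 -1 0
```

Just make sure the number of lines matches the number of TRs in your concatenated runs.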
The physiological component, on the other hand, simply refers to the underlying neural activity. Remember, however, that what we see in a typical FMRI timeseries isn't the actual neural activity; it's neural activity that has been smeared (i.e., convolved) with the hemodynamic response function, or HRF. To convert this back to the neural timeseries, we de-smear (i.e., deconvolve) the timeseries by removing the HRF - similar to a Fourier analysis if you have done one of those. (You have? NERD!)

To summarize: we assume that the observed timeseries is a convolution of the underlying neural activity with the HRF. Using NA for neural activity, HRF for the gamma response function, and (x) for the convolution operator, symbolically this looks like:

TimeSeries = NA (x) HRF

To isolate the NA part of the equation, we extract the TimeSeries from a contrast or some anatomically defined region, either using the AFNI Clusterize GUI and selecting the concatenated functional runs as an Auxiliary TimeSeries, or through a command such as 3dmaskave or 3dmaskdump. (In either case, make sure that the text file is one column, not one row; if it is one row, use 1dtranspose to fix it.) Let's call this ideal time series Seed_ts.1D.
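For the command-line route, something like the following would do it; the ROI mask and input dataset names are hypothetical:

```shell
# Extract the mean timeseries within an ROI mask as a single column.
# ROI_mask+tlrc and all_runs+tlrc are placeholders for your own files.
3dmaskave -quiet -mask ROI_mask+tlrc all_runs+tlrc > Seed_ts.1D
```

With -quiet, 3dmaskave writes one value per time point, so the output is already the one-column format we need.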

Next, we generate an HRF using the waver command and shuffle it into a text file:

waver -dt [TR] -GAM -inline 1@1 > GammaHR.1D

Check this step using 1dplot:

1dplot GammaHR.1D

And you should see something like this:

Now that we have both our TimeSeries and our HRF, we can calculate NA, the underlying neural activity, using 3dTfitter (the -FALTUNG option, by the way, is German for "convolution"):

3dTfitter -RHS Seed_ts.1D -FALTUNG GammaHR.1D Seed_Neur 012 0

This will tease apart GammaHR.1D from the TimeSeries Seed_ts.1D, and store the result in a file called Seed_Neur.1D. This should look similar to the ideal time course we generated in the first step, except now it is in "Neural time":

1dplot Seed_Neur.1D


Ausgezeichnet! All we have to do now is multiply this by the AvsBcoding we made before, and we will have our psychophysiological interaction!

1deval -a Seed_Neur.1D -b AvsBcoding.1D -expr 'a*b' > Inter_neu.1D

This will generate a file, Inter_neu.1D, that is our interaction at the neural level:

1dplot Inter_neu.1D

And, finally, we take it back into "BOLD time" by convolving this timeseries with the canonical HRF:

waver -dt 2 -GAM -input Inter_neu.1D -numout [numTRs] > Inter_ts.1D

1dplot Inter_ts.1D

Let's pause and review what we've done so far:
  1. First, we extracted a timeseries from a voxel or group of voxels somewhere in the brain, either defined by a contrast or an anatomical region (there are several other ways to do this, of course; these are just the two most popular);
  2. We then created a list of 1's, -1's, and 0's for our condition, contrasting condition, and all the other stuff, with one number for each TR;
  3. Next, we created a Gamma basis function using the waver command, simulating the canonical HRF;
  4. We used 3dTfitter to decouple the HRF from the timeseries, moving from "BOLD time" to "Neural time";
  5. We created our psychophysiological interaction by multiplying our AvsBcoding by the Neural TimeSeries; and
  6. Convolved this interaction with the HRF in order to move back to "BOLD time".

Our last step is to enter this interaction and the region's timeseries into our model. Since we are entering this into the model directly as is, and since we are not convolving it with anything, we will use the -stim_file option. Also remember not to use the -stim_base option, which is usually paired with -stim_file; we are still estimating parameters for this regressor, not just including it in the baseline model.

Below, I have slightly modified the 3dDeconvolve script used in the AFNI_data6 directory. The only two lines that have been changed are the stim_file options for regressors 9 and 10:

#!/bin/tcsh

set subj = "FT"

# run the regression analysis
3dDeconvolve -input pb04.$subj.r*.scale+orig.HEAD                       \
    -polort 3                                                           \
    -num_stimts 10                                                       \
    -stim_times 1 stimuli/AV1_vis.txt 'BLOCK(20,1)'                     \
    -stim_label 1 Vrel                                                  \
    -stim_times 2 stimuli/AV2_aud.txt 'BLOCK(20,1)'                     \
    -stim_label 2 Arel                                                  \
    -stim_file 3 motion_demean.1D'[0]' -stim_base 3 -stim_label 3 roll  \
    -stim_file 4 motion_demean.1D'[1]' -stim_base 4 -stim_label 4 pitch \
    -stim_file 5 motion_demean.1D'[2]' -stim_base 5 -stim_label 5 yaw   \
    -stim_file 6 motion_demean.1D'[3]' -stim_base 6 -stim_label 6 dS    \
    -stim_file 7 motion_demean.1D'[4]' -stim_base 7 -stim_label 7 dL    \
    -stim_file 8 motion_demean.1D'[5]' -stim_base 8 -stim_label 8 dP    \
    -stim_file 9 Seed_ts.1D -stim_label 9 Seed_ts                       \
    -stim_file 10 Inter_ts.1D -stim_label 10 Inter_ts                   \
    -gltsym 'SYM: Vrel -Arel' -x1D X.xmat.1D -tout -xjpeg X.jpg         \
    -bucket stats_Inter.$subj -rout



This will generate a beta weight for the interaction term, which can then be taken to a group-level analysis using your command of choice. You can also do this with the R-values, but be aware that 3dDeconvolve's -rout option generates R-squared values; you will need to take the square root of these and then apply Fisher's r-to-z transform before you do a group analysis. Because of the extra steps involved, and because beta coefficients are much more straightforward to interpret, I recommend sticking with the beta weights.
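If you do go the R-value route, the square root and Fisher transform can be done in a single 3dcalc pass; note that the sub-brik label below is a placeholder (check 3dinfo -verb for the actual label in your bucket):

```shell
# Convert an R-squared sub-brik to Fisher z: z = atanh(sqrt(R^2)).
# 'Inter_ts_R^2' is a hypothetical label -- verify it with 3dinfo -verb.
3dcalc -a stats_Inter.FT+orig'[Inter_ts_R^2]' \
       -expr 'atanh(sqrt(a))'                 \
       -prefix Inter_z.FT
```

The resulting z-map is then suitable for your group-level test of choice.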

Unless you're a real man like my dating coach, who recommends alpha weights.

Introduction to SPM Marsbar

Marsbar is an extraction tool designed to output beta estimates or contrast estimates from a region of interest (ROI), a cluster of voxels defined either anatomically, or through an independent contrast. I covered this in an earlier post, but thought that this would lend itself better to a bright, vibrant, visual tutorial, rather than the musty arrow charts.

How to define ROIs from coordinates

How to define ROIs from other contrasts

AFNI Bootcamp: Day 3

Today, Bob began with an explanation of how AFNI’s amplitude modulation (cf. SPM’s parametric modulation) differs from other software approaches. For one, not only are the estimates for each parametric modulator computed, but so is the estimate of the beta itself. This leads to estimates of what variance can be explained by the parametric modulators, above and beyond the beta itself. Then, the beta estimates for those parametric modulators can be carried to the second level, just like any other parameter estimate.

To give a concrete example, take an experiment that presents the subject with a gamble that varies on three specific dimensions: probability of win, magnitude of win, and the variance of the gamble. Let us say that the onset of the gamble occurred at 42 seconds into the run, and that the dimensions were 0.7, 10, and 23. In this case, the onset time of the gamble would be parametrically modulated by these dimensions, and would be represented in a timing file as 42*0.7,10,23.  [Insert AM_Example.jpg here]. Notice that the resulting parametric modulators are mean-centered, here resulting in negative values for probability and variance. The purpose of the amplitude modulation is to see what proportion of the variance in the BOLD response is due to these additional variables driving the amplitude of the BOLD signal; if it is a particularly good fit, then the resulting t-statistic for that beta weight will be relatively high.
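As a sketch, one run's line of such a timing file with two gambles might look like the following (the second trial's onset and values are made up for illustration):

```
42*0.7,10,23 108*0.4,25,12
```

Each entry is onset*modulator1,modulator2,modulator3, with one line per run; the mean-centering of the modulators is handled for you at the modeling stage.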

Regarding this, 3dREMLfit was mentioned yet again, as Bob pointed out how it takes into account both the beta estimate and the variance surrounding that estimate (i.e., the beta estimate’s associated t-statistic). A high beta estimate does not necessarily imply a high t-statistic, and vice versa, which is why it would make sense to include this information at the group level. However, none of the AFNI developers that I talked to definitively stated that 3dMEMA or the least-squares method was preferable; that is entirely up to the user. I examined this with my own data, looking at a contrast of two regressors at the 2nd level using both OLSQ and 3dMEMA. As the resulting pictures show, both methods show patterns of activation in the rostral ACC (similar to what I was getting with SPM), although 3dMEMA produces an activation map that passes cluster correction, while OLSQ does not. Which should you use? I don’t know. I suppose you can try both, and whatever gives you the answer that you want, you should use. If you use 3dMEMA and it doesn’t give you the result that you want, you can just claim that it’s too unreliable to be used just yet, and so make yourself feel better about using a least-squares approach.

After a short break, Ziad discussed AFNI’s way of dealing with resting-state data via a program called RETROICOR (short for retrospective image-based correction), which accounts for heart rate and respiration variability for each subject, something critically important when interpreting a resting-state dataset. Because the relationship between physiological noise (especially heart rate) and the BOLD signal is poorly understood, it is reasonable to covary this activation out, in order to be able to claim that what you are looking at is true differences in neural activation between conditions or groups or whatever you are investigating (although using the term “neural activation” is a bit of a stretch here). Apparently this is not done that often, and neither is accounting for motion, which can be a huge confound as well. All of the confounds listed above can lead to small effect sizes biased in a certain direction, but can do so consistently, leading to significant activation at the group level that has nothing to do with the participant’s true resting state (although again, the term “resting state” is a bit of a stretch, since you have no idea what the participant is doing, and there are no tasks to regress against to explain the timeseries data). In any case, this is an area I know very little about, but the potential pitfalls seem serious enough to warrant staying away from it unless I have a really good reason for doing it.