What's in an SPM.mat File?

A while back I attempted to write up a document that summarized everything someone would need to know about using SPM. It was called, I believe, the SPM User's Ultimate Guide: For Alpha Males, By Alpha Males. The title was perhaps a little ambitious, since I stopped updating it after only a few dozen pages. In particular I remember a section where I attempted to tease apart everything contained within the SPM.mat files output after first- and second-level analyses. To me this was the most important section, since complex model setups could be executed fairly easily by someone with a good understanding of Matlab code, but I never completed it.

Fortunately, however, there is someone out there who already vivisected the SPM.mat file and publicly displayed its gruesome remains in the online piazza. Researcher Nikki Sullivan has written an excellent short summary of what each field means, broken down into neat, easily digestible categories. You can find it on her website here, and I have also copied and pasted the information below. It makes an excellent shorthand reference, especially if you've forgotten, for example, where contrast weights are stored in the structure, and don't want to go through the tedium of typing SPM, then SPM.xY, then SPM.xY.VY, and so on.

But if you've forgotten how to rock that body? Girl, ain't no remedy for that.


========================

details on experiment:

 

SPM.xY.RT - TR length (RT = "repetition time")
SPM.xY.P - matrix of file names
SPM.xY.VY - # of runs x 1 struct array of mapped image volumes (.img file info)
SPM.modality - the data you're using (PET, FMRI, EEG)
SPM.stats.[modality].UFp - critical F-threshold for selecting voxels over which the non-sphericity is estimated (if required) [default: 0.001]
SPM.stats.maxres - maximum number of residual images for smoothness estimation
SPM.stats.maxmem - maximum amount of data processed at a time (in bytes)
SPM.SPMid - version of SPM used
SPM.swd - directory for SPM.mat and img files. default is pwd

basis function:

 

SPM.xBF.name - name of basis function
SPM.xBF.length - length in seconds of basis
SPM.xBF.order - order of basis set
SPM.xBF.T - number of subdivisions of TR
SPM.xBF.T0 - first time bin (see slice timing)
SPM.xBF.UNITS - options: 'scans'|'secs' for onsets
SPM.xBF.Volterra - order of convolution
SPM.xBF.dt - length of time bin in seconds
SPM.xBF.bf - basis set matrix

Session Structure:

 

user-specified covariates/regressors (e.g. motion)
SPM.Sess([session]).C.C - [n x c double] covariate matrix (n = # scans, c = # covariates)
SPM.Sess([session]).C.name - names of covariates
conditions & modulators specified - i.e. input structure array
SPM.Sess([session]).U(condition).dt - time bin length {seconds}
SPM.Sess([session]).U(condition).name - names of conditions
SPM.Sess([session]).U(condition).ons - onsets for condition's trials
SPM.Sess([session]).U(condition).dur - durations for condition's trials
SPM.Sess([session]).U(condition).u - (t x j) inputs or stimulus function matrix
SPM.Sess([session]).U(condition).pst - (1 x k) peri-stimulus times (seconds)
parameters/modulators specified
SPM.Sess([session]).U(condition).P - parameter structure/matrix
SPM.Sess([session]).U(condition).P.name - names of modulators/parameters
SPM.Sess([session]).U(condition).P.h - polynomial order of modulating parameter (0 = none)
SPM.Sess([session]).U(condition).P.P - vector of modulating values
SPM.Sess([session]).U(condition).P.i - sub-indices of U(i).u for plotting
scan indices for sessions
SPM.Sess([session]).row
effect indices for sessions
SPM.Sess([session]).col
F Contrast information for input-specific effects
SPM.Sess([session]).Fc
SPM.Sess([session]).Fc.i - F Contrast columns for input-specific effects
SPM.Sess([session]).Fc.name - F Contrast names for input-specific effects
SPM.nscan([session]) - number of scans per session (or if e.g. a t-test, total number of con*.img files)

global variate/normalization details

 

SPM.xGX.iGXcalc - either "none" or "scaling"; for fMRI this is usually "none" (no global normalization). If it is "scaling", see spm_fmri_spm_ui for the parameters that will then appear under SPM.xGX.

design matrix information:

 

SPM.xX.X - Design matrix (raw, not temporally smoothed)
SPM.xX.name - cellstr of parameter names corresponding to columns of design matrix
SPM.xX.I - nScan x 4 matrix of factor level indicators. first column is the replication number; other columns are the levels of each experimental factor.
SPM.xX.iH - vector of H partition (indicator variables) indices
SPM.xX.iC - vector of C partition (covariates) indices
SPM.xX.iB - vector of B partition (block effects) indices
SPM.xX.iG - vector of G partition (nuisance variables) indices
SPM.xX.K - cell. low frequency confound: high-pass cutoff (secs)
SPM.xX.K.HParam - low frequency cutoff value
SPM.xX.K.X0 - cosines (high-pass filter)
SPM.xX.W - Optional whitening/weighting matrix used to give weighted least squares estimates (WLS).
  • if not specified, spm_spm will set this to whiten the data and render the OLS estimates maximum likelihood, i.e. W*W' = inv(xVi.V).
SPM.xX.xKXs - space structure for K*W*X, the 'filtered and whitened' design matrix
SPM.xX.xKXs.X - Mtx, the matrix itself (trials in rows, betas in columns)
SPM.xX.xKXs.tol - tolerance
SPM.xX.xKXs.ds - vectors of singular values
SPM.xX.xKXs.u - u as in X = u*diag(ds)*v'
SPM.xX.xKXs.v - v as in X = u*diag(ds)*v'
SPM.xX.xKXs.rk - rank
SPM.xX.xKXs.oP - orthogonal projector on X
SPM.xX.xKXs.oPp - orthogonal projector on X'
SPM.xX.xKXs.ups - space in which this one is embedded
SPM.xX.xKXs.sus - subspace
SPM.xX.pKX - pseudoinverse of K*W*X, computed by spm_sp
SPM.xX.Bcov - xX.pKX*xX.V*xX.pKX' - variance-covariance matrix of the parameter estimates (when multiplied by the voxel-specific hyperparameter ResMS, where ResMS = ResSS/xX.trRV)
SPM.xX.trRV - trace of R*V
SPM.xX.trRVRV - trace of RVRV
SPM.xX.erdf - effective residual degrees of freedom (trRV^2/trRVRV)
SPM.xX.nKX - design matrix (xX.xKXs.X) scaled for display (see spm_DesMtx('sca',... for details)
SPM.xX.sF - cellstr of factor names (columns in SPM.xX.I, i think)
SPM.xX.D - struct, design definition
SPM.xX.xVi - correlation constraints (see non-sphericity below)
SPM.xC - struct array of covariate info

header info

 

SPM.P - a matrix of filenames
SPM.V - a vector of structures containing image volume information.
SPM.V.fname - the filename of the image.
SPM.V.dim - the x, y and z dimensions of the volume
SPM.V.dt - A 1x2 array. First element is datatype (see spm_type). The second is 1 or 0 depending on the endian-ness.
SPM.V.mat - a 4x4 affine transformation matrix mapping from voxel coordinates to real world coordinates.
SPM.V.pinfo - plane info for each plane of the volume.
SPM.V.pinfo(1,:) - scale for each plane
SPM.V.pinfo(2,:) - offset for each plane. The true voxel intensities of the jth image are given by: val*V.pinfo(1,j) + V.pinfo(2,j)
SPM.V.pinfo(3,:) - offset into image (in bytes). If the size of pinfo is 3x1, then the volume is assumed to be contiguous and each plane has the same scalefactor and offset.

structure describing intrinsic temporal non-sphericity

 

SPM.xVi.I - typically the same as SPM.xX.I
SPM.xVi.h - hyperparameters
SPM.xVi.V - xVi.h(1)*xVi.Vi{1} + xVi.h(2)*xVi.Vi{2} + ...
SPM.xVi.Cy - spatially whitened <Y*Y'> (used by ReML to estimate h)
SPM.xVi.CY - <(Y - <Y>)*(Y - <Y>)'> (used by spm_spm_Bayes)
SPM.xVi.Vi - array of non-sphericity components

  • defaults to {speye(size(xX.X,1))} - i.i.d.
  • specifying a cell array of constraints (Qi)
  • these constraints invoke spm_reml to estimate hyperparameters assuming V is constant over voxels, providing a highly precise estimate of xX.V
SPM.xVi.form - form of non-sphericity (either 'none' or 'AR(1)')
SPM.xX.V - Optional non-sphericity matrix. Cov(e) = sigma^2*V.
  • If not specified, spm_spm will compute this using a 1st pass to identify significant voxels over which to estimate V. A 2nd pass is then used to re-estimate the parameters with WLS and save the ML estimates (unless xX.W is already specified).
 

filtering information

 

SPM.K - filter matrix or filtered structure:
  • SPM.K(s) - struct array containing partition-specific specifications
  • SPM.K(s).RT - observation interval in seconds
  • SPM.K(s).row - row of Y constituting block/partitions
  • SPM.K(s).HParam - cut-off period in seconds
  • SPM.K(s).X0 - low frequencies to be removed (DCT)
  • SPM.Y - filtered data matrix
 

masking information

 

SPM.xM - Structure containing masking information, or a simple column vector of thresholds corresponding to the images in VY.
SPM.xM.T - [n x 1 double] - Masking index
SPM.xM.TH - nVar x nScan matrix of analysis thresholds, one per image
SPM.xM.I - Implicit masking (0 --> none; 1 --> implicit zero/NaN mask)
SPM.xM.VM - struct array of mapped explicit mask image volumes
SPM.xM.xs - [1x1 struct] cellstr description

design information (self-explanatory names, for once) 

 

SPM.xsDes.Basis_functions - type of basis function
SPM.xsDes.Number_of_sessions
SPM.xsDes.Trials_per_session
SPM.xsDes.Interscan_interval
SPM.xsDes.High_pass_Filter
SPM.xsDes.Global_calculation
SPM.xsDes.Grand_mean_scaling
SPM.xsDes.Global_normalisation

details on scannerdata (e.g. smoothness) 

 

SPM.xVol - structure containing details of volume analyzed
SPM.xVol.M - 4x4 voxel --> mm transformation matrix
SPM.xVol.iM - 4x4 mm --> voxel transformation matrix
SPM.xVol.DIM - image dimensions - column vector (in voxels)
SPM.xVol.XYZ - 3 x S vector of in-mask voxel coordinates
SPM.xVol.S - Lebesgue measure or volume (in voxels)
SPM.xVol.R - vector of resel counts (in resels)
SPM.xVol.FWHM - Smoothness of components - FWHM, (in voxels)

info on beta files:

 

SPM.Vbeta - struct array of beta image handles
SPM.Vbeta.fname - beta img file names
SPM.Vbeta.descrip - names for each beta file

info on variance of the error

 

SPM.VResMS - file struct of ResMS image handle
SPM.VResMS.fname - variance of error file name

info on mask

 

SPM.VM - file struct of Mask image handle
SPM.VM.fname - name of mask img file

contrast details (added after running contrasts)

 

SPM.xCon - Contrast definitions structure array
  • (see also spm_FcUtil.m for structure, rules & handling)
SPM.xCon.name - Contrast name
SPM.xCon.STAT - Statistic indicator character ('T', 'F' or 'P')
SPM.xCon.c - Contrast weights (column vector contrasts)
SPM.xCon.X0 - Reduced design matrix data (spans design space under Ho)
  • Stored as coordinates in the orthogonal basis of xX.X from spm_sp
  • (Matrix in SPM99b)
  • Extract using X0 = spm_FcUtil('X0',...
SPM.xCon.iX0 - Indicates how contrast was specified:
  • If by columns for reduced design matrix then iX0 contains the column indices.
  • Otherwise, it's a string containing the spm_FcUtil 'Set' action (usually one of {'c','c+','X0'}); iX0 then defines the indices of the columns that will not be tested. Can be empty.
SPM.xCon.X1o - Remaining design space data (X1o is orthogonal to X0)
  • Stored as coordinates in the orthogonal basis of xX.X from spm_sp (Matrix in SPM99b). Extract using X1o = spm_FcUtil('X1o',...
SPM.xCon.eidf - Effective interest degrees of freedom (numerator df)
  • Or effect-size threshold for Posterior probability
SPM.xCon.Vcon - Name of contrast (for 'T's) or ESS (for 'F's) image
SPM.xCon.Vspm - Name of SPM image

Multi-Variate Pattern Analysis (MVPA): An Introduction

Alert reader and fellow brain enthusiast Keith Bartley loves MVPA, and he's not ashamed to admit it. In fact, he loves MVPA so much that he was willing to share what he knows with all of us, as well as a couple of MVPA libraries he has ported from Matlab to Octave.

Why don't I write a post on MVPA, you ask? Maybe because I don't know doodley-squat about it, and maybe because I'm man enough to admit what I don't know. In my salad days when I was green in judgment I used to believe that I needed to know everything; nowadays, I get people to do that for me. This leaves me more time for gentlemanly activities, such as hunting, riding, shooting, fishing, and wenching.

So without further delay, here it is: An introduction to MVPA for those who are curious about it and maybe even want to do it themselves, but aren't exactly sure where to start. Be sure to check out the links at the end of the post for Keith's work, as well as the Octave distributions he has been working on.


==================================

Well, it had to happen sometime. Time to call out the posers. Those who would bend neuroscience to their evil will. Where to start? How about the New York Times. Oh, New York Times. You've managed to play both sides of this coin equally well. Creating a neuroimaging fallout with bad reports, then throwing out neuroscience studies altogether ("The brain is not the mind" <- What? Stop. Wait, who are you again? Just stop). What's to be said for those few that rise above the absolutes of "newspaper neuroscience", and simply report findings, not headline grabbing inferences? Should we be opposed to gained knowledge, or seek to better understand the details? Boring you say? Pfft. Then you have yet to know the neuroscience I know.

As an example, a recent NYT article suggested that activation in both the primary auditory and visual cortices in response to an iPhone ring alone demonstrated that people were seeing the iPhone in their mind's eye.

...
This is a completely RIDICULOUS notion to suggest from univariate GLM BOLD response alone. But can we ever really know what a person sees in their mind's eye or hears in their mind's ear? Remember, good science happens in the setup, not the interpretation.

You may have heard of an increasingly implemented form of machine classification in brain-imaging dubbed Multi-Variate Pattern Analysis (MVPA). If you prefer the term "mind reading", sure go ahead, but remember to bring your scientific floaties, lest you drown from the lack of understanding as it escapes the proverbial lungs of your deoxygenated mind. Before you know it, you'll be believing things like this:


Verbose laughter ensues. Close, but no Vicodin. How does MVPA of BOLD responses really work?


Source: Kaplan & Meyer, 2011

By first training a program to recognize a pattern, in much the same way you might visually recognize a pattern, we can quantitatively measure the similarity or dissimilarity of patterns in response to differing conditions. But what does this have to do with knowing what a person might see in their mind? I recently compiled some soundless videos for a demonstration of just this. Watch them fullscreen for the greatest effect.



It's likely that the sound of coins clanking, a car crashing, or Captain Kirk screaming "KAHHHHN!!!" registered in your mind, and thus created distinct neural representations in your primary auditory cortex. If we train a machine classifier to recognize your brain's patterned neural response to the sound, we can then ask the classifier to guess which among all these stimuli your brain is distinctly representing in the absence of sound. If the classifier succeeds at a rate statistically better than chance, as established against a null distribution of permuted labels, THEN we can MAYBE begin to suggest a top-down neural mechanism of alternate sensory modality stimulation.

BUT, I promised you details, and details you shall have!

As a fan of Princeton's MVPA Matlab toolbox, opening it up for people without access to a Matlab license seemed like the next logical step, prompting me to convert the toolbox, as well as its implementations of SPM and AFNI-for-MATLAB, to one unitary Octave distribution. Below is a link to my website and Github page. There you will find setup instructions as well as tutorial data and a script that Dr. Marc Coutanche (see his rigorously awesome research in MVPA here) and I implemented in a recent workshop. Coded are step-by-step instructions to complete what is a research-caliber implementation of pattern analysis. Learn it, live it, repeat it, and finally bask in your ability to understand something that would make your grandma's head explode (see the Flynn Effect for a possible explanation there). As with anything I post, if you have questions, comments, or criticisms, feel free to message me through any medium of communication ...unless you are a psychic. I'd like to keep my mind's ear to myself.


De Profundis

Since school started back up a little over a month ago, I've been teaching my own introductory psychology course - the first one I've ever taught. It's been great; when I do it, I feel happy, energized, useful. Not only do I get to teach a couple of hundred persons, some of whom really like the class, but I even get to write my own test questions. If ten years ago you told me I would be lecturing in front of college students and creating exams, I would've told you to go kick rocks. But oddly enough, I enjoy it. For instance, I get to come up with test questions like this:

Sigmund Freud was:

A) correct
B) completely insane
C) a cocaine abuser
D) all of the above


Understandably, the writing here took a hit, as I left the mistress of my youth, blogging, for the wife of mine age, teaching. I figured that whatever basics someone needed to know they could find somewhere in the archives. I thought that I was probably spending too much time on it anyway, feeling a distinct negative correlation between the amount of material posted here and my actual productivity as a researcher. Maybe once in a while, I thought, I would post about relationship advice, or books I was reading, or possibly a sequel to my post about erotic neuroimaging journal titles. In any case, though, I thought that I had written most of what I wanted to. Quod scripsi, scripsi.

However, I got a jolt a couple of days ago when the government shut down. Normally this wouldn't affect me all that much, except for the AFNI developers sending out a warning that the entire website would be shut down and the message boards closed until further notice. Apparently it's illegal for federal employees to use government emails for correspondence during a shutdown. No more AFNI support; no more new AFNI materials; no more infamous Bob Cox insults, put-downs, and slams. All gone.

Which made me realize, God forbid, that AFNI won't be around forever. I doubt that even fMRI will be all that popular twenty, thirty years from now, as technology progresses and we find some new way to image the brain yet still get significant results no matter how badly we screw up. But the need for understanding the processes will still be there; people will always want easy, accessible introductions for learning new tools and new methods. So I will continue to do that, no matter how incomplete my understanding, no matter how many miles I run, no matter how many jars of Nutella I devour. And that's a campaign promise.

At least one researcher out there has offered to write about topics that I am either entirely ignorant of, or only have a limited grasp. Starting soon, I will begin to interleave some of those posts with what I write to make the blog more centralized for information relating to fMRI. In the meantime, if anyone reading wants to contribute, just let me know. We'll see how it works out.




Real Ultimate Power

I once read an article by a man who guaranteed that by following a series of simple steps, your blog would become famous overnight and make you filthy rich. He pointed to himself as a case study, having founded a blog devoted to nutrition advice and dietary tips for Caucasian males. He called it White Man's Secret. Within weeks, he claimed, he had millions of unique hits every single day from all over the world. Tens of thousands of women left their husbands or lovers and became his willing slaves. He was elected president of his Rotary chapter and was voted Best Table Topics Speaker at his local Toastmasters five times in a row. And he had done it all by following a series of simple steps that anybody could do.

Most important, he said - or screamed, rather, since everything was in capital letters - was to never apologize for failing to update regularly. People wouldn't even notice it, he said, unless you called attention to it. He recommended treating readers like any one of the members of his now-overflowing seraglio - by playing with their emotions through deliberate neglect. Then, when they were at their most desperate, you would give them some attention. Not much. Just enough for them to become dependent on you.

This, he said, was an expression of power. Real Ultimate Power.

________________________________________


My reason for neglect isn't nearly as insidious. It's because I've been running too much.

A few weeks ago when I visited home I sat down with my dad and a calendar and we made a training program leading up to the Indianapolis marathon in November. At the beginning of each week he wrote the number of weeks left until race day. So, "12," "11," "10," and so on, until "0." Then in each cell of the calendar we calculated how many workouts and how many miles I would need to run to fill a weekly quota. We determined this using Jack Daniels' Running Formula.

I'm not talking about the stuff created by yeast poop. Jack Daniels is a real person. He is a coach with a doctorate in exercise physiology, and fifteen years ago he published a book called Daniels' Running Formula. He also popularized a measurement called VDOT, an estimate of VO2max - the maximum rate of oxygen the body can use. I have had my VDOT measured a couple of times by exercise physiologists, and both times the process was extremely uncomfortable. They put you on a treadmill and stick a tube in your mouth and then pinch your nostrils closed with a clothespin. You then start running, and the treadmill speed and incline increase every minute or so until you can't run anymore.

The last time I did this, the researchers told me that my VDOT was seventy-point-three. Jack Daniels has a specific workout plan for nearly any VDOT number. And so seventy-point-three was the number my dad and I were using when we filled out the calendar.

________________________________________


Here is what Jack Daniels says I should do in a typical week:

Sunday: 13-15 miles at marathon pace (5:40/mile); 2 miles warmup, 2 miles cooldown.
Monday: 10 miles
Tuesday: 12 miles in the morning, 4 miles in the evening
Wednesday: 8 mile recovery run
Thursday: 2 mile warmup. 2x12 minutes of anaerobic threshold pace (5:20/mile). 4 mile recovery jog followed by another 12 minutes anaerobic threshold pace (5:20/mile).
Friday: 12 miles
Saturday: 12 miles in the morning, 6 miles in the evening


When I read this I thought that Jack Daniels was nuttier than squirrel shit. Obviously there are some people who disagree; there are some out there who think this is easy and no big deal. Fly them.

I've stuck with it nonetheless; and the odd thing is, it seems to be working. My runs are getting easier; workouts are getting better; I feel more prepared and more motivated. The only problem is that during the day I'm too tired to do anything else. I get up in the morning around six and am usually done around seven-thirty. For the next hour or so I stretch a little bit and drink chocolate soy milk straight out of the carton. I put a towel on the living room floor and lie there sweating, and sometimes put on music I checked out from the library. For the past month or so I've been listening primarily to jazz - Duke Ellington, Earl Klugh, Ella Fitzgerald, Bill Evans, Thelonious Monk, to name a few. I listen to some classical stuff as well. This morning as I lay on my sweat-soaked towel on the floor, I listened to a sonata by Ravel. It was heavenly.

Then I go to work and feel the slow burn in my legs for the rest of the day. In a way it feels almost pleasurable. And then when I get home sometimes I run again. Sometimes I will see other people out there running, and wonder about whether they are faster than I am.

Some would call this insecurity. They're probably right. In any case, I like to call it competitiveness.

________________________________________


Speaking of competitiveness: A couple of years ago a girl told me that one of her previous boyfriends had run a marathon in two hours and thirty-two minutes. At the time, that was faster than I had run a marathon, and so naturally I became insanely jealous. It didn't help that I was hopelessly in love with this girl. The fact that she had dated someone with a faster marathon time was an insult to me.

For the next nine months I trained like a banshee and that summer I ran a marathon in two hours and thirty minutes. Somewhere in the back of my mind I was convinced that when I told her my marathon time, she would rip off all her clothes with uncontrollable passion.

That never happened. The next time she saw me she didn't even take off her sunglasses. Things have been pretty much the same.

________________________________________

By the way: When someone wants to run with you, wait for a really cold morning and tell them that you want to run then. At the last minute they will come up with an excuse not to go.

You know why? Because running in the cold sucks.


Pro Tip: Changing SPM Output Files



A couple of weeks ago, alert reader Paula DiNoto asked about how to output SPM files in NIFTI format instead of ANALYZE (.hdr/.img). Apparently, for those of you bleeding-edge adrenaline junkies who just have to have the latest thing, SPM12 doesn't give you this option in the interface. SPM5, however - you know, the one that the rest of us barbarians use - still does. (I'll save my gloating for when I meet you face-to-face at the conferences.)

Getting back to the story, I had to admit to Paula that I was, for the first time in my life, unable to figure out how to troubleshoot the problem. However, this was merely a ruse to get Paula motivated; she ended up solving it herself by editing "spm_defaults.m" and setting the defaults.images.format field to 'nii'. Obviously I already knew this, but I was pleased to see her figure it out for herself.
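For the terminally curious, the same change can be scripted from the shell. This is only a sketch: the defaults file below is a stand-in created on the spot, and the field name simply follows Paula's fix - check it against your own SPM version before trusting it.

```shell
#!/bin/sh
# Sketch only: flip the output format default in a stand-in spm_defaults.m.
# The field name follows the fix described above; verify it for your SPM version.
echo "defaults.images.format = 'img';" > spm_defaults.m   # mock defaults file
cp spm_defaults.m spm_defaults.m.bak                      # keep a backup
sed "s/'img'/'nii'/" spm_defaults.m.bak > spm_defaults.m
cat spm_defaults.m
```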


Thanks again to Paula, whom I once called, in a moment of self-forgetfulness, the most wonderful person I had ever met.

Back Next Week



Fellow brainbloggers,

I'll be going home for a few days - running a few races, seeing a few friends, eating a few pizzas, kicking back a few Shirley Temples. Getting that messed up on Sprite and grenadine means that updates may be scarce. When I return, though, I will be going to FMRI pound-town on those resting-state analyses and hit you with the full report as promised.


Eric my man, take us outta here.

Psychophysiological Interactions Explained

That last tutorial on psychophysiological interactions - all thirteen minutes of it - may have been a little too much to digest in one sitting. However, you also don't understand what it's like to be in the zone, with the AFNI commands darting from my fingertips and the FMRI information tumbling uncontrollably from my lips; sometimes I get so excited that I accidentally spit a little when I talk, mispronounce words like "particularly," and am unable to string two coherent thoughts together. It's what my dating coach calls the vibe.

With that in mind, I'd like to provide some more supporting commentary. Think of it as supplementary material - that everybody reads!

Psychophysiological interactions (PPIs) are correlations that change depending on a given condition, just as in statistics. (The concept is identical, actually.) For example: Will a drug affect your nervous system differently from a placebo depending on whether you have been drinking or not? Does the effectiveness of Old Spice Nocturnal Creatures Scent compared to Dr. Will Brown's Nutty Time Special Attar (tm) depend on whether your haberdasher is Saks Fifth Avenue, or flophouse? Does the transcendental experience of devouring jar after jar of Nutella depend on whether you are alone, or noshing on it with friends? (Answer: As long as there's enough to go around, let the good times roll, baby!)

So much for the interaction part, but what about this psychophysiological thing? The "psychophysiological" term is composed of two parts, namely - and I'm not trying to insult your intelligence here - a psychological component and a physiological component. The "psychological" part of a PPI refers to which condition your participant was exposed to; perhaps the condition where she was instructed to imagine a warm, fun, exciting environment, such as reading Andy's Brain Blog with all of her sorority sisters during a pajama party on a Saturday night and trying to figure out what, exactly, makes that guy tick. Every time she imagines this, we code it with a 1; every time she imagines a contrasting condition, such as reading some other blog, we code that with a -1. And everything else gets coded as a 0.

Here are pictures to make this more concrete:

Figure 1: Sorority girls reading Andy's Brain Blog (aka Condition 1)
Figure 2: AvsBcoding with 1's, -1's, and 0's
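To make Figure 2 concrete, here is a toy version of the coding file built by hand in the shell; the TR-by-TR assignments are invented purely for illustration.

```shell
#!/bin/sh
# Toy AvsBcoding.1D: one value per TR (values invented for illustration).
# 1 = condition A, -1 = condition B, 0 = everything else.
for v in 0 1 1 0 -1 -1 0 1 1 0; do
  echo "$v"
done > AvsBcoding.1D
wc -l < AvsBcoding.1D   # 10 lines, one per TR
```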
The physiological component, on the other hand, simply refers to the underlying neural activity. Remember, however, that what we see in a typical FMRI timeseries isn't the actual neural activity; it's neural activity that has been smeared (i.e., convolved) with the hemodynamic response function, or HRF. To convert this back to the neural timeseries, we de-smear (i.e., deconvolve) the timeseries by removing the HRF - similar to a Fourier analysis if you have done one of those. (You have? NERD!)

To summarize: we assume that the observed timeseries is a convolution of the underlying neural activity with the HRF. Using NA for neural activity, HRF for the gamma response function, and (x) for the convolution operator, symbolically this looks like:

TimeSeries = NA (x) HRF
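A toy numerical version of that equation shows the smearing. The numbers are made up, and this is plain awk rather than anything AFNI-specific - just a demonstration that convolving a spiky "neural" series with a short kernel spreads each event out in time:

```shell
#!/bin/sh
# Toy discrete convolution (invented numbers): the observed BOLD series is the
# neural series smeared out by the HRF kernel.
awk 'BEGIN {
  split("1 0 0 1",    na,  " ")   # toy neural activity, one value per TR
  split("0.5 1 0.25", hrf, " ")   # toy 3-point HRF kernel
  for (t = 1; t <= 6; t++) {      # output length = 4 + 3 - 1
    s = 0
    for (k = 1; k <= 3; k++)
      if (t-k+1 >= 1 && t-k+1 <= 4) s += hrf[k] * na[t-k+1]
    printf "%g\n", s
  }
}' > Conv_demo.1D
cat Conv_demo.1D   # each neural event is now spread over three TRs
```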

To isolate the NA part of the equation, we extract the TimeSeries from a contrast or some anatomically defined region, either using the AFNI Clusterize GUI and selecting the concatenated functional runs as an Auxiliary TimeSeries, or through a command such as 3dmaskave or 3dmaskdump. (In either case, make sure that the text file is one column, not one row; if it is one row, use 1dtranspose to fix it.) Let's call this ideal time series Seed_ts.1D.
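That row-versus-column check matters more often than you'd think. Outside of AFNI you can see (and fix) the problem with plain shell tools; the values here are invented, and this is only a stand-in for what 1dtranspose does:

```shell
#!/bin/sh
# If the extracted timeseries came out as one row (invented values)...
printf '0.5 1.2 0.9 0.3\n' > Seed_row.1D
# ...break it into one value per line, as 1dtranspose would:
tr -s ' ' '\n' < Seed_row.1D | sed '/^$/d' > Seed_ts.1D
wc -l < Seed_ts.1D   # 4 lines: one value per TR
```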

Next, we generate an HRF using the waver command and shuffle it into a text file:

waver -dt [TR] -GAM -inline 1@1 > GammaHR.1D

Check this step using 1dplot:

1dplot GammaHR.1D

And you should see something like this:

Now that we have both our TimeSeries and our HRF, we can calculate NA, the underlying neural activity, using 3dTfitter (the -Faltung option, by the way, is German for "deconvolution"):

3dTfitter -RHS Seed_ts.1D -FALTUNG GammaHR.1D Seed_Neur 012 0

This will tease apart GammaHR.1D from the TimeSeries Seed_ts.1D, and store the result in a file called Seed_Neur.1D. This should look similar to the ideal time course we generated in the first step, except now it is in "Neural time":

1dplot Seed_Neur.1D


Ausgezeichnet! All we have to do now is multiply this by the AvsBcoding we made before, and we will have our psychophysiological interaction!

1deval -a Seed_Neur.1D -b AvsBcoding.1D -expr 'a*b' > Inter_neu.1D

This will generate a file, Inter_neu.1D, that is our interaction at the neural level:

1dplot Inter_neu.1D

And, finally, we take it back into "BOLD time" by convolving this timeseries with the canonical HRF:

waver -dt 2 -GAM -input Inter_neu.1D -numout [numTRs] > Inter_ts.1D

1dplot Inter_ts.1D

Let's pause and review what we've done so far:
  1. First, we extracted a timeseries from a voxel or group of voxels somewhere in the brain, either defined by a contrast or an anatomical region (there are several other ways to do this, of course; these are just the two most popular);
  2. We then created a list of 1's, -1's, and 0's for our condition, contrasting condition, and all the other stuff, with one number for each TR;
  3. Next, we created a Gamma basis function using the waver command, simulating the canonical HRF;
  4. We used 3dTfitter to decouple the HRF from the timeseries, moving from "BOLD time" to "Neural time";
  5. We created our psychophysiological interaction by multiplying our AvsBcoding by the Neural TimeSeries; and
  6. Finally, we convolved this interaction with the HRF in order to move back to "BOLD time".
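To make the pipeline concrete, here is a toy numerical sketch of steps 2 through 6 in Python. Every number in it is invented for illustration; in particular, the convolution kernel is deliberately front-loaded so that the least-squares deconvolution stays well-conditioned, and it is not a realistic HRF:

```python
import numpy as np

# Toy walk-through of steps 2-6; all data and the kernel are illustrative
rng = np.random.default_rng(0)
n_tr = 60

# Step 2: psychological coding (+1 = condition A, -1 = condition B, 0 = rest)
psych = np.zeros(n_tr)
psych[5:15], psych[30:40] = 1, -1

# Step 3: a short convolution kernel standing in for the HRF
hrf = np.array([1.0, 0.5, 0.3, 0.1])

# A synthetic "seed" BOLD series: neural activity convolved with the kernel
neural_true = rng.standard_normal(n_tr)
seed_bold = np.convolve(neural_true, hrf)[:n_tr]

# Step 4: deconvolution (the job 3dTfitter -FALTUNG does), posed here as a
# least-squares solve against the convolution (Toeplitz) matrix
conv_mat = np.column_stack(
    [np.convolve(np.eye(n_tr)[:, i], hrf)[:n_tr] for i in range(n_tr)])
neural_est = np.linalg.lstsq(conv_mat, seed_bold, rcond=None)[0]

# Step 5: the neural-level interaction (the job 1deval does)
inter_neu = neural_est * psych

# Step 6: convolve back to "BOLD time" (the job of the second waver call)
inter_ts = np.convolve(inter_neu, hrf)[:n_tr]
```

With this well-behaved kernel, neural_est recovers neural_true almost exactly; with real, sluggish HRFs the deconvolution is ill-posed, which is why 3dTfitter offers penalty options (the "012 0" arguments above).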

Our last step is to enter this interaction and the region's timeseries into our model. Since we are entering the interaction into the model as is, without convolving it with anything, we will use the -stim_file option. Remember not to pair it with the -stim_base option here, as you would for the motion regressors; we are still estimating parameters for this regressor, not just including it in the baseline model.

Below, I have slightly modified the 3dDeconvolve script used in the AFNI_data6 directory. The only two lines that have been changed are the stim_file options for regressors 9 and 10:

#!/bin/tcsh

set subj = "FT"

# run the regression analysis
3dDeconvolve -input pb04.$subj.r*.scale+orig.HEAD                       \
    -polort 3                                                           \
    -num_stimts 10                                                      \
    -stim_times 1 stimuli/AV1_vis.txt 'BLOCK(20,1)'                     \
    -stim_label 1 Vrel                                                  \
    -stim_times 2 stimuli/AV2_aud.txt 'BLOCK(20,1)'                     \
    -stim_label 2 Arel                                                  \
    -stim_file 3 motion_demean.1D'[0]' -stim_base 3 -stim_label 3 roll  \
    -stim_file 4 motion_demean.1D'[1]' -stim_base 4 -stim_label 4 pitch \
    -stim_file 5 motion_demean.1D'[2]' -stim_base 5 -stim_label 5 yaw   \
    -stim_file 6 motion_demean.1D'[3]' -stim_base 6 -stim_label 6 dS    \
    -stim_file 7 motion_demean.1D'[4]' -stim_base 7 -stim_label 7 dL    \
    -stim_file 8 motion_demean.1D'[5]' -stim_base 8 -stim_label 8 dP    \
    -stim_file 9 Seed_ts.1D -stim_label 9 Seed_ts                       \
    -stim_file 10 Inter_ts.1D -stim_label 10 Inter_ts                   \
    -gltsym 'SYM: Vrel -Arel' -x1D X.xmat.1D -tout -xjpeg X.jpg         \
    -bucket stats_Inter.$subj -rout



This will generate beta weights for the interaction term, which can then be taken to a group-level analysis using your command of choice. You can also do this with the R-values, but be aware that 3dDeconvolve (via the -rout option) generates R-squared values; you will need to take the square root of these and then apply a Fisher R-to-Z transform before you do a group analysis. Because of the extra steps involved, and because interpreting beta coefficients is much more straightforward, I recommend sticking with the beta weights.

Unless you're a real man like my dating coach, who recommends alpha weights.

Context-Dependent Correlations (aka PsychoPhysiological Interactions)

As promised, here is the tutorial on context-dependent correlations. Just to confuse you a little, I also refer to them as psychophysiological interactions, or PPIs, because different people call it different things. Kind of like how some people call me Andy, but other people call me Andy Pandy. Specifically, my mom.



More details about PPIs - and my childhood - coming up soon, but first I have a level 2 scanning emergency to take care of tonight. And I mean that in all seriousness, not the way I usually toss that phrase around - to get out of awkward dates. And now you know, ladies!

Future Functional Connectivity Tutorials, and Other Updates

A few notes:

1) The previous functional connectivity posts and tutorials are cribbed from Gang Chen's homepage, which is available here. Kind of like the way I crib quotations and passages from authors that no one reads anymore, and then pass them off as my own style to boost my pathologically low self-esteem. Keep in mind that most of these demonstrations deal with a single subject and simplified situations that you probably will not encounter in your research. Given these contrived examples, most of the results generated in these demos are relatively meaningless; it's up to you to learn and understand the concepts, and then apply them to your own data and make your own inferences. My task which I am trying to achieve is, by the power of Youtube tutorials, to make you hear, to make you feel — it is, before all, to make you understand. That — and no more, and it is everything. (That was Conrad, by the way.)

2) A lot of you - I'm talking a LOT of you players - have been making requests for MELODIC tutorials and resting state analyses in FSL. All I can say is, we'll get there, in time. Before that, however, I believe AFNI is better suited for building up one's intuition, and so we will be working through a few more connectivity topics in AFNI - specifically, context-dependent correlations, beta series correlations, and resting state connectivity. After that we will again cover the same concepts, but applied in FSL - by which time, given my glacial pace, either FMRI will have become a passé technique or the Andromeda galaxy will have crashed into us.

3) Recently you may have noticed the "Donate" button on the right sidebar of the blog. This was done at the request of one reader who felt the powerful, irrational urge to loosen his purse-strings and give some alms out of the goodness of his heart, which is located somewhere way, way down there, somewhere nearabouts the cockles. Although I can't fully understand this behavior - even less than I can understand why there is someone who still has purse-strings, or what cockles are, exactly - nevertheless it helps satisfy my cupidity and strokes my ego. Furthermore, in addition to serving Mammon, these tokens of gratitude motivate me to regularly produce new material and, as a bonus, help me to continue procrastinating on my dissertation. Now that's what I call a win-win-win.

4) Also, at least one of you has mailed me a two-pack of Nutella. This has pleased me greatly. My brain needs hazelnut spread for fuel, and the more it has, the hotter and better it burns.

5) If everything goes according to plan, we should cover context-dependent correlations this weekend, beta series correlations next week, and resting-state connectivity the week after that.

Lunch in Paris, dinner in London, comrade.

Creating Functional Connectivity Maps in AFNI (Second Step on the Road Towards Glory)

Having used 3dSynthesize to create a dataset with the effects of no interest, we are now ready to subtract those effects from the dataset that went into 3dDeconvolve; for you players using the AFNI_data6 tutorial data, this is the all_runs.FT+orig dataset. We can use 3dcalc for this operation:

3dcalc -a all_runs.FT+orig -b effectsNoInterest+orig -expr 'a-b' -prefix cleanData+orig

This will create a time series with all the drift correction and motion parameter schmutz subtracted out and cast into the pit. What is left over will be signal untainted by motion or drift effects, which we can then use to generate a seed time series.

Before that, however, there is the additional step of warping our data to a template space (such as MNI or Talairach). Assuming you have already warped your anatomical dataset to a template space either manually or using @auto_tlrc, and assuming that there is already good alignment between your anatomical and functional data, you can use the command adwarp:

adwarp -apar anat_final.FT+tlrc -dpar cleanData+orig -dxyz 3 -prefix EPI_subj_01

(Note: There are better ways to do warping, such as the warping option built into align_epi_anat.py (which calls upon @auto_tlrc), but for pedagogical purposes we will stick with adwarp.)

Once this is done, all we need to do is select a seed region for our functional connectivity analysis. This could be a single voxel, a region of interest that averages over an entire area of voxels, or a region selected based on a contrast or an anatomical landmark; it depends on your research question, or whether you want to mess around and do some exploratory analyses. For now, we will use a single voxel using XYZ coordinates centered on the left motor cortex (note that, in this coordinate system, left is positive and posterior is positive):

3dmaskdump -noijk -dbox 20 19 53 EPI_subj_01+tlrc > ideal_ts.1D

Because 3dmaskdump outputs a single row, we will need to transpose this into a column:

1dtranspose ideal_ts.1D LeftMotorIdeal.1D

After you have output your ideal timeseries, open up the AFNI viewer and make sure that what was output matches up with the actual timeseries. In this example, set EPI_subj_01+tlrc as an underlay and right click in the slices viewer to jump to XYZ coordinates of 20, 19, 53; open up the timeseries graph as well, and scroll to the first time point. Does the value there match up with the first entry of your ideal time file? If not, you may have mis-specified the voxel you wanted, or you are using a different XYZ grid than is displayed in the upper left corner of the AFNI viewer. If you run into this situation, consult your local AFNI guru; if you don't have one of those, well...

In any case, we now have all the ingredients for the functional connectivity analysis, and we're ready to pull the trigger. Assemble your materials using 3dfim+ (which, as far as I can understand, stands for "functional intensity map," or something like that):

3dfim+ -polort 3 -input EPI_subj_01+tlrc -ideal_file LeftMotorIdeal.1D -out Correlation -bucket Corr_subj01

You will now have a dataset, Corr_subj01+tlrc, which is a functional connectivity map between your seed region and the rest of the brain. Note that the threshold slider will now be in r values, instead of beta coefficients or t-statistics; therefore, make sure the order of magnitude button (next to the '**' below the slider) is 0, to restrict the range between +1 and -1.
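Under the hood, the map 3dfim+ produces is a voxelwise Pearson correlation with the seed. Here is a toy Python version of that idea, with entirely synthetic data (note that 3dfim+ also handles detrending via -polort, which this sketch skips):

```python
import numpy as np

# Toy seed correlation map: correlate one seed time series with every
# voxel's time series. All data here are synthetic.
rng = np.random.default_rng(1)
n_tr, n_vox = 100, 5
data = rng.standard_normal((n_vox, n_tr))

data[1] = data[0] * 2.0 + 0.1 * rng.standard_normal(n_tr)  # a coupled voxel
seed = data[0]                                             # the "seed region"

# one r value per voxel, just like the map 3dfim+ writes out
r_map = np.array([np.corrcoef(seed, voxel)[0, 1] for voxel in data])
```

Voxel 0 is the seed correlated with itself, so r_map[0] comes out as 1, which is the same sanity check described at the end of this post.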

The last step is to convert these correlation values into z-scores, to reduce skewness and make the distribution more normal. This is done through Fisher's R-to-Z transformation, given by the formula z = (1/2) * ln((1+r)/(1-r)). Using 3dcalc, we can apply this transformation:

3dcalc -a Corr_subj01+tlrc -expr 'log((1+a)/(1-a))/2' -prefix Corr_subj01_Z+tlrc

Those Z-maps can then be entered into a second-level design just like any other, using 3dttest++ or some other variation.
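As a quick check on the formula, note that (1/2) * ln((1+r)/(1-r)) is mathematically just the inverse hyperbolic tangent, so a short Python sketch can confirm that the 3dcalc expression above behaves as expected:

```python
import numpy as np

# Fisher's r-to-z, exactly as in the 3dcalc expression above
def fisher_z(r):
    return 0.5 * np.log((1 + r) / (1 - r))

r = np.array([-0.9, -0.5, 0.0, 0.5, 0.9])
z = fisher_z(r)

# mathematically this is the inverse hyperbolic tangent of r
assert np.allclose(z, np.arctanh(r))
```

Note that the transform stretches the tails: correlations near +/-1 map to large z values, which is what makes the resulting distribution friendlier for group statistics.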





Sanity check: When viewing the correlation maps, jump to your seed region and observe the r-value. It should be 1, which makes sense given that this was the region we were correlating with every other part of the brain, and correlating it with itself should be perfect.

Like me.