Neuroimaging Training Program: Days 1 & 2

The first two days of NiTP have been intense - MRI physics, neurobiology, experimental design, plus much eating out, have all been crammed down our faces, only to be slowly digested over the next two weeks to form the hard bolus of wisdom, and then regurgitated back onto our colleagues when we return home. (Maybe not the best metaphor, but I'm sticking with it.)

Much of the lecture material was review, but useful review, delivered by an array of brilliant scientists who, if they chose to, could easily be doing something much more sinister with their intellectual powers, such as creating a race of giant acid-spewing crabs to paralyze the world in fear. I'm sure the thought has passed through their minds at some point. Fortunately for us, however, they are content to devote their energies to advancing the field of neuroimaging. And while you can find their slides and audio lectures online here (plus a livestream over the next couple of weeks here), I'll do my best to intermittently summarize what we've covered so far. This is mainly a brief information dump; some of these topics I'll try to expand on once I get back to New Haven.


  • After a brief introduction and overview by MR physicist Mark Cohen, we listened to a keynote speech by Russ Poldrack, who told us about the various ills and pitfalls of neuroimaging and cognitive neuroscience, including inflated effect sizes, poor reproducibility, and how shoddy experimental design leads to ever-withering claims of neophrenology. We each mentally swore to never again engage in such scurrilous practices, while continuing to have the nagging feeling somewhere in the back of our minds that we'd compromise at some point. It's like telling a man not to use his fingers to scrape the last streaks of Nutella from the bottom of the jar; you can't ask the impossible all the time.
  • Next up was a refresher on neurons, neurobiology, and the Blood Oxygenation Level Dependent (BOLD) response. With hundreds of billions of tiny neurons crammed inside our cranium, along with a complex network of glia, dendrites, synapses, and vesicles, it's a miracle that the thing works at all. Couple this with the incredibly fast electrical and chemical processes generating action potentials, and the intricate relationship between cellular metabolism and the hemodynamics delivering and shuttling blood to and from activation sites, and you begin to question whether some of the assumptions of FMRI are all that robust - whether it truly measures what we think it's measuring, or just some epiphenomenon of neural activity several steps removed from the actual source.
  • But we all have to keep that grant money flowing somehow, which is where experimental design comes in, smoothly eliding all those technical concerns with a sexy research question involving consciousness, social interaction, or the ever-elusive grandmother neuron. However, no research question is immune to sloppy design, and it is always worth asking whether the same question could be answered much more easily, and much more cheaply, with a behavioral paradigm. Once you have a good neuroimaging research question, you also need to question several of the assumptions going into the design, such as whether pure insertion holds - that is, whether adding another cognitive process produces activity sensitive only to that process, without any undesired interactions - and whether there are any stimulus confounds.
  • Lastly, we covered data preprocessing and quality control, in particular the vicissitudes of head motion and why humans are so stubborn about doing things like moving, breathing, making their hearts beat, and other things that are huge headaches for the typical neuroimager to deal with. We're not asking for much here, guys! Several of these issues can be addressed either by excluding acquisitions contaminated by motion or other sources of intrinsic noise, or, more commonly, by modeling them as nuisance regressors so that their variance gets assigned to them and not to the regressors you care about (see the short sketch after this list). Another related topic was a Matlab function written by Martin Monti for assessing multicollinearity in your design, which I plan to cover in detail in a future post. You can find the code on the NiTP website.
  • Oh, and k-space. We talked about k-space. I've encountered this thing off and on for about seven years now, and I still don't completely understand it; whenever I feel as though I'm on the edge of a breakthrough, it eludes me again. Which leads me to conclude that either a) I just don't understand it, or b) nobody else understands it either and it's really meaningless, but enough people have invested enough in it to keep up the charade, which is why it continues to be presented as a necessary but abstruse concept. For the sake of my self-esteem, I tend to believe option b.
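
Returning to the motion point above: a minimal sketch of how motion-related nuisance regressors typically make their way into an FSL model (the filenames are hypothetical). mcflirt writes out the six rigid-body motion parameters, and that text file can then be handed to FEAT as additional confound EVs on the Stats tab so the motion-related variance gets soaked up by those regressors rather than by your conditions of interest:

```bash
# Motion-correct one run and save the motion parameter traces
# -plots writes the six parameters (3 rotations, 3 translations) to a .par file
mcflirt -in sub01_run1.nii.gz -out sub01_run1_mcf -plots

# sub01_run1_mcf.par now has one row per volume and one column per parameter,
# which is exactly the plain-text format FEAT expects for confound EV files
```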
That's about it! I plan on posting a couple more updates throughout the week to keep everyone abreast of what's going on. Again, check out the livestream; you're seeing and hearing the same things I am!

FSL Summary

After finishing that cycle of tutorials, I feel as though a summary would be useful. Some of these are points that were highlighted during the walkthroughs, whereas others are germane to any neuroimaging experiment and not necessarily specific to any software analysis package. The past couple of weeks have only scratched the surface of the many different experimental approaches and analysis techniques that are available, and the number of ways to gather and interpret a set of data is nearly inexhaustible.

That being said, here are some of the main points of FSL:

1) These guys really, really like acronyms. If this pisses you off, or if you find it more distracting than useful, I'm sorry.

2) Download a conversion package such as dcm2nii (part of Chris Rorden's MRIcron package here) to convert your data into NIfTI format. Experiment with the different filename options to generate images whose names are interpretable and easy to read.
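
For example, a minimal conversion call might look like this (the folder names are placeholders for wherever your raw DICOMs and converted images live; see dcm2nii's documentation for the full list of filename options):

```bash
# Convert a folder of DICOMs to gzipped NIfTI files in nifti/
# -g y : gzip the output (.nii.gz)
# -o   : directory to write the converted images into
mkdir -p nifti
dcm2nii -g y -o nifti/ dicom/
```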

3) Use BET to skull-strip your anatomicals as your first step. There is an option to run BET within the FEAT interface; however, that applies to your functional images, not your anatomical scans. Skull stripping is necessary for more accurate coregistration and normalization (warping your images to a standardized space).
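
As a rough sketch (the filenames here are made up), the command-line call is a one-liner; always load the result next to the original image to make sure BET hasn't removed brain or left large chunks of skull behind:

```bash
# Skull-strip the anatomical image
# -f : fractional intensity threshold (default 0.5; smaller keeps more tissue,
#      larger strips more aggressively)
# -m : also write out the binary brain mask
bet sub01_anat.nii.gz sub01_anat_brain.nii.gz -f 0.5 -m
```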

4) Unlike other analysis packages, FSL treats each individual run as a first-level analysis; an individual subject comprising several runs as a second-level analysis; collapsing across subjects as a third-level analysis; and so forth. I recommend using the FEAT interface to produce a template for how you will analyze each run (and, later, each subject) before proceeding to batch your analyses. Especially for the beginner, using a graphical interface is instructive and helps you comprehend how each step relates to the next in the processing stream; however, once you feel you understand the interface well enough, wean yourself off it and start scripting your analyses.
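
One common way to batch first-level analyses, assuming you have saved a template design.fsf from the FEAT GUI and dropped a placeholder string (here, RUN_NAME) into it wherever the run name appears, is a simple loop like this (the run names are hypothetical):

```bash
# Generate a design file for each run from the template, then run FEAT on it
for run in run1 run2 run3 run4; do
    sed "s|RUN_NAME|${run}|g" template_design.fsf > design_${run}.fsf
    feat design_${run}.fsf
done
```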

5) Use the Custom (3 column format) option within the Stats tab of FEAT to set up your analysis. Most studies these days are event-related, meaning that events are of relatively short duration and that the order of presentation is (usually) randomized. Even if your analysis follows the same pattern for each run, it is still a good habit to get comfortable with entering 3-column timing files for your analysis.
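
Each timing file is just a plain text file with one row per event: onset (in seconds), duration (in seconds), and a weight (usually 1). A made-up example for one condition in one run:

```bash
# Write a 3-column timing file for a single condition (values are invented);
# point the Custom (3 column format) option at one file like this per
# condition, per run
cat > left_press_run1.txt << EOF
12.0   0.5   1
37.5   0.5   1
61.0   0.5   1
EOF
```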

6) If your initial attempts at registration and normalization fail, set the coregistration and normalization parameters to full search and maximum degrees of freedom (i.e., 12 DOF). This takes more time, but has fixed every registration problem I have had with FSL.
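
In the FEAT registration tab, "Full search" corresponds to letting the initial alignment search cover the full ±180 degrees of rotation. If you are troubleshooting outside of FEAT, roughly the same settings can be reproduced with a standalone FLIRT call (the filenames are placeholders):

```bash
# Affine (12 DOF) registration of a skull-stripped anatomical to the MNI152
# template, with the rotation search opened up to the full +/-180 degrees
flirt -in sub01_anat_brain.nii.gz \
      -ref $FSLDIR/data/standard/MNI152_T1_2mm_brain.nii.gz \
      -out sub01_anat2std.nii.gz \
      -omat sub01_anat2std.mat \
      -dof 12 \
      -searchrx -180 180 -searchry -180 180 -searchrz -180 180
```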

7) Look at the output from your stats and make sure it is reasonable. If you have included a robust contrast that should produce reliable activations - such as left button presses minus right button presses - make sure the activation is there. If it isn't, that suggests a problem with your timing or with your images, which is a good thing to catch early. Also look at your design matrix and make sure that events are lined up where you think they should be; any odd-looking convolutions should be investigated and taken care of.
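
The design matrix FEAT actually used is saved as an image inside the output directory, so checking it is as simple as opening that file (the .feat directory name below is hypothetical):

```bash
# View the convolved design matrix for a first-level analysis; each column
# is one regressor, with time running down the image
xdg-open sub01_run1.feat/design.png    # or open it in any image viewer
```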

8) This piece of advice was not covered in the tutorials, nor does it apply to neuroimaging analysis itself exactly, but it bears repeating: run a behavioral experiment before you scan. Behavioral effects - such as reaction time differences - are good indicators that there may actually be something going on neurally that is causing the observed effects. The strength and direction of those behavioral differences will allow you to predict and refine your hypotheses about where you might observe activation, and why. Furthermore, behavioral experiments are much cheaper to run than neuroimaging experiments, and can lead you to make considerable revisions to your experimental paradigm. Running yourself through the experiment will allow you to make a series of commonsense but important judgments, such as: Do I understand how to do this task? Is it too long or too boring, or not long enough? Do I have any subjective feeling about what I think this experiment should elicit? It may drive you insane to pilot your study on yourself each time you make a revision, but it is good practice, and it can save you much time and hassle later.


That's about it. Again, this is targeted mainly toward beginners and students who have only recently entered the field. All I can advise is that you stick with it, take note of how the top scientists run their experiments, and learn how to script your analyses as soon as possible. It can be a pain in the ass to learn, especially if you are new to programming languages, but it will ultimately save you a lot of time. Good luck.