AFNI's uber_subject.py



Back in the good old days, we would create our scripts ex nihilo; out of nothingness would we construct gigantic, Babel-esque scripts that nobody - to be honest, not even we - could understand. We would pore over FMRI textbooks and fumble around with commands and tools we thought were germane, only to have everything collapse all around us when it came time for execution. I remember with painful clarity the moment when I finally hit upon the idea of looping over subjects for each iteration of the script; I thought I was a creative genius.

Lawless, ruthless, and terrifying; those days were like the wild west. Nobody knew what the hell was going on; you might come across a block of analysis script posted by some group in Singapore, compare it to your own, and wonder how your two labs could ever come to the same conclusion about anything, given how radically different your scripts were. Your script would call for slice timing correction, followed by coregistration and normalization, while their script would call for a cup of chopped onions and a clove of chopped garlic. Then, slowly, you would realize that what you were looking at was a recipe for chicken cacciatore or something, and you would feel like an idiot. Overall, those days were not good.

Fortunately for us, these days we have a script called uber_subject.py, which takes care of generating analysis scripts quickly and easily. AFNI script ex machina, as it were. If you have programs and binaries from the past couple of years or so (and there's no reason you shouldn't; if you haven't updated in a while, a quick '@update.afni.binaries -d', without the quotes, should do the trick), you will have uber_subject.py. If you type it from the command line - and your Python libraries are current and functional (see here for a message board thread if you have trouble with this) - then a graphical user interface will pop up, prompting you to input parameters such as smoothing kernel size, number of regressors, relationship status, and so forth, until you have a completely idiosyncratic script to fit your needs. Overall it has worked very well for me so far, and word is that it will be integrated with an even higher-level script called uber_script.py. I've had some issues getting that one to work, so instead of trying to fix it, I have taken the path of least resistance and settled for uber_subject.py. You will be glad you did the same.
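If you just want the relevant commands in one place, something like the following should do it (everything else gets filled in through the GUI once it launches):

@update.afni.binaries -d    # grab the latest AFNI binaries, if yours have gone stale
uber_subject.py             # launch the GUI and start filling in your parameters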


AFNI Command of the Week: 3dinfo

Researchers are always trying to find out more about their data. They examine it from different angles; place it in their hand and feel its texture and test its heft; and look closely for portents and signs of the apocalypse.

3dinfo, similar to FSL's fslinfo and SPM's spm_vol, returns critical information about an FMRI dataset, such as the number of voxels along the x-, y-, and z-directions, the size of those voxels, and other header essentials, such as the number of volumes and the repetition time (TR). This information is critical when performing steps such as slice timing correction with 3dTshift, where the researcher may want to know more about the number of slices and the order in which they were acquired, or when doing a step like cluster correction, where the voxel dimensions are a critical piece of information.
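To give a concrete (if minimal) example, a call looks something like the following; the dataset name is just a stand-in for whatever you have lying around:

3dinfo r01+orig          # summary of the header: matrix size, voxel dimensions, TR, number of volumes
3dinfo -verb r01+orig    # the same, with more detail than the default output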

A few lesser-known options include -VERB (in all caps), which generates even more information than the typical -verb option, and -echo_edu, which formats the standard output into a clear and easy-to-read table. These and more can be found in the following video:



Super Useful Sampling Distributions Applet


Similar to the applets I used for my P211 research methods class, there is an online program which allows the user to specify a population distribution and then build a sampling distribution of statistics such as the mean, median, and variance. When I was first starting out I had a difficult time grasping what exactly a sampling distribution was, or what it meant; but tools like this are great for visualizing the process and building an intuition about what's really going on. The result is, I still don't understand it - like, at all - but I sure as hell feel more confident. And that's what is really important.


On Vacation



I will be gone for the next ten days, making a brief layover in Chicago to see some friends, and then traveling back to the frigid northern wastelands of Minnesota to my old stamping grounds of Wayzata (and believe me, I used to stamp the living heck out of those grounds). As a result, there will be fewer posts - although I do have a couple of drafts at hand to sprinkle around like little orts, just to keep you all happy and coming back for more.

But seriously, I want to let you all know that I do not take anyone's readership here for granted. After the initial wave of inflated traffic statistics from bots and pings from Russian adult-oriented websites, I have seen that there is a good chunk of you that - for whatever reason - keeps returning; and so, I thank you. I will be sure to reward your loyalty by putting up Google ads soon, as they are targeted toward your desires, and so will make the experience more enjoyable. You think I am joking, but I am not.

But seriously - for real, this time - each time I get a comment about how this blog has helped someone out with some aspect of FMRI, or statistics, or their romantic life, it all seems worth it. I can't tell you how many times I've thought about hanging it all up and quitting (actually, I can: twice), only for someone to post a comment at the last minute asking how the hell I could be so stupid as to write that optseq optimizes designs according to beta estimates as opposed to contrast efficiency; upon which I become pissy and defensive, and continue to post out of sheer spite. So to all of you who read and contribute - Thank You! Here is a reward for you all:


Neurosynth: FMRI Meta-Analysis Made Fun

A sample screenshot from Neurosynth

Have you ever sat in front of your computer, watching Step by Step reruns on Youtube while mindlessly munching Cheetos and wondering how you could instantaneously generate meta-analysis maps for neuroimaging data? If so, then Neurosynth is the tool for you. Developed primarily by researcher and nerd baller Tal Yarkoni, Neurosynth allows the user to generate meta-analysis maps on the fly for virtually any neuroimaging-related keyword you can imagine, and possibly even ones you wouldn't think to imagine. (For example, check out the topic words under Y: year, yellow, yielded (!), you, young, younger.)

Neurosynth is intuitive and approachable: Simply type in a keyword (such as "pain" or "cognitive control"), and it immediately generates a statistical map associated with that keyword. Options include forward inference, or the consistency of activation for that term across studies, and reverse inference, or the probability that a term was reported given activation at a particular location. These maps can then be downloaded in NIFTI format and used for a variety of purposes, such as ROI masks. If you're particularly driven, and your mind not bemazed from Cheetos overdose, I am sure that you can find several other uses, as described on the website's FAQ. Other data from these maps include the studies that went into the meta-analysis, and even author names can be used as keywords; in this way, Neurosynth also serves as an excellent educational tool for learning which regions are associated with which cognitive processes and with which authors' work.
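As a quick sketch of the ROI mask idea: suppose you have downloaded one of the reverse inference maps as a NIFTI file. A single AFNI command will binarize it into a mask (the filename and threshold below are placeholders of my own invention, not anything Neurosynth prescribes):

# Set every voxel with a z-value above 3.1 to 1, and everything else to 0
3dcalc -a pain_reverse_inference.nii.gz -expr 'step(a-3.1)' -prefix pain_mask.nii.gz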

In any case, I am always pleased when I come across tools like this, and I'm hoping that the authors continue to refine and expand this program; I think it will become increasingly useful in the future as more neuroimaging data is shared online. And as important as the future is, I should hasten to add that we must also look back to the past - from Mozart's Don Giovanni to Burton's The Anatomy of Melancholy to Step by Step - in order to better understand ourselves and our rich heritage as human beings, and realize both from whence we have come, and, I hope, our future greatness.


Andy's Brain Blog Book Club: Lolita

The one non-scandalous image of Lolita I could find

Since its publication over five decades ago, Lolita - one of Nabokov's several masterpieces, and arguably his best - has continued to provoke emotions ranging from awe and admiration to shock and outrage. Indeed, it is difficult to come to grips with the fact that one of the most brilliant examples of English prose should center upon such a sordid subject; to paraphrase a line from the book itself, even the most jaded voyeur would pay a small fortune to see the acts described within those intolerably vivid pages.

You will not be able to tolerate Nabokov at all unless you realize that he is not putting forth a message, or a moral, or using symbolism at any point to convey some deeper meaning. As he writes in his afterword - and we have no reason to doubt his sincerity - the entire point is aesthetic pleasure; to experiment with the rhythm and sonorities and cadence of the language and make it as pleasing as possible to the inner ear. This last point may strike some as odd, as Nabokov deliberately employs a rarefied, sophisticated style involving recondite vocabulary (see, now there I go) copiously interlarded with French turns of phrase (your humble blogger admits to not knowing a lick of French, once responding Trois bien to a French cellist's Ça va). Aside from the conflicting feelings aroused by Humbert's mind (at times achingly beautiful; at others, horribly squalid), many readers find the language itself to be an obstacle; two or three trips to the dictionary per page is not uncommon.

However, this need not deter you; for the first reading, I recommend paying little attention to the words and French you do not understand, and simply immersing yourself in the lyrical, shocking, roller-coaster prose. As you will soon realize (to your delight, I hope), Nabokov has an uncanny gift for constructing sentences and coining words that stick with you long after you have put the book down. After the first reading I could still see inly, projecting onto the silky screen of my retina and vibrating along my optic nerve, some of those odd, charming, gorgeous phrases: Lo-lee-ta; the biscuity odor of his Annabel Lee; nightmarish curlicues; winged gentlemen of the jury; limbless monsters of pain; the bubble of hot poison in one's loins; dim rays of hope before the ultimate sunburst; clawing at each other under the water; a list of names of children enrolled in Lolita's school (Irving Flashman, Viola Miranda, Agnes Sheridan, et alia); purple pills made of summer skies and plums and figs and the grapeblood of emperors; aurochs and angels; Lolita playing tennis; truck taillights gleaming like carbuncles; coffins of coarse female flesh within which nymphets are buried alive; the exquisite caloricity of Lolita's fevered body; the soft crepitation of flowers; Humbert dehiscing one of Lolita's infected bugbites and gorging himself on her spicy blood; Will Brown, Dolores, Co.; icebergs in paradise; guilty of killing Quilty; drunk on the impossible past; Humbert looking and looking at Lolita - older, married, pregnant, eyes faded to myopic fish, nipples swollen and cracked, her young velvety delicate delta tainted and torn - and knowing that he loves her more than anything he had ever seen or imagined on earth, or hoped for anywhere else.

Most of these I can still recall perfectly from memory; only a few of them did I go back to double-check, if not to verify the accuracy of my recollection, then to savor their rereading. (Only someone like Nabokov could have dreamed up something as twisted as Humbert attempting to get parenting advice from a book called Know Your Own Daughter.) These sentences and scenes serve as the nodes and nerves of the novel, checkpoints and touchstones scattered amongst interstitial words and prose for any reader curious or sensitive enough to detect them; and each reader will discover his own words and gems that resonate.

A final note: If you have already read Lolita, reread it. The Foreword, the novel itself, and the Afterword (included, I believe, in all editions after 1955) are rich in literary jokes and self-referential allusions that reward careful rereading, and contain details that, while nearly impossible to detect upon a first reading, enhance the experience after you already know the denouement.


AFNI Command of the Week: 3dZcutup and 3dZcat

When I worked at OSU, I would put up a new AFNI command on the lab wiki every week, detailing a program that isn't necessarily used all that often, but has some interesting applications for the user looking for more ways to manipulate their data. I plan to do the same on this blog, in the hopes that someone might find them useful.

One such tool that came to my attention a couple of weeks ago was 3dZcutup, a program for taking apart individual slices or groups of slices, in order to rearrange them or, more commonly, to perform statistical analyses on only one slice at a time if computer memory becomes an issue. The usage is simple: Supply an input dataset, a prefix for your output dataset, and specify the range of slices you want to dump into the output dataset. For example, say you have a functional dataset r01+orig with 35 slices in the z-direction; if you wish to output the first half of the slices into one dataset and the second half into another dataset, you could do something like the following:

3dZcutup -prefix bottomHalf -keep 0 16 r01+orig
3dZcutup -prefix topHalf -keep 17 34 r01+orig


Recall that the slices start at slice 0, which is why the last slice in this dataset is labeled 34. The output datasets for these commands would look something like this:

TopHalf

BottomHalf


In order to rearrange these slices, either to recreate the original dataset or to reverse their order, you can collate the slices with the complement to 3dZcutup, 3dZcat:

3dZcat -prefix rightDirection bottomHalf+orig topHalf+orig
3dZcat -prefix wrongDirection topHalf+orig bottomHalf+orig

RightDirection

WrongDirection

A more useful application of 3dZcutup and 3dZcat is at the 3dDeconvolve stage, where each slice (or group of slices) can be run through 3dDeconvolve separately, and then stacked together to create the complete statistical dataset (the following is copied from the help file of 3dZcutup, since it is better than any example I could come up with):

  foreach sl ( `count -dig 2 0 20` )
    3dZcutup -prefix zcut${sl} -keep $sl $sl epi07+orig

    # Analyze this slice with 3dDeconvolve separately

    3dDeconvolve -input zcut${sl}+orig.HEAD            \
                 -num_stimts 3                         \
                 -stim_file 1 ann_response_07.1D       \
                 -stim_file 2 antiann_response_07.1D   \
                 -stim_file 3 righthand_response_07.1D \
                 -stim_label 1 annulus                 \
                 -stim_label 2 antiann                 \
                 -stim_label 3 motor                   \
                 -stim_minlag 1 0  -stim_maxlag 1 0    \
                 -stim_minlag 2 0  -stim_maxlag 2 0    \
                 -stim_minlag 3 0  -stim_maxlag 3 0    \
                 -fitts zcut${sl}_fitts                \
                 -fout -bucket zcut${sl}_stats
  end

  # Assemble slicewise outputs into final datasets

  time 3dZcat -verb -prefix zc07a_fitts zcut??_fitts+orig.HEAD
  time 3dZcat -verb -prefix zc07a_stats zcut??_stats+orig.HEAD


What this will do is loop over twenty-one slices (numbered 00 through 20), perform 3dDeconvolve on each slice separately, and then reassemble both the fitts and stats datasets from all of the individual slices after they have been analyzed. This can help when the dataset is either extremely large, or your computer has relatively little memory.
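If you are the paranoid type, a quick sanity check is to compare the header of the reassembled dataset against the original with 3dinfo (the dataset names follow the help-file example above):

3dinfo -verb epi07+orig           # the original dataset
3dinfo -verb zc07a_stats+orig     # the reassembled statistics dataset - slice count and grid should match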


Thanks to alert reader Landoska, who once cut his FMRI data into four slices instead of eight, because he wasn't hungry enough for eight slices. (rimshot)


Top Ten Tips for Graduates Teaching Undergraduates

This past week I finished teaching a research methods class, a mandatory course for psychology majors. We covered a wide range of topics, including clinical treatments, the Stroop effect, and the Implicit Association Test, with a focus on having the students design their own experiments, gather some data, and analyze the results. In all, it was a good experience, but it also presented several challenges, including four medium-length papers (about 10-14 pages each, on average) spread across forty or so students. There might be some English or Philosophy professors who will get a hearty guffaw out of this ("You think that's a lot of papers to read? Let me show you, boy!"), but for me, it was quite an adjustment.

Most graduate students will be called upon to teach at some point during their PhD career, and rightly so; in addition to inuring you to mindless drudgery and incessant complaints, teaching helps you hone your public speaking skills and your ability to interact with an audience. Think of it as having benefits across a wide range of areas: Speaking, effectively dealing with complaints, and making yourself engaging and presentable. You will get far more out of it if you see it as an opportunity to improve your marketability.

That being said, teaching can be at times frustrating and challenging; however, there are several ways to make the experience less painful, more efficient, and maybe even enjoyable. The following is a list of rules and procedures I put in place to protect myself; some of them I got from previous teachers, while some of them I picked up along the way:

  1. Make your syllabus clear. Students are ingenious at finding loopholes and will exploit them if they can. (Just think back to when you were an undergraduate; wouldn't you do the same thing?) Think of your syllabus as a contract with the class; the more detailed and clearer you are, the less wiggle room there is to abuse the system.
  2. Set strict deadlines for turning in drafts of papers. My policy was to look at only one draft per paper, and only if it was submitted at least seventy-two hours before the paper deadline; likewise, students had only one week after receiving their grades to schedule a meeting to discuss their paper. One important policy I put in the syllabus was that, if students requested a meeting to contest their grade, I would regrade the entire paper; their final grade could go either way. Over the whole semester, not one student contested their paper grade. Then again, I am also an unstable and terrifying person.
  3. If you can, request electronic drafts and grade those. There may be some who like grading by hand, which is fine; however, grading electronic copies allows you to more easily store a copy of their graded papers (with comments) on a hard disk for future reference.
  4. Establish your superiority on the first day by asking a brainteaser, such as "What do you put into a toaster?" Most of the students will answer "Toast," when the answer is actually "Bread". This will severely demoralize them, and make them unwilling to challenge your authority.
  5. For God's sake, don't get your stones wound up too tight over grammatical errors like than/then and effect/affect. I can't tell you how many times I've heard a colleague say "Can you believe what this student wrote? It says here: 'My experimental manipulation effected the results'. How am I supposed to know what they mean?" It's pretty simple: They meant "affected". Most students are not clever enough to construct a sentence the other way around. I've seen some pretty horrific, sometimes humorous, butcherings of the English language, and this is a comparatively mild offense.
  6. Be patient. Sometimes you will be shocked by the kinds of mistakes the students make, and it will bewilder you how some of them appear to keep missing the point when you feel that you stated it so clearly. Sometimes they really just don't get it, and they might still not get it even by the end of the semester. Sometimes it's because of you, and you really just don't explain some things very well, no matter what you think or what your colleagues tell you. You might think that if you were in their position, you would pick these things up quicker, because you're smarter, more motivated, and - dammit - you try! But, just to put things in perspective, you should also recall that there are some things that you are still laughably, ridiculously bad at - maybe mathematics, or music, or thawing food in the microwave - no matter how much work you put into it.
  7. Spend as much time as you need to grade, and no more. Honestly ask yourself: How many students will really look at the comments? Not many, and those that do, won't care that much. Use the comments more as an anchor for addressing concerns if students have questions about the grade they received; the comment will help you remember where they screwed up, and help you address it effectively. This is not a recommendation to slack off about grading; rather, realize that you can quickly enter a point of diminishing returns with the amount of detail in your feedback.
  8. Have fun. We all want our teachers to be fun and engaging; if you come in with a terrible attitude, the students will mentally check out. They might mentally check out no matter what, but as long as you're having fun, at least you don't have to suffer.
  9. Watch Saved by the Bell reruns. In addition to being an excellent TV show, Saved by the Bell will make you familiar with the archetypal students that you will encounter in your class: Zack, the preppy one; Slater, the jock; Screech, the nerd; and Kelly, the popular girl. The show will teach you how each one operates, and will allow you to deal with them accordingly. In addition, you will have a leg up on knowing all of the potential pranks and shenanigans they will try to pull on you, such as when Zack puts his clothes on the skeleton from anatomy class to hide his absence.
  10. Appreciate the good things that happen. Everyone complains about the bad things that happen to them that they don't deserve; few people take as much notice of the good things that happen to them that they also don't deserve. Some students will surprise you with their enthusiasm and insight, and genuinely want to learn more about the subject. Be grateful when you get students like this.
Those are my recommendations for how to approach a class, especially if you are teaching it for the first time. Above all, continually ask yourself whether this is something that you are interested in doing; some people find that they have a knack for it, and will find a teaching career a rewarding and enjoyable experience. Likewise, if you absolutely cannot stand it, also take note of that, and plan accordingly; there are few things more depressing than a man continuing to do a job he abominates.



Contrasts in SPM (with Outtakes!)

We have come to the end of the preprocessing pipeline, and lurch across the finish line with a discussion of contrasts. Often researchers will calculate the difference in beta estimates between two conditions (in SPM, the beta_000?.img files), and also determine whether that difference is significant. At the single-subject level, both the magnitude of the beta estimate and the variance of that estimate are calculated for each condition, and t-tests can then be performed on these beta estimates by weighting them. For example, the contrast of [1 -1] for Left vs. Right button presses will subtract the beta estimates for the Right button presses from the Left button presses, similar to a paired t-test. A t-statistic is then calculated at each voxel using the following formula:
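(Writing X for the design matrix and sigma-hat squared for the estimated residual variance - my notation, so take the exact form with a grain of salt:)

$$
t \;=\; \frac{\gamma^{\top}\hat{\beta}}{\sqrt{\widehat{\mathrm{Var}}\!\left(\gamma^{\top}\hat{\beta}\right)}}
  \;=\; \frac{\gamma^{\top}\hat{\beta}}{\sqrt{\hat{\sigma}^{2}\,\gamma^{\top}\!\left(X^{\top}X\right)^{-1}\gamma}}
$$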


Where gamma represents the contrast vector (in this example, [1 -1]) and B-hat represents the beta estimates for each condition. The degrees of freedom for a single-subject analysis are based on the number of time points; although, since nearby time points share a high degree of correlation, the actual degrees of freedom are pared down to compensate. With most standard processing streams, the variance associated with a beta estimate is discarded when it is carried to a higher-level analysis, although programs such as FSL's FLAME and AFNI's 3dMEMA take this variance into account when weighting group-level estimates.

Details about how to perform a simple t-contrast in SPM are shown in the following video. The first twenty seconds or so is an outtake where my microphone fell over; we sure like to have fun around here!



Insights into the Vegetative State Using Humor


A few days ago I stumbled upon an article talking about Adrian Owen's work with vegetative state patients, and in an instant I was mentally transported to my sophomore year of college, when I first read one of his papers. I remember it like it was just yesterday; an unseasonably warm and humid May afternoon in my cognitive psychology class; just outside the window, you could hear the whine of the midges intermingling with the screams of children, and everywhere the Minnesota foliage was pullulating into life, those emerald prairies and celadon canopies of the Arboretum soaking up as much water and air and oxygen as they required. And in front of us at the head of the classroom stood imperious Professor Brockton, his left hand bepurpled with a chemical burn from an unknown wetlab incident, that crazed stain traveling up his palm and disappearing within the cuff of his neatly pressed Stafford shirt, leaving us to wonder exactly how far it went before terminating.

But above all, I remember discussing in class that day how neuroimaging had provided some evidence that patients supposedly in comas and vegetative states could still process information from the outside world, such as being asked to imagine playing tennis, which, to me, was astonishing. It was at that moment I had an epiphany and realized what I wanted to do with my life; I wanted to be - a professional tennis player.

No, wait! I meant, a cognitive neuroscientist. Kind of like a regular neuroscientist, except with an additional term to set us apart and let everyone know how special we are.

In any case, Owen has gotten a lot of press in the past few years for conducting these types of experiments on people in vegetative states. Specifically, he uses paradigms where he scans individuals while asking them to imagine doing different tasks, such as playing tennis, going around different rooms of their house, and neuroscience blogging. The results were striking: subjects in a vegetative state, who otherwise have no way of communicating with anyone else, showed patterns of brain activity similar to those of healthy controls who imagined the same scenarios, suggesting that they actually could understand what was going on around them, even though they couldn't talk or move their limbs. A similar procedure was then used to ask the patients yes/no questions and see whether they could respond by selectively increasing blood flow to certain regions of the brain through thinking about specific things; and now, the next obvious step - at least in my mind - is to use this to figure out which part of the patient's body is itchy. (Seriously, think about it; you talk about helping people, this is where you start.)

More recently, Owen has investigated whether these same subjects are able to understand and appreciate humor. For example, he scanned the subjects while presenting them with humor - puns, wordplay, reading Andy's Brain Blog - and observed whether the patients responded similarly to how normally functioning individuals process humor. The funnier the joke, the more activity was found in the frontal lobes and limbic system; and once you start throwing around terms like "limbic system", you know it's gotta be true.