Bayesian Approaches to fMRI: Thoughts

Pictured: Reverend Thomas Bayes, Creator of Bayes' Theorem and Nerd Baller

This summer I have been diligently writing up my qualification exam questions, which will effect my entry into dissertation writing. As part of the qualification exam process I opted to perform a literature review on Bayesian approaches to fMRI, with a focus on spatial priors and parameter estimation at the voxel level. This necessarily included a thorough review of the background of Bayesian inference, over the course of which I gradually became converted to the view that Bayesian inference was, indeed, more useful and more sophisticated than traditional null hypothesis significance testing (NHST) techniques, and that therefore every serious scientist should adopt it as their statistical standard.

At first, I tended to regard practitioners of Bayesian inference as seeming oddities, harmless lunatics so convinced of the superiority of their technique as to come across as almost condescending. Like all good proselytizers, Bayesian practitioners appear appalled by the putrid sea of misguided statistical inference in which their entire field has foundered, regarding their benighted colleagues as doomed unless they are injected with the appropriate Bayesian vaccine. And in no place is this zeal more evident than in their continual attempts to sap the underlying foundations of NHST and the assumptions on which it rests. At the time I considered the differences between the two approaches to be trivial, mostly because I had convinced myself that any overwhelmingly large effect size acquired through NHST would be essentially equivalent to a parameter estimate calculated by the Bayesian approach.

Bayesian Superciliousness Expressed through Ironic T-shirt

However, the more I wrote, the more I thought to myself that proponents of Bayesian methods may be on to something. It finally began to dawn on me that rejecting the null hypothesis in favor of an alternative hypothesis, and actually being able to say something substantive about the alternative hypothesis itself and compare it with a range of other models, are two very different things. Consider the researcher attempting to make the case that shining light in someone's eyes produces activation in the visual cortex. (Also consider the fact that doing such a study in the good old days would get you a paper into Science, and despair.) The null hypothesis is that shining light into someone's eyes should produce no activation. The experiment is carried out, and a significant 1.0% signal change is observed in the visual cortex, with a confidence interval of [0.95, 1.05]. The null hypothesis is rejected and you accept the alternative hypothesis that shining light in someone's eyes elicits greater neural activity in this area than do periods of utter and complete darkness. So far, so good.

Then, suddenly, one of these harmless Bayesian lunatics pops out of the bushes and points out that, although a parameter value has been estimated and a confidence interval calculated stating what range of values would not be rejected by a two-tailed significance test, little has been said about the credibility of your parameter estimate. Furthermore, nothing has been said at all about the credibility of the alternative hypothesis, and how much more believable it should be as compared to the null hypothesis. These words shock you so deeply that you accidentally knock over a nearby jar of Nutella, creating a delicious mess all over your desk and reminding you that you really should screw the cap back on when you are done eating.

Bayesian inference allows the researcher to do everything mentioned in the previous paragraph, and more. First, it has the advantage of being uninfluenced by the intentions of the experimenter, knowledge of which is inherently murky, but on which NHST "critical" values are based. (More on this aspect of Bayesian inference, as compared to NHST, can be found in a much more detailed post here.) Moreover, Bayesian analysis sheds light on concepts common to both approaches while pointing out the disadvantages of NHST and outlining how those deficiencies are addressed and mitigated in the Bayesian framework, whereas the converse is not true; this stems from the fact that Bayesian inference is more mathematically and conceptually coherent, providing a single posterior distribution for each parameter and model estimate without falling back on faulty, overly conservative multiple-comparison corrections which punish scientific curiosity. Lastly, Bayesian inference is more intuitive: we should expect our prior beliefs to influence our interpretation of posterior estimates, since more extraordinary claims require correspondingly extraordinary evidence.
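To make that last point concrete, here is a minimal sketch in Python of how a prior gets combined with data into a posterior, using the textbook normal-normal conjugate update with the noise variance treated as known. The numbers are hypothetical, loosely matching the visual-cortex example above; this is an illustration of the principle, not any package's actual estimation procedure.

```python
import math

def posterior_normal(prior_mean, prior_sd, data_mean, data_sd, n):
    """Conjugate normal-normal update: returns the posterior mean and sd
    of the true effect (e.g., percent signal change), given a normal prior
    and n observations with the given sample mean and sd."""
    prior_prec = 1.0 / prior_sd**2
    data_prec = n / data_sd**2            # precision of the sample mean
    post_prec = prior_prec + data_prec    # precisions add
    post_mean = (prior_prec * prior_mean + data_prec * data_mean) / post_prec
    return post_mean, math.sqrt(1.0 / post_prec)

# A skeptical prior centered on "no effect" meets 20 scans showing a
# 1.0% signal change: the posterior lands near the data, but is pulled
# slightly toward zero by the prior.
post_mean, post_sd = posterior_normal(0.0, 0.5, 1.0, 0.25, 20)
```

Note how the prior acts exactly as intuition demands: a claim the prior finds extraordinary needs more (or cleaner) data before the posterior concentrates on it.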

Having listened to this rhapsody on the virtues and advantages of going Bayesian, the reader may wonder how many Bayesian tests I have ever performed on my own neuroimaging data. The answer is: None.

Why is this? First of all, considering that a typical fMRI dataset comprises hundreds of thousands of voxels, and given current computational capacity, Bayesian inference for a single neuroimaging session can take a prohibitively long time. Furthermore, the only fMRI analysis package I know of that allows for Markov chain Monte Carlo (MCMC) sampling at each voxel is FSL's FLAME 1+2, and this procedure can take on the order of days for a single subject, with results that usually turn out more or less equal to what traditional methods would produce. Add on top of this models which combine several levels of priors and hyperparameters that mutually constrain each other, and the computational cost climbs still further. One neuroimaging technique which uses Bayesian inference in the form of priors constraining the strength and direction of connectivity - an approach known as dynamic causal modeling (DCM; Friston et al., 2003) - remains relatively unused in the neuroimaging community, given the complexity of the approach (at least, outside of Friston's group). For these reasons, Bayesian inference has not gained much traction in the neuroimaging literature.
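For a sense of where the cost comes from, here is a toy random-walk Metropolis sampler for a single voxel's effect size. The posterior here is a made-up normal, not FLAME's actual model; the point is only that even a few thousand samples per voxel, multiplied across a hundred thousand voxels (and several chains, in practice), quickly adds up to the day-scale runtimes mentioned above.

```python
import math
import random

random.seed(42)  # make the toy run reproducible

def metropolis(log_post, x0, n_samples, step=0.1):
    """Random-walk Metropolis sampler for a one-dimensional posterior."""
    x = x0
    lp = log_post(x)
    samples = []
    for _ in range(n_samples):
        proposal = x + random.gauss(0.0, step)
        lp_prop = log_post(proposal)
        # Accept with probability min(1, posterior ratio)
        if random.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = proposal, lp_prop
        samples.append(x)
    return samples

# Hypothetical per-voxel posterior: effect near a 1.0% signal change, sd 0.05
log_post = lambda beta: -0.5 * ((beta - 1.0) / 0.05) ** 2

chain = metropolis(log_post, 0.0, 5000)
estimate = sum(chain[2500:]) / len(chain[2500:])  # discard burn-in
```

Five thousand iterations for one voxel is trivial; the same loop over every voxel in the brain, at every level of a hierarchical model, is where the days go.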

However, some statistical packages do allow for the implementation of Bayesian-esque concepts, such as mutually constraining parameter estimates through a process known as shrinkage. While some Bayesian adherents may balk at such weak-willed, namby-pamby compromises, in my experience these compromises can capture some of the intuitive appeal of Bayesian methods while allowing for more efficient computation. One example is AFNI's 3dMEMA, which weights each subject's estimate by its precision (i.e., the inverse of the variance of that subject's parameter estimate) when taking it to a group-level analysis. For example, a subject with a tight, low-variance parameter estimate is weighted more heavily, while a subject with a noisy estimate is weighted less.
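The weighting idea can be sketched in a few lines. This is just the inverse-variance-weighting concept, not 3dMEMA's actual mixed-effects implementation, and the subject numbers are invented for illustration:

```python
def precision_weighted_mean(estimates, variances):
    """Combine per-subject effect estimates, weighting each one by its
    precision (the inverse of its variance). Returns the combined
    group estimate and the variance of that combined estimate."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    combined = sum(w * b for w, b in zip(weights, estimates)) / total
    return combined, 1.0 / total

# Two hypothetical subjects: a noisy one (estimate 2.0, variance 1.0) and
# a precise one (estimate 1.0, variance 0.1). The group estimate lands
# much closer to the precise subject.
group_est, group_var = precision_weighted_mean([2.0, 1.0], [1.0, 0.1])
```

The noisy subject is effectively shrunk toward irrelevance rather than dragging the group mean around, which is the practical payoff the paragraph above describes.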

Overall, while comprehensive Bayesian inference at the voxel level would be ideal, for right now it appears impractical. Some may take issue with this, but until further advances in computing speed arrive, or cleverer methods make Bayesian inference more efficient, current approaches will likely persist.