Friday 4 April 2014

Does it matter that we do not know the link between neuronal signals and the BOLD response?

In recent weeks there have been a couple of articles that have revisited the question of the importance (or not) of the results of fMRI studies (http://www.theguardian.com/science/head-quarters/2014/mar/26/brain-imaging-scan-fmri-academic-gimmick and http://www.theguardian.com/science/head-quarters/2014/mar/13/brain-scans-imaging-behaviour-mind). For me, the importance of fMRI for addressing any research question depends on the degree to which the hypothesis requires specific predictions about the underlying neuronal signals. In general, the more a hypothesis depends upon specific neuronal parameters, the less convincing the results and conclusions of that study will be. Currently, we still do not know with any confidence how the BOLD signal in humans is modulated by neuronal firing rate and/or by modulations in the amplitude of the local field potentials at different frequencies. In addition, we have virtually no data addressing how any relationship between these neuronal measures and the BOLD signal might differ across brain regions. Indeed, Harris et al. (2011) wrote: “In particular, BOLD signals need not directly report spiking activity in the imaged area, but instead reflect the many factors associated with neural activity that lead to an increase in blood flow. Most importantly, neurotransmitters released during synaptic activation are now known to directly influence local blood flow and it is thought that the BOLD signal may most closely reflect the excitatory synaptic component, rather than the action potential component, of neural activity”. It would therefore seem prudent to reduce the weight given to fMRI results that purport to reflect changes in specific parameters of the underlying neuronal signal. A good case study demonstrating the difficulty of relating neuronal signal modulations to BOLD signal modulations is the history of using fMRI to investigate the presence (or not) of mirror neurons in humans, a question that is still unresolved.

However, not all fMRI experiments have hypotheses that are based on specific predictions about the underlying neuronal signals. Indeed, it is interesting to note that the examples given in support of fMRI research by Matt Wall (http://www.theguardian.com/science/head-quarters/2014/mar/26/brain-imaging-scan-fmri-academic-gimmick) are examples of such research. Here, the fMRI signal is employed as a biomarker, without any attempt to explain or link modulations in the BOLD signal to specific neuronal parameters. So, does it matter that we do not know the link between neuronal signals and the BOLD response? I would say: it depends. It depends on whether or not your hypothesis makes specific predictions about the underlying neuronal signal. If it does, then it clearly does matter that the link between neuronal signals and the BOLD response is not known. If it does not, then it does not matter.


Whatever your thoughts, this paper is well worth a read.

Harris JJ, Reynell C, Attwell D. (2011) The physiology of developmental changes in BOLD functional imaging signals. Dev Cogn Neurosci. 1(3):199-216. http://goo.gl/wGYqlX

Tuesday 18 February 2014

A note of caution when selecting ROIs from orthogonal contrasts

In neuroimaging (fMRI, EEG and MEG) it is quite common to circumvent the requirement to correct for multiple comparisons by reducing the multidimensional data to a single value per subject per condition, by selecting a region/electrodes/time window of interest. Previously, I have shown that the results of statistical tests are biased if this selection is based on where the effect of interest is maximal (http://www.ncbi.nlm.nih.gov/pubmed/23639379). However, it is common practice to select the region/electrodes/time window of interest from a contrast orthogonal to the one of interest in a factorial design. In other words, if we have a 2x2 design with factors A and B, we might select our region/electrodes/time window of interest where the main effect has the highest statistical difference and then test whether the interaction is significant, averaging across this region/electrodes/time window of interest. However, is this biased? The answer is maybe. What follows is covered in "Circular analysis in systems neuroscience: the dangers of double dipping" by Kriegeskorte et al. (2009) http://www.ncbi.nlm.nih.gov/pubmed/19396166.
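For concreteness: writing the four cell means as (A1, A2, B1, B2), the main effect of A vs B is the contrast [1 1 -1 -1] and the interaction is the contrast [1 -1 -1 1]. Their dot product is (1)(1) + (1)(-1) + (-1)(-1) + (-1)(1) = 0, which is what makes the selection contrast "orthogonal" to the tested one. The simulations below ask whether orthogonality of the contrast vectors alone is enough to guarantee an unbiased test.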

To test this I simulated data for the 4 cells of a 2x2 factorial design, A1, A2, B1 and B2, such that the data for each trial, each subject and each time point were drawn from the same normally distributed random population. I then selected either (i) the time point where the main effect of A vs B was maximal or (ii) a randomly selected time point. For this time point I then tested whether the interaction between A and B was significant. This was repeated 500 times and I calculated the percentage of false positives. At a significance level of 0.05 we should expect false positives 5% of the time. This is precisely what was produced: irrespective of how the time point was selected, the percentage of false positives was 5% (see left hand panel of the figure below). Does this mean that choosing a region/electrodes/time window of interest from an orthogonal contrast is not biased? Well, only if all the cells have the same variance. If I rerun the same simulation but reduce the variance of A1 by a factor of 10, keeping the means of all the cells the same (equal to zero), I get a very different result. Now the statistical test for the interaction is biased, with over twice as many false positives as predicted (see right hand panel of the figure below).

This simulation assumed that cell A1 had a different variance from cells A2, B1 and B2. If I rerun the simulation assuming instead that all cells have unequal variances, dividing the variance of A1 by 1, A2 by 2, B1 by 3 and B2 by 4, then the proportion of false positives rises above 50%.



[Figure: percentage of false positives for the interaction test. Left panel: all cell variances equal (~5% for both maximal and random time point selection). Right panel: variance of A1 reduced by a factor of 10 (false positive rate more than double the nominal 5%).]
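For anyone who wants to try this, below is a minimal sketch of the simulation in Python. The subject, trial and time point counts are my assumptions (the post does not state them), as are the helper name false_positive_rate and the use of a one-sample t-test across subjects at the second level:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

N_SUBJ, N_TRIALS, N_TIME, N_SIMS, ALPHA = 20, 50, 100, 500, 0.05

def false_positive_rate(cell_vars, select="orthogonal"):
    """False positive rate of the interaction test when the time point is
    chosen from the orthogonal main effect (A vs B) contrast."""
    sds = np.sqrt(np.asarray(cell_vars, dtype=float))
    n_false = 0
    for _ in range(N_SIMS):
        # Null data for cells A1, A2, B1, B2: (cell, subject, trial, time),
        # all means zero so any "significant" interaction is a false positive
        data = rng.normal(size=(4, N_SUBJ, N_TRIALS, N_TIME))
        data *= sds[:, None, None, None]
        a1, a2, b1, b2 = data.mean(axis=2)            # average over trials
        # Group t-statistic for the main effect A vs B at every time point
        main = (a1 + a2) - (b1 + b2)                  # (subject, time)
        t_main = main.mean(0) / (main.std(0, ddof=1) / np.sqrt(N_SUBJ))
        tp = (np.abs(t_main).argmax() if select == "orthogonal"
              else rng.integers(N_TIME))
        # Interaction contrast, tested only at the selected time point
        inter = (a1 - a2) - (b1 - b2)
        _, p = stats.ttest_1samp(inter[:, tp], 0.0)
        n_false += p < ALPHA
    return n_false / N_SIMS

print(false_positive_rate([1, 1, 1, 1]))        # ~0.05: unbiased
print(false_positive_rate([0.1, 1, 1, 1]))      # inflated: A1 variance / 10
print(false_positive_rate([1, 1/2, 1/3, 1/4]))  # all unequal: large inflation
```

The intuition is that when the cell variances differ, the main effect and interaction contrast values become correlated across subjects even though the contrast vectors are orthogonal, so selecting the time point where the main effect is most extreme also preferentially selects extreme values of the interaction.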


Thursday 26 September 2013

A word of caution about testing for U-shaped functions

It is not uncommon in many domains of neuroscience, in particular in fMRI, to design an experiment to test whether neural activity has a U-shaped response to a parametrically varied stimulus. The analysis to test for this would seem rather trivial. Most commonly, researchers simply include a quadratic, or U-shaped, term as a regressor in their design matrix, X, at the first level. They then test whether the beta value for this regressor is significant at the second level. Unsurprisingly, if the data does have a U-shaped function then this method will produce a significant result. So far so obvious. However, there are plenty of other functions that will also produce a significant result with this test even if the data does not have a U-shaped function. Perhaps the most obvious is if the data, Y, is an exponential function of the stimulus range tested. It is easy to show in simulations that such data will also return a significant result at the second level, even though the data is not a U-shaped function of the stimuli tested. Unfortunately, it is common for studies to claim to have shown a U-shaped relationship because they have shown that the data is correlated with a quadratic regressor. In other words, simply because a U-shaped regressor is significant does not mean the data is actually a U-shaped function of the stimulus. One way to test whether data is truly U-shaped rather than exponential is to test whether the cubic term, X.^3, is also significant. If the data is an exponential function of the stimulus then the cubic term should be significant; if the data is truly U-shaped then it should not be. If you are interested, you can read how we used this logic in this paper with Dr Joel Winston: Winston JS, O'Doherty J, Kilner JM, Perrett DI, Dolan RJ. (2007) Brain systems for assessing facial attractiveness. Neuropsychologia. 45(1):195-206.
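As an illustration, here is a small simulation along those lines (not the analysis from the paper itself; the stimulus range, noise level and ordinary least squares test are my assumptions). It fits polynomial regressors to truly U-shaped data and to exponential data: the quadratic term comes out significant for both, but only the exponential data should also reliably show a significant cubic term:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = np.linspace(-2, 2, 40)
x = (x - x.mean()) / x.std()          # standardise the stimulus values

def term_pvalues(y, max_order=3):
    """OLS p-values for each polynomial term 1, x, x^2, ..., x^max_order."""
    X = np.column_stack([np.ones_like(x)] + [x**k for k in range(1, max_order + 1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    sigma2 = resid @ resid / dof
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    return 2 * stats.t.sf(np.abs(beta / se), dof)

y_quad = x**2 + rng.normal(0, 0.2, x.size)       # truly U-shaped data
y_exp  = np.exp(x) + rng.normal(0, 0.2, x.size)  # monotonic exponential data

for name, y in [("quadratic", y_quad), ("exponential", y_exp)]:
    p = term_pvalues(y)
    print(f"{name}: p(x^2) = {p[2]:.4f}, p(x^3) = {p[3]:.4f}")
# Both data sets give a significant quadratic term, but the cubic term
# should be significant only for the exponential data.
```

Standardising x keeps the polynomial regressors from being too collinear over the tested range; the same logic applies to parametric modulators entered at the first level of an fMRI analysis.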

Thursday 25 July 2013

Further thoughts on pre-registration

As far as I can tell, pre-registration of scientific studies has been proposed as a solution to three important problems with the current model of scientific publication: replication, negative results and p-hacking. I personally think it is great that this format will allow more replication studies to be undertaken and, more importantly, will enable the publication of replication studies even if the results are negative. This will be of clear value to the field.

All my reservations about pre-registration concern p-hacking, more specifically how this has been 'sold' to the community. Pre-registration is a solution to the problem of p-hacking. However, those promoting pre-registration have often argued that pre-registered articles should somehow be considered "more truthful" than those published by the traditional route. I strongly disagree with this kind of statement. It is true that pre-registered studies will not be p-hacked; however, not all studies that are not pre-registered are p-hacked. The danger of promoting pre-registration as more 'truthful' is that the community will stop believing results from someone who has decided not to pre-register for whatever reason: maybe they wanted the freedom to publish their results wherever they wanted, maybe they did not want to deal with reviewers who might disagree with the experimental design, or maybe they just wanted to start the study rather than wait for months for approval.

P-hacking is clearly a problem, particularly as it can occur subconsciously. However, at the current time I cannot see how pre-registration will work in practice to prevent p-hacking, and I worry that it has been promoted in a way that will potentially denigrate equally good science that has been published via a different route.

Friday 5 July 2013

Some Thoughts on Pre-registration

In recent months there has been a lot written about a new route for publishing original research articles in the fields of psychology and neuroscience: pre-registration (http://cdn.elsevier.com/promis_misc/PROMIS%20pub_idt_CORTEX%20Guidelines_RR_29_04_2013.pdf and http://www.guardian.co.uk/science/blog/2013/jun/05/trust-in-science-study-pre-registration). As I understand it, pre-registration requires the author to submit full details of the methods and analysis of a proposed study for peer review prior to the collection of any data. Once accepted, the data of the study can be collected precisely as detailed in the original submission. Any deviation from these methods would then have to be highlighted in the final manuscript. One potential benefit of this format is that it would enable the publication of negative results - something that is very difficult to do currently. This new format is, in my opinion, also excellently suited to direct replication studies, as the methods and analyses should be identical to the original study. Indeed, this is something that I personally plan to do in the near future.
However, this is not the main reason that pre-registration has received so much interest. The main reason is that it promises to restore trust in the scientific process. The reason that some trust may have been lost is that there is evidence to suggest that researchers are either deliberately or unwittingly analysing their data in many different ways so that their effect of interest reaches the magical p<0.05 threshold. One consequence of this is that the literature is populated with results that are at best unreliable and at worst are false positives. It has been argued that pre-registration would prevent this type of behaviour, as the analysis pipeline would have to be declared before any data is analysed. Any different, post-hoc analyses would then be clearly labelled and would have to be treated as such.
There are a number of different aspects of this use of pre-registration that bother me. The first is that I worry that this will lead to an incorrect perception that a priori analyses are more correct and post-hoc analyses are less correct. This is not the case. If both analyses are performed correctly then at a p=0.05 threshold both have the same probability of producing false positives. I worry that results that have not been pre-registered will be viewed as dodgy. It is true that they could be dodgy as a result of endless different analysis attempts; however, I think that we should trust what researchers write in their manuscripts. My biggest concern is that one of the motivations of pre-registration is that we should no longer trust what our peers have said they have done. There is something very negative about this way of thinking and I would prefer to think, perhaps naively, that all scientists are being as honest as they can be. One solution to this problem would be for all researchers to make their data publicly available. In this way others can look for themselves at the effects that are reported. This is something that I came across when publishing a paper in Proceedings of the Royal Society B, which requires data to be made publicly available on datadryad.org.
A second reason that I have misgivings about pre-registration is that it has the potential to stifle exciting, unusual research and the reporting of unexpected results. One of the things I really enjoy about my work is the possibility that with each experiment I might discover something unexpected that changes how I think about my field of research. The fact that this has not happened to me to date does not diminish this excitement. Indeed, the field of action observation would not really exist without the unexpected discovery of mirror neurons (although others might argue that this is a good argument for pre-registration!). I am not sure how excited I would be about experimentation if it were the end result of months of writing and revising pre-registration papers that would then treat post-hoc analysis of my exciting unexpected result as less truthful.
My third concern is how pre-registration will work in practice. For example, if I were to pre-register an fMRI study, I presumably would have to give details of all the scanning parameters, all subject details (numbers, how many female, ages etc.), all pre-processing parameters, the planned design matrix, the planned contrasts and any a priori regions of interest and how I would specify them, either from the data or from another source. A reviewer would then have to review all of this to determine whether it was correct, and I would not be able to deviate from it. So if one of my subjects moved a lot in the scanner and I had not covered this in the pre-registration document, then I assume the analysis would be classified as post-hoc, as I had not included all exclusion criteria in the pre-registration. My worry is that there are so many degrees of freedom in any study that to cover all possible outcomes in the pre-registration document would be unnecessarily burdensome; however, not to include them would be against the principle of pre-registration. I am sure many of these things will be ironed out when people start to pre-register their studies, but I am concerned that the level of detail required to pre-register a study, and the level of detailed expertise required to review these documents, will mean that pre-registration will not be able to work in practice as intended.