Question
Asked 15th Aug, 2012

BOLD signal change data analysis: problems with correlation analyses.

I am looking for advice on conducting correlation analyses with fMRI BOLD effect (% signal change) data. I am trying to correlate these data with subjective appetite sensation scores (collected using visual analogue scales), appetite hormone concentrations, and body temperature data. I analysed the fMRI data using FSL and ran the correlation analyses in SPSS, but I cannot find any meaningful correlations. Previous research consistently identifies correlations between neural activation in brain areas that regulate the rewarding properties of food and both appetite sensations and gut hormone concentrations. Can correlations be conducted within FSL itself?

Most recent answer

Erwin Lemche
King's College London
Dear Daniel,
In principle, the equations used to calculate the Pearson correlation coefficient are the same whether you work by hand, with a pocket calculator, or with inference software ☺
However, it may make a difference whether the fMRI inference software is used for whole-brain analyses or with masks for ROIs (as in SPM). Other procedures implemented in inference software can also influence the results. Some packages, such as XBAM, or SPM on demand, use permutation resampling procedures, which lower the alpha levels required for statistical significance. Such randomization procedures therefore typically make fMRI software more “sensitive” at detecting correlations.
It is advisable, however, to be able to demonstrate a covariation both in fMRI inference software and in behavioral statistics. This is more convincing to reviewers…
One more issue I had previously forgotten to mention: you should also check whether your (behavioural and physiological) data come from a normally distributed population. This can be done, e.g., in STATA with Shapiro-Wilk tests and/or Q-Q plots. Deviations from normality may bias your results. Physiological data are often not truly normally distributed, e.g. because of the dynamics of internal secretion. In psychophysiology it is then state of the art to apply transformations (log, log-10, arcsine, etc.) as a remedy. Such procedures are implemented in SPSS and can be used conveniently.
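As a quick illustration of this normality check, here is a minimal sketch in Python (SciPy offers the same Shapiro-Wilk test as STATA; the hormone values are simulated placeholders, not data from the study):

```python
# Sketch: test behavioural/physiological scores for normality and, if the
# test rejects, log-transform the skewed data as suggested above.
# The "ghrelin" array is hypothetical simulated data for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
ghrelin = rng.lognormal(mean=2.0, sigma=0.6, size=60)  # skewed, like many hormone assays

w, p = stats.shapiro(ghrelin)
print(f"Shapiro-Wilk on raw data: W={w:.3f}, p={p:.4f}")

if p < 0.05:                        # normality rejected -> try a transform
    ghrelin_log = np.log10(ghrelin)
    w2, p2 = stats.shapiro(ghrelin_log)
    print(f"After log10 transform:   W={w2:.3f}, p={p2:.4f}")
```

Q-Q plots (e.g. `scipy.stats.probplot` or SPSS's built-in plots) are a useful visual complement to the formal test.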
So if you find a nil correlation, it is useful to look first for possible reasons outside the fMRI inference software. If those reasons can be identified, this will also improve your correlation images.
Good luck, Erwin
2 Recommendations

Popular answers (1)

Simon B Eickhoff
Forschungszentrum Jülich
Dear Daniel
You can (and should) use the obtained behavioural scores as covariates (explanatory variables) in your FSL analysis. This can be done either at the single-subject level (for trial-by-trial variations) or at the 2nd (group) level when looking for correlations across subjects. This will allow you to identify brain regions where the BOLD signal covaries with your behavioural scores. A quick web or PubMed search on "parametric modulation" will turn up lots of material on this topic. Also have a look at the FSL course notes for more detailed descriptions of the technical aspects of implementing these analyses.
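A minimal sketch of what such a 2nd-level covariate analysis computes at a single voxel, in Python: regress per-subject BOLD estimates on a demeaned behavioural score. All numbers here are simulated placeholders; FSL's FEAT/FLAME additionally does proper mixed-effects variance modelling and repeats this at every voxel.

```python
# Toy 2nd-level GLM at one voxel: group mean + demeaned behavioural covariate.
# "appetite" and "bold" are simulated, hypothetical values for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_subj = 20
appetite = rng.normal(50, 10, n_subj)              # e.g. VAS hunger ratings
bold = 0.05 * appetite + rng.normal(0, 1, n_subj)  # per-subject % signal change

# Design matrix: intercept (group mean) + demeaned covariate
X = np.column_stack([np.ones(n_subj), appetite - appetite.mean()])
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
print(f"group mean effect = {beta[0]:.3f}, covariate slope = {beta[1]:.3f}")
```

A nonzero covariate slope at a voxel is exactly the "BOLD covaries with behaviour" effect that the FSL group-level contrast tests.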
Extracting percent signal changes and correlating them offline in SPSS, however, is probably not the best idea. One of the main problems is where to extract them. If you don't use an independent localizer, you quickly run into the "double-dipping" problem. When you do use a localizer, however, you are making the rather strong assumption that the same area identified by the localizer is also the one being modulated.
Simon
5 Recommendations

All Answers (11)

Yingying Wang
University of Nebraska at Lincoln
I think you should be able to do this in FSL using FLAME, its mixed-effects group-level analysis tool.
1 Recommendation
Annette Beatrix Brühl
Universitäre Psychiatrische Kliniken Basel
Dear Daniel
Just one piece of advice: read the Yarkoni (2009) paper, "Big correlations in little studies: Inflated fMRI correlations reflect low statistical power", regarding (a) the statistical power, or number of subjects, needed for correlational analyses in fMRI and (b) the danger of applying too strict a statistical threshold.
Best
Annette
3 Recommendations
Wouter Schellekens
University Medical Center Utrecht
I think correlations can be calculated in FSL. However, why would those results be any different from the SPSS results? They use the same statistics...
2 Recommendations
Erwin Lemche
King's College London
Dear Daniel,
It is always a good idea to examine your behavioural and physiological variables for possible confounding: sociodemographic (age, sex, education, SES) and/or measurement-related (time of day, seasonal fluctuations, etc.). As in behavioural or psychophysiological studies, it can be helpful to gauge the degree of covariation by inspecting scatterplots in stats packages such as SPSS or STATA. This may reveal mediating or moderating effects of one of your measurements, whereby a truly existing correlation is suppressed by another variable (a suppressor variable). A classic instruction is the paper by Baron & Kenny (1986), "The moderator-mediator variable distinction in social psychological research", Journal of Personality and Social Psychology, Vol. 51, No. 6, 1173-1182. If you can identify one or more confounders, these should be entered as nuisance regressors in your FSL models. Have you tried whole-brain correlations with your self-report and hormone data and the experimental activation group maps before extracting percent signal changes from particular blobs?
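The suppressor-variable point above can be sketched numerically. In this simulated example (variable names are hypothetical, not from the study), a modest raw correlation becomes substantial once the confounder is regressed out of both variables:

```python
# Sketch: a true hormone-BOLD correlation masked by a confounder (e.g. time
# of day). Partial correlation via residualisation reveals it. Simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 40
confound = rng.normal(size=n)                      # e.g. time of day
hormone = confound + rng.normal(scale=0.3, size=n)
bold = -confound + hormone + rng.normal(scale=0.3, size=n)

def residualise(y, z):
    """Residuals of y after regressing out z (with intercept)."""
    Z = np.column_stack([np.ones_like(z), z])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return y - Z @ beta

r_raw, _ = stats.pearsonr(hormone, bold)
r_partial, _ = stats.pearsonr(residualise(hormone, confound),
                              residualise(bold, confound))
print(f"zero-order r = {r_raw:.2f}, partial r = {r_partial:.2f}")
```

Entering the confounder as a nuisance regressor in the FSL model achieves the same adjustment at every voxel.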
1 Recommendation
Luca Nanetti
University of Groningen
Dear Daniel,
As a follow-up to Simon Eickhoff's suggestion (to use the behavioural scores as covariates), remember to demean them (Google: Jeanette Mumford demean covariate).
Another approach you may want to try is PPI (psychophysiological interactions); see http://www.fmrib.ox.ac.uk/Members/joreilly/what-is-ppi for the FSL implementation. There the BOLD signal is related to the behavioural data in a highly refined way (Google: Friston Gitelman PPI). All of this can also be performed in SPM, of course.
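On the demeaning point, a small sketch (with made-up numbers) of why it matters: the slope is unchanged, but only with a demeaned covariate does the intercept regressor estimate the group-mean response rather than an extrapolation to covariate = 0.

```python
# Toy example: fit bold ~ intercept + covariate, with the covariate raw
# versus demeaned. Numbers are invented purely for illustration.
import numpy as np

scores = np.array([40., 55., 60., 45., 50.])   # hypothetical VAS scores
bold = np.array([0.8, 1.2, 1.3, 0.9, 1.0])     # hypothetical % signal change

for label, cov in (("raw", scores), ("demeaned", scores - scores.mean())):
    X = np.column_stack([np.ones_like(cov), cov])
    beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
    print(f"{label:8s}: intercept = {beta[0]:.3f}, slope = {beta[1]:.4f}")
```

With the demeaned covariate the intercept equals the mean BOLD response exactly, so the group-mean contrast in the design stays interpretable.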
Kind regards,
luca
2 Recommendations
Daniel Crabtree
University of Aberdeen
Thank you to everyone for your answers. Along with my collaborator at Imperial College London, I will begin to investigate the suggestions you have all provided. I hope I can contact you if I have any follow-up questions. One further question: could anyone answer Wouter Schellekens' question regarding the differences between FSL analysis and SPSS analysis? Thank you all once again.
Alexander Stevens
Oregon Health and Science University
Perhaps I missed it, but what is your sample size? To reinforce Simon's and Annette's comments: unless you have a reasonable sample (N > 30), you should be extremely cautious about any correlations you find, particularly in post hoc ROIs. With a smaller sample you won't have adequate power to detect correlations, and if you do find significant effects, you should be suspicious of them.
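The power warning can be made concrete with a back-of-envelope calculation using the Fisher z approximation (a sketch only, not a substitute for a proper power analysis):

```python
# Approximate power to detect a Pearson correlation of size r with n subjects
# at two-sided alpha = 0.05, via the Fisher z transform: z = atanh(r),
# SE = 1/sqrt(n - 3), normal approximation.
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_for_r(r, n, z_crit=1.959964):
    ncp = math.atanh(r) * math.sqrt(n - 3)          # noncentrality
    return (1 - phi(z_crit - ncp)) + phi(-z_crit - ncp)

for n in (16, 30, 60):
    print(f"n={n:3d}: power to detect r=0.5 ~ {power_for_r(0.5, n):.2f}")
```

Even a true r = 0.5 is detected only about half the time with n = 16, which is exactly why small-sample brain-behaviour correlations should be treated with suspicion.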
1 Recommendation
Luca Nanetti
University of Groningen
About the difference between FSL and SPSS: the key may lie in the number of simultaneous tests. A functional brain volume contains roughly 150k voxels, which means 150k correlations and a dramatically high chance of false positives arising purely by chance. If I am not mistaken, FSL's default approach is to use Gaussian random field theory to address the multiple-comparison problem: essentially, how likely is it that a cluster ('blob' of active voxels) of a certain size would occur by chance alone? I am not familiar enough with SPSS to rule out the possibility that such a correction exists there; but if it is not applied, the results will differ.
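The arithmetic behind this point, as a tiny sketch (the voxel count is illustrative):

```python
# Why mass-univariate testing needs correction: at an uncorrected p < 0.05,
# ~150,000 voxel-wise tests produce thousands of false positives even with
# no true effect anywhere. Illustrative numbers only.
n_voxels = 150_000
alpha = 0.05

print(f"expected false positives under the null: {int(n_voxels * alpha)}")
# A Bonferroni correction would instead demand, at every voxel:
print(f"Bonferroni per-voxel threshold: {alpha / n_voxels:.2e}")
```

Cluster-based corrections such as FSL's Gaussian random field approach are less conservative than Bonferroni because neighbouring voxels are not independent.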
kind regards,
luca
