Science topic

Functional Neuroimaging - Science topic

Explore the latest questions and answers in Functional Neuroimaging, and find Functional Neuroimaging experts.
Questions related to Functional Neuroimaging
  • asked a question related to Functional Neuroimaging
Question
1 answer
Hi all,
I've found that interstimulus intervals (rest conditions) can be about 4 to 9 s in event-related designs using fNIRS (compared with block designs, which need 15 to 20 s intervals between stimuli). However, I've not been able to find the detailed reasoning behind this. Does anyone have related research or information I can look into?
Thank you in advance.
Relevant answer
Answer
Maybe this paper will help here: https://pubmed.ncbi.nlm.nih.gov/17258472/
cheers
Michael
  • asked a question related to Functional Neuroimaging
Question
4 answers
I have a large longitudinal dataset comprising both healthy and clinical groups. The sample size drops off much more steeply over time in the clinical subsample than in the healthy subjects. There are missing data throughout the dataset across different measure types, and the amount of missingness varies by measure and time point. What are some of the best ways to address missing data for longitudinal modelling of neuroimaging, functional neuroimaging, and behavioural data? Which imputation methods are most robust?
Relevant answer
Answer
Assuming the mechanism is "missing at random" (MAR) or "missing completely at random" (MCAR), you could use full information maximum likelihood (FIML) or multiple imputation to address missing scores. Both methods are based on the same mathematical theory and often lead to similar results.
Enders, C. K. (2010). Applied missing data analysis. Guilford Press.
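As an illustration of the multiple-imputation route, here is a minimal Python sketch using scikit-learn's IterativeImputer; the variable names and toy data are assumptions, not part of the answer above.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.RandomState(0)
df = pd.DataFrame(rng.normal(size=(50, 4)),
                  columns=["memory_t1", "memory_t2", "fc_dmn", "age"])
df.iloc[rng.choice(50, 10, replace=False), 1] = np.nan  # inject missingness

# Draw several imputed datasets; the analysis is run on each and then pooled.
imputed_sets = []
for seed in range(5):
    imputer = IterativeImputer(sample_posterior=True, random_state=seed)
    imputed_sets.append(pd.DataFrame(imputer.fit_transform(df),
                                     columns=df.columns))
Pooling the analyses across the imputed datasets (Rubin's rules) is what distinguishes multiple imputation from single imputation; FIML instead handles the missingness inside the model fit (e.g. in SEM software).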
  • asked a question related to Functional Neuroimaging
Question
6 answers
I am trying to program fMRI (BOLD) data processing in SPM12 using MATLAB. I have four stimuli, and each participant should attend and respond to each stimulus. When I set up the conditions for the 1st-level analysis, some runs lack some of the stimuli (some stimuli were not presented). For example, there is no second stimulus in run 2, although it was used in the other runs.
Now, should I define an empty vector/zero value for "Onsets of the Condition", or should I simply not define a second-stimulus condition for run 2?
Relevant answer
Answer
Hello Amin,
I just wonder whether the empty array is still working for you?
With SPM 12 v. 7487 I get a message:
Item 'Onsets', field 'val': Size mismatch (required [Inf 1], present [0 0])
This suggests that at least in the somewhat recent SPM versions there is an internal check for the size of the onsets array.
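One practical workaround (not from this thread, just a hedged sketch) is to build the condition lists per run and simply omit conditions that have no events in that run, for example by writing SPM's "multiple conditions" .mat file from Python. The onset values, file name and 2 s duration below are made-up placeholders; note that dropping a condition in one run means the contrast vectors for that run must be adapted accordingly.
import numpy as np
from scipy.io import savemat

run_onsets = {"stim1": [4.0, 20.0, 36.0],
              "stim2": [],              # condition absent in this run
              "stim3": [12.0, 28.0]}

def to_cell(items):
    # a 1-D object array is written by savemat as a MATLAB cell array
    cell = np.empty((len(items),), dtype=object)
    for i, item in enumerate(items):
        cell[i] = item
    return cell

names, onsets, durations = [], [], []
for name, times in run_onsets.items():
    if not times:
        continue  # skip the empty condition instead of passing an empty onset vector
    names.append(name)
    onsets.append(np.asarray(times, dtype=float))
    durations.append(np.full(len(times), 2.0))  # assumed 2 s events

savemat("run02_conditions.mat", {"names": to_cell(names),
                                 "onsets": to_cell(onsets),
                                 "durations": to_cell(durations)})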
  • asked a question related to Functional Neuroimaging
Question
3 answers
Dear colleagues.
Hello, I am currently studying brain connectivity in a disease group.
Recently, I constructed a connectivity matrix from each participant's neuroimaging data (diffusion tensor imaging), then ran an edge-wise correlation analysis against a neuropsychological score using a tool similar to network-based statistics (a.k.a. NBS; Zalesky et al., 2010).
As a result, I obtained an edge-level network consisting of 10 edges across 19 nodes.
The significant edges are not connected to each other; they were identified as isolated single edges.
Conventionally, I have used graph-theoretical measures such as degree or betweenness centrality to define hub nodes (e.g., hub region = betweenness centrality more than 1 SD above the mean across the nodes of the network).
In this case, however, I have an edge-level network that can hardly be called clustered or connected, since it consists of multiple isolated edges.
From here, I want to emphasize the more important edges or nodes within the identified network as hub regions (it is hard to call them hubs, but at least for ease of comprehension), and I am struggling with which approach to take.
All discussion and suggestions are welcome.
If I am misunderstanding anything, please give me feedback or comments.
Thank you in advance!
Jean
Relevant answer
Answer
For example, you could devise a measure based on clustering coefficients.
You could consider all vertices that lie within a certain maximum shortest-path distance and count all the edges among them.
It can also be interesting to determine the number of cliques a vertex takes part in, or the distribution of the number of vertices in those cliques, or the overlap percentages of any pair of cliques a vertex belongs to.
Regards,
Joachim
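For illustration, here is a small Python sketch of some of the node-level scores mentioned in this thread (degree, betweenness, cliques), applied to a made-up 10-edge, 19-node network; the edge list is purely hypothetical.
import networkx as nx

edges = [(1, 5), (1, 9), (2, 7), (3, 11), (4, 12),
         (5, 14), (6, 15), (8, 16), (10, 17), (13, 18)]  # hypothetical NBS edges
G = nx.Graph(edges)

degree = dict(G.degree())                   # number of significant edges touching each node
betweenness = nx.betweenness_centrality(G)  # largely uninformative in a near-disconnected graph
cliques = list(nx.find_cliques(G))          # maximal cliques (mostly simple pairs here)

# With so few, mostly isolated edges, node degree is often the only meaningful ranking.
hub_candidates = [node for node, d in degree.items() if d >= 2]
print(hub_candidates)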
  • asked a question related to Functional Neuroimaging
Question
12 answers
My friend is looking for coauthors in the Psychology & Cognitive Neuroscience field. Basically, you would be responsible for paraphrasing, creating figures, and collecting references for a variety of publications. Please leave your email address if you are interested. 10 hours a week is required, as there are a lot of projects to be done!
Relevant answer
Answer
Will message you.
  • asked a question related to Functional Neuroimaging
Question
3 answers
I wish to analyse data for resting-state functional connectivity and for DTI.
  • asked a question related to Functional Neuroimaging
Question
7 answers
I need a database of brain CT images (both normal and abnormal cases) for calculating the midline shift in CT images.
  • asked a question related to Functional Neuroimaging
Question
1 answer
I have been working with FSL software (melodic & dual_regression) to parcellate intrinsic connectivity networks at both the group and subject level. I successfully ran dual_regression and have the output z-maps for each subject and component. I want to create a summary statistic for each output z-map so that I can compare differences across participants within a dimensional framework. I am not interested in analyzing subgroups, even though the FSL documentation is all about group differences.
I am wondering how I could calculate a summary statistic for each z-map that represents the network strength/integrity for a given subject. I basically want to know how over- or under-expressed a given network is for each subject. I want to use this approach to test a hypothesis regarding individual differences in symptoms of depression. I have read about this in a few books and papers, but no one has explained how to quantify these subject-level networks from NIfTI files into numeric summary scores that could be entered into a multiple regression.
Attached here is my group ICA output, which was manually capped at 30 components (melodic_IC.nii.gz), and a couple of dual-regression output files (original and z-maps) from two subjects to give you an idea of what I am working with. The 3rd component is of most interest, as it most resembles the default mode network. Please help, someone!
Relevant answer
Answer
Upon more thinking and reading, I think the easiest solution is to use fslmeants to simply extract the mean beta weight across all voxels from the stage-2 output maps. I did multiple sanity checks and found that the mean values and individual beta maps are fairly comparable (see the bottom row of the attached screenshot).
It also helps to apply smoothing prior to the group ICA (kernel = 3 mm) to get cleaner components that more closely resemble functional networks, in case anyone runs into this problem in the future.
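For anyone wanting to script the same idea without fslmeants, here is a rough Python sketch; the file names, component index and z > 3 threshold are assumptions, not part of the answer above.
import numpy as np
import nibabel as nib

# Threshold component 3 (0-based index 2) of the group ICA into a network mask
group_ic = nib.load("melodic_IC.nii.gz").get_fdata()[..., 2]
mask = group_ic > 3.0

# Mean stage-2 z value within that mask for one subject: a single summary score
subject_z = nib.load("dr_stage2_subject00000_Z.nii.gz").get_fdata()[..., 2]
network_score = subject_z[mask].mean()
print(network_score)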
  • asked a question related to Functional Neuroimaging
Question
4 answers
I'm looking for publications on the alleviation of fear (or any negative emotion), not from a therapeutic but rather from a functional, cognitive approach. Ideally some fMRI recordings of subjects experiencing a feeling of relief.
If the reverse has been done on the feeling of disappointment, that'd be great too.
I'll take anything from cognitive neuroscience, neuropsychology, anatomical clinical method, and cognitive psychology.
My eternal gratitude to whomever can help.
Relevant answer
Answer
There are many books out there on the control of emotion. Work I did years ago delineated how the training of combat soldiers had such an effect.
  • asked a question related to Functional Neuroimaging
Question
4 answers
Hello,
I am writing my thesis on the effects of mindfulness meditation on reading in dyslexic children and I am looking for research on how reading works in the brain (what areas of the brain are activated, what mechanisms are involved...) that I would like to relate to the areas of the brain that are activated during meditation.
I know that now, thanks to neuroimaging, we have more accurate data but I can't find any solid documents on it.
Thank you for reading my message and I hope to find answers here.
Good continuation in your work and research.
Sandrine BRASSE
Relevant answer
Answer
I'll try to answer with what I studied for my neurology exam.
Reading is a really complex function which, in the typical person, draws on several different brain abilities, such as auditory, visuospatial and logical ones.
Basically, in an individual without hearing problems, reading, whether aloud or silently, means transforming the graphemes into their corresponding sounds and, after having "heard" them, understanding them through Wernicke's area and the insular language centre.
People who have had hearing problems since birth are probably able to skip this two-phase process and send the graphemes directly to be matched with a meaning in the language areas, without converting them into sound.
It seems clear that the temporoparietal networks, where graphemes are recognized as meaningful visual elements, must function properly to support this process, and it is here, where the visual part of reading happens, that dyslexia arises if something goes wrong.
You can find this information in Adams and Victor's Principles of Neurology.
Good luck for your thesis!
  • asked a question related to Functional Neuroimaging
Question
5 answers
Most researchers use fMRI to analyze dynamic functional connectivity networks. I want to know whether it is possible to use EEG as well and, if so, what the advantages and disadvantages of EEG are compared with fMRI, apart from EEG's temporal resolution.
Relevant answer
Answer
Hi, EEG is already used for dynamic connectivity analysis, and in my opinion it is a much better tool for this (because of its temporal resolution). Neuronal connectivity has also been studied in EEG for much longer than in fMRI (although under different names, such as coherence or DTF). There are a few things that you have to keep in mind to do this correctly:
1. You need high-density EEG - at minimum 32 channels; 64 or more is recommended.
2. Very good data quality (i.e. high SNR).
3. There are a lot of papers suggesting that EEG connectivity analysis should be performed in source space and not sensor space, so the data first have to be projected into source space.
I think FieldTrip has some solutions for EEG connectivity analysis if you need a freeware tool. You might also check BESA Connectivity.
PM me in case you need any more info about BESA Connectivity (I do not want to make advertisements here :)
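As a toy, sensor-space illustration of one classical spectral connectivity measure (the answer above rightly recommends working in source space; the sampling rate and signals below are simulated):
import numpy as np
from scipy.signal import coherence

fs = 250.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.RandomState(0)
ch1 = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.randn(t.size)
ch2 = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * rng.randn(t.size)

f, coh = coherence(ch1, ch2, fs=fs, nperseg=512)
print(f[np.argmax(coh)], coh.max())          # coherence peaks near 10 Hz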
  • asked a question related to Functional Neuroimaging
Question
2 answers
I am performing Granger causality analysis on a dataset of fMRI data. I defined a set of regions of the network, which show up beautifully in the RFX-GLM map, and a control white-matter region of the same size as the other regions of interest.
I developed a pipeline of analysis based on MVGC (https://users.sussex.ac.uk/~lionelb/MVGC/html/mvgchelp.html), which considers a number of subjects and runs for the calculation, and exports a matrix of significant connections between the regions defined.
What I struggle to understand is the following: a very statistically strong connection shows up between one of my interest regions and the control one. Even after correcting the results with a number of surrogate methods, the connection persists.
I would like an opinion on this matter! Is the analysis invalidated by this result? Or is there something that could explain a strong connection with a region which is no more than noise and uncorrelated with all other regions?
Thank you in advance!
Relevant answer
Answer
Dear Edgar Guevara , the fMRI data is not resting state, but rather from a visual motion perception task. I forgot to mention that in the text.
  • asked a question related to Functional Neuroimaging
Question
7 answers
I'm currently designing a pipeline for the pre-processing of MEG data in FieldTrip but keep having some issues with events. Having compared the data loaded in FieldTrip with that loaded in Brainstorm, I've found that FT fails to load the complete number of samples (and events).
Has anybody else had this problem, and if so how did you fix it?
Thanks
Relevant answer
Answer
Hi,
As said, and as expected, the underlying functions devoted to .fif reading should be the same in both toolboxes.
If you want to dig deeper, take a look at how the example file mne_ex_read_raw uses fiff_setup_read_raw.
So, if I do this now on a random fif file on my computer:
info = fiff_setup_read_raw(fileName)
the output is:
Opening raw data file blahblah.fif...
Read a total of 13 projection items:
generated with autossp-1.0.1 (1 x 306) idle
[13 times repeated] ...
Range : 16000 ... 654999 = 16.000 ... 654.999 secs
Ready.
info =
fid: -1
info: [1x1 struct]
first_samp: 16000
last_samp: 654999
cals: [1x467 double]
rawdir: [1x639 struct]
proj: []
comp: []
As you can see, the first sample in my measurement is 16000 and the last is 654999, but I'm neither surprised nor worried by this behaviour: it happens just because, in this particular acquisition file, exactly 16 seconds passed between pressing the "Go" button and ticking "Record raw". During those 16 seconds very likely someone just checked the HPI, scrolled through the MEG channels to take a look, and then pressed "Record raw". You will get these same numbers if you use Elekta's own "show_fiff" utility on the acquisition console. This is a "Finnish peculiarity" - I don't know what else to call the fact that in the final file they start counting from 16000 instead of 0 ...
So, my take is that you had 10 seconds (at 1 kHz sampling) before ticking the "Record raw" check box. *But* if you say that you have missing conditions/trials/triggers in your final list, then the only answer I have is that someone started the stimulation paradigm *before* ticking "Record raw": this happens to me once in a while with young students. Don't put the blame on the analysis software or on the acquisition; go back to the lab and slap someone randomly ... :-)
HTH
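If it helps, the same first/last-sample bookkeeping can be cross-checked from Python with MNE; the file name and the usual Elekta trigger channel name below are assumptions.
import mne

raw = mne.io.read_raw_fif("blahblah.fif", preload=False)
print(raw.first_samp, raw.last_samp)            # e.g. 16000 and 654999 as above
events = mne.find_events(raw, stim_channel="STI 014")
print(len(events))                              # count of trigger events actually recorded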
  • asked a question related to Functional Neuroimaging
Question
4 answers
I am running a script based on different batches in SPM on a dataset with 26 participants (2 sessions and 3 runs). This dataset has been analysed before in FSL (2 publications with statistical analysis) and I am doing a reanalysis using SPM12. The code runs smoothly for some of the participants: it does the spatial preprocessing and the processing by defining a GLM, and then moves on to DCM to extract DCM.Ep.A (I have modified spm_run_fmri_spec in config so that it stops asking for confirmation before overwriting the SPM file).
Can this be because for regressing out the WM and CSF I have two different jobs? This is the same when I want to extract the four VOIs.
I have done this according to the sample script in the practical example for rs-DCM.
Unfortunately, it works well with the exception of a few participants/runs. 
The error message is:
------------------------------------------------------------------------
Running job #1
------------------------------------------------------------------------
Running 'Volume of Interest'
Warning: Empty region.
> In spm_regions (line 155)
In spm_run_voi (line 63)
In cfg_run_cm (line 29)
In cfg_util>local_runcj (line 1688)
In cfg_util (line 959)
In spm_jobman>fill_run_job (line 458)
In spm_jobman (line 247)
In spDCM_fun_test (line 207)
Failed 'Volume of Interest'
Reference to non-existent field 'v'.
In file ".../spm12/config/spm_run_voi.m" (v6301), function "spm_run_voi" at line 76.
The following modules did not run:
Failed: Volume of Interest
Error using MATLABbatch system
Job execution failed. The full log of this run can be found in MATLAB command
window, starting with the lines (look for the line showing the exact #job as
displayed in this error message)
------------------
Running job #1
------------------
Is there anything that I should do to fix this? 
On a different note, I had to modify the code in DCM specification:
Sess = SPM.Sess(xY(1).Sess);
as this was causing errors because xY(1).Sess is sometimes 2, not 1. Therefore, I changed it to:
Sess = SPM.Sess(1);
Is it going to be problematic? And, if yes, is there any other way of fixing the issue?
I have three "for" loops for the dataset, one for the subjects, one for the sessions, and the last one for the runs... I managed to get the DCM.mat for 16 subjects out of 26 (and then it stops for the error I get in the last section). Then I am just taking the average of the DCM.Ep.A for each subject in each session (which I am not sure is the right thing to do - the alternative method can be doing a BMS RFX to find out which model is best and just use the DCM.Ep.A for that model). The data includes samples taken from 26 subjects in two sessions and three runs.
Unfortunately, the RFX BMS does not seem to be working outside the GUI (when I change the values) either. The script generated by the batch seems to be like (I have changed it slightly):
clear matlabbatch;
mkdir([session_folder_name '/func/BMS']);
matlabbatch{1}.spm.dcm.bms.inference.dir = ...
    cellstr([session_folder_name '/func/BMS']);
matlabbatch{1}.spm.dcm.bms.inference.sess_dcm{1}.dcmmat{1,1} = ...
    cellstr([session_folder_name '/func/Run01/GLM/DCM_DMN.mat']);
matlabbatch{1}.spm.dcm.bms.inference.sess_dcm{1}.dcmmat{2,1} = ...
    cellstr([session_folder_name '/func/Run02/GLM/DCM_DMN.mat']);
matlabbatch{1}.spm.dcm.bms.inference.sess_dcm{1}.dcmmat{3,1} = ...
    cellstr([session_folder_name '/func/Run03/GLM/DCM_DMN.mat']);
matlabbatch{1}.spm.dcm.bms.inference.model_sp = {''};
matlabbatch{1}.spm.dcm.bms.inference.load_f = {''};
matlabbatch{1}.spm.dcm.bms.inference.method = 'RFX';
matlabbatch{1}.spm.dcm.bms.inference.family_level.family_file = {''};
matlabbatch{1}.spm.dcm.bms.inference.bma.bma_no = 0;
matlabbatch{1}.spm.dcm.bms.inference.verify_id = 0;
spm_jobman('run',matlabbatch);
However, this gives me an error as well (The Model seems to be empty and I cannot fix it, even though all the variables and the structure in the script above seem to be fine).
On a different note, I was wondering whether there is a way of finding out which model is the best one, other than just looking at the graphs generated using the GUI - I realised that this is a question some of the other SPM users in the list have as well. If not, it seems like I have to run the script for each session and each subject and look at the graphs... which does not seem reasonable. If there is a quantitative way of finding the best model for each session in BMS, then I can easily select the DCM.Ep.A for that model and use it in my further analysis. Also, what if the best model is different for each subject, e.g., model 3 for subject 1, model 2 for subject 8? Can they then be used in the analysis?
Many thanks and apologies for the very long email with millions of different questions!
Thanking you in advance,
Amir
Relevant answer
Answer
Hi Amir,
Concerning the BMS, you could have a look at the VBA toolbox (http://mbb-team.github.io/VBA-toolbox/wiki/BMS-for-group-studies/). It is independent of SPM, so you will have to extract the model evidence of each estimated model-subject (DCM.F if I remember correctly), but you can script all types of model comparison (RFX, between condition, between groups).
Also, as a general rule, and even if all subjects are best described by the same model, always work with the Bayesian Model Average (BMA) for your post-hoc analyses on the parameters.
I hope this helps.
Best
Lionel
  • asked a question related to Functional Neuroimaging
Question
5 answers
Dear all,
I've been using ROI analysis for fMRI data for a while, to restrict the search to regions of interest supported by previous data or the literature.
But SPM has an SVC button with similar logic behind it. So what are the differences between the two approaches? Thanks a lot.
Andy
Relevant answer
Answer
As far as I know, when you use ROI analysis you get one averaged value per ROI (as if the ROI were one big voxel; you assume that all the voxels in the ROI perform the same function). When you perform SVC, you still have all your voxels with their values inside the volume of interest, but because you have chosen that volume for analysis you have fewer voxels than in the whole brain, so activation within the volume of interest is more likely to survive correction for multiple comparisons. I think this is the main difference.
  • asked a question related to Functional Neuroimaging
Question
1 answer
I want to do neuronal imaging of the PFC of mice and I need head plates I can attach to the animals' heads to keep them stable during my awake experiments. Does anyone know a company that sells these commercially?
Thanks
Relevant answer
Answer
You'd still need to figure out how to hold the head plate in place (assuming you didn't want to buy Neurotar's system). 
  • asked a question related to Functional Neuroimaging
Question
2 answers
I am using the PsychoPy Builder and created a Code component to make a progress bar, but I can't seem to simplify the code so that, when the progress bar expands, I don't have to keep writing the same code over and over...
In Begin Experiment I have:
ProgVertices = [[-.5,0],[-.5,.5],[-.2,.5],[-.2,0]]
counter = 0
Test1 = visual.ShapeStim(win, lineColor = 'green', vertices = ProgVertices, pos = (0,0), autoDraw = True)
In Each Frame tab I have:
if counter < 10:
    core.wait(1)
    counter = counter + 1
if counter >= 6:
    ProgVertices[2] = [0.5,0.5]
    ProgVertices[3] = [0.5,0]
    Test1 = visual.ShapeStim(win, lineColor = 'green', vertices = ProgVertices, pos = (0,0), autoDraw = True)
This works, but it feels like redundant code if I wanted to update the bar a little every second (as opposed to just at 6 seconds). Any help to simplify the code would be appreciated!
Thanks
Relevant answer
Answer
I am not sure that I understand the question correctly, but you can always write recursive code in a function and then simply call the function. Otherwise you can build a whole class around the progress bar and update the class whenever needed.
If that does not answer your question, a little more detail to your problem would be beneficial.
Cheers,
Martin
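Along the lines of the function/class suggestion, here is one hedged sketch of avoiding the repeated blocks: keep a single ShapeStim and update its vertices every frame in proportion to elapsed time. The 10 s duration and bar geometry come from the question; `win` is assumed to be the window Builder creates, and the vertex-attribute update is an approach to test rather than a guaranteed recipe.
from psychopy import visual, core

# Begin Routine (win is created by Builder)
progress = visual.ShapeStim(win, lineColor='green', fillColor='green',
                            vertices=[[-.5, 0], [-.5, .5], [-.5, .5], [-.5, 0]],
                            autoDraw=True)
barClock = core.Clock()
totalTime = 10.0                               # seconds for the bar to fill

# Each Frame
frac = min(barClock.getTime() / totalTime, 1.0)
rightEdge = -.5 + frac                         # bar grows from x = -.5 towards x = .5
progress.vertices = [[-.5, 0], [-.5, .5], [rightEdge, .5], [rightEdge, 0]]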
  • asked a question related to Functional Neuroimaging
Question
7 answers
I'm interested in pain perception during pregnancy, and wondering if there are any safe and acceptable methods of functional neuroimaging in this population? Thanks. 
Relevant answer
Hi Rhiannon,
In my experience, MEG and EEG are friendly techniques in the case of pregnancy because they only sense the electromagnetic activity emerging from the brain. As Stephen mentioned, MEG has already been performed in fetal studies. Personally, I worked with magnetocardiography in pregnant women to record the cardiac activity of multiple fetuses (twins and triplets) over weeks without secondary effects (Processing the magnetocardiographic signal in the identification of fetal and maternal heart beats in a triplet pregnancy).
However, in the case of CT, fMRI, PET and NIRS I have my doubts.
Klein and Hsu wrote an interesting article touching on this topic. It can be a useful reference for you. Take a look: Neuroimaging during pregnancy (http://www.ncbi.nlm.nih.gov/pubmed/22113508).
Here you have another reference about fetal MRI: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4515352/ .
There are already studies and guidelines for some neuroimaging techniques applied during pregnancy:
Compendium of National Guidelines for Imaging the Pregnant Patient: http://www.ajronline.org/doi/abs/10.2214/AJR.10.6351
Imaging of Pregnant and Lactating Patients: Part 2, Evidence-Based Review and Recommendations: http://www.ajronline.org/doi/abs/10.2214/AJR.11.8223
On the other hand, here is an article about side effects:
I hope this information is useful to you. Probably you have already read those articles.
Have a nice day. All the best for your research !
  • asked a question related to Functional Neuroimaging
Question
7 answers
In computer science, to validate our work, we use a set of benchmark functions and compare the results with previous work. In the neuroimaging field, how can I validate my work if there is no previous work?
Relevant answer
Answer
To me, generating data with different methods and checking the consistency of the results is a good way to validate the robustness of your method. Maybe our fields are different; in our field (neuroscience), robustness is not enough, and people would like to see results showing that your finding is not driven by noise. To do so, you may need to test the biological meaning directly (e.g., by changing the subject's status to see whether the status change modulates your result) or indirectly (e.g., by correlation with previously widely used metrics). These relate to "validity". Hope it helps.
  • asked a question related to Functional Neuroimaging
Question
6 answers
I have some precious data in which the functional EPI shows distortion in the inferior frontal lobe. This mismatch between the functional and structural data will definitely hurt the rigid-body coregistration and ultimately lead to poor normalization.
So, is it reasonable to do a nonlinear coregistration between the functional and structural data? And what MATLAB-based software could do this?
Any suggestion is appreciated!
Relevant answer
Answer
Dear Xue,
If you don't have a fieldmap, you might try to use the SPM12 normalization. This works best when your images are more or less spatially aligned.
-Segment the T1-weighted image (with SEGMENT in SPM12)
-Combine the generated tissue maps into a 4D NIfTI file (e.g. with fslmerge)
-Use the combined images as tissue probability maps in the SPM normalization
-As the data, it is best to select the mean functional image
-When your images are more or less aligned, select 'no affine registration' under 'affine regularisation'.
-Run and hope for the best
Kind regards,
Mathijs Raemaekers
  • asked a question related to Functional Neuroimaging
Question
6 answers
Hello,
I already asked this question in the Brainmap forum which belongs to the GingerALE software. However, so far I haven't got any reply over there and I'm in need of an answer. I hope you can help me out with this.
I'm quite new to GingerALE and I hope to get some help on an issue I've come across. I have done an ALE analysis under supervision before and now I did the first one on my own.
This is how I proceeded:
First, I gathered all relevant coordinates and transformed the ones reported in Talairach space into MNI space using the non-linear transformation by Lacadie et al. (http://sprout022.sprout.yale.edu/mni2tal/mni2tal.html), as previously recommended by my supervisor. Then I ran the analysis in MNI space. I am currently checking the results and noticed that the labels and Brodmann areas depicted in the output Excel sheet are different from the ones I get when I type the coordinates into the web application by Lacadie. For instance, with the MNI coordinates -50, -38, 24 from the ALE output file, I get left BA40 (inferior parietal lobule) on the Lacadie website, while the coordinates are labelled as left BA13 (insula) in the Excel sheet (which is quite a difference).
Should I trust the labels in the ALE output file or should I double check them using another application for identifying the Brodmann areas? If so, do you have any recommendations?
Relevant answer
Answer
I guess I found the reason for the discrepancy between the ALE output file and the Yale web application. In the output file, the nearest gray matter is depicted, while the web application shows me the single point. When testing for both with the Talairach daemon, I get Insula (when searching for the nearest gray matter) and inferior parietal lobule (when searching for the single point) using the same coordinates. 
  • asked a question related to Functional Neuroimaging
Question
12 answers
I'm designing an EEG/fNIRS experiment looking at the neural responses to true vs false sentences, but want to avoid the confounding factor of incongruity.  Is there a measure of incongruity?  How do I distinguish between sentences that are:
1.  false but not incongruous
2.  false and incongruous
Relevant answer
Answer
A false sentence is simply untrue, e.g., "He was arrested for giving false information on his application for a passport." Incongruous means unsuitable, inappropriate, inconsistent, inharmonious, incompatible, conflicting, or discordant. For example: "It seems incongruous to have an out-of-shape and overweight editor of a fitness magazine."
  • asked a question related to Functional Neuroimaging
Question
3 answers
Hi all!
I am a budding researcher in the field of fNIRS and its applications specific to cognitive neuroscience.
We are in the process of designing a dual channel CW fNIRS system for signal acquisition.
Can anyone please tell me how and where to incorporate the conversion of the raw light intensity time series of NIRS data into concentration changes of oxy- and deoxy-Hb using the modified Beer-Lambert law?
 Any help from your side would be greatly appreciated.
Thank you.
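In case a concrete starting point helps, here is a rough numerical sketch of the modified Beer-Lambert step for a two-wavelength CW system. The extinction coefficients, 3 cm source-detector separation and DPF values below are placeholders, not calibrated constants; they must be replaced with properly referenced values for your wavelengths.
import numpy as np

def mbll(intensity, baseline, ext, distance_cm, dpf):
    # intensity: (n_samples, 2) raw light intensity at the two wavelengths
    # baseline: (2,) baseline (rest) intensity per wavelength
    # ext: 2x2 matrix [[eps_HbO(l1), eps_HbR(l1)], [eps_HbO(l2), eps_HbR(l2)]]
    delta_od = -np.log(intensity / baseline)        # change in optical density per wavelength
    path = distance_cm * np.asarray(dpf)            # effective pathlength per wavelength
    lhs = ext * path[:, None]                       # delta_OD = (ext * path) @ [dHbO, dHbR]
    return np.linalg.solve(lhs, delta_od.T).T       # (n_samples, 2): dHbO, dHbR

# Toy usage with made-up numbers
rng = np.random.RandomState(0)
raw = 10.0 + 0.1 * rng.randn(100, 2)                # raw light intensity, two wavelengths
baseline = raw[:20].mean(axis=0)                    # baseline intensity
ext = np.array([[1.5, 3.8],                         # placeholder extinction coefficients
                [2.5, 1.8]])
d_conc = mbll(raw, baseline, ext, distance_cm=3.0, dpf=[6.0, 6.0])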
  • asked a question related to Functional Neuroimaging
Question
9 answers
I am wondering if anyone can point me to an experiment where the cognitive load can be considered a continuous function.
I have used the delayed item recognition task, where cognitive load is the number of letters to remember. This task therefore has a relatively small number of possible levels of load (1 through 6 or 8). Does anyone know of a task where there are more possible increments of cognitive load between very easy and too difficult to answer correctly? I am not restricting myself to verbal short-term memory.
This would be a task that is appropriate for a trial based fMRI experiment.
Thank you
Relevant answer
Answer
Hi Jason,
Have you considered a mental arithmetic task? The difficulty of serial subtractions can be varied by adjusting the length of the numbers in the subtrahend and/or minuend. Difficulty might also be manipulated by adjusting the time allowed to provide an answer, but this could introduce unwanted emotional activation.
Best,
Mal
  • asked a question related to Functional Neuroimaging
Question
4 answers
How can I wire up a cable to allow the Magstim Bistim TMS to produce a trigger to start data recording ? What pin number in the Isolated Interface port (26D pin) on the back is the +output for the trigger?
Relevant answer
Answer
Sorry, I'm not familiar with those instruments. Most companies can answer simple questions by phone or e-mail. Failing that, a sales representative may be willing to stop by and give some advice.
Robert M. Reinking
  • asked a question related to Functional Neuroimaging
Question
4 answers
Hi, I am studying fMRI processing.
I am trying to figure out how to calculate the "cross-correlation coefficient" in fMRI processing, but the definition of "cross-correlation coefficient" is confusing to me.
Google did not teach me about the "cross-correlation coefficient", so I searched for "cross-correlation". According to Google, cross-correlation is a FUNCTION of lag, showing the similarity between two signals. But this does not look like a cross-correlation "coefficient", because a coefficient is supposed to be a single representative value, right?
What confused me about the definition of "cross-correlation coefficient" is the paper Bandettini, MRI, 1991; I found many papers citing it. That paper describes a "cross-correlation image" roughly as "an image that has Pearson's correlation coefficient pixel by pixel, multiplied by a ratio of the magnitudes of some vectors". The two correlation terms sound similar but seem to have differences that justify the different names.
Please give me a hint about where I can clarify the cross-correlation coefficient in fMRI processing, or point me to any helpful papers.
Thank you in advance for any help :)
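For what it's worth, here is a toy numeric illustration (simulated data) of the distinction: the per-voxel "cross-correlation coefficient" is a single normalized correlation between the voxel time series and a reference waveform, while evaluating that same quantity over a range of lags gives the cross-correlation function.
import numpy as np

rng = np.random.RandomState(0)
n = 120
reference = np.sin(2 * np.pi * np.arange(n) / 20)     # idealized task waveform
voxel = 0.8 * reference + 0.5 * rng.randn(n)          # noisy voxel time series

cc_coefficient = np.corrcoef(reference, voxel)[0, 1]  # one value (zero lag)

def cc_function(x, y, max_lag=10):
    # the same coefficient computed at each lag gives the cross-correlation function
    return [np.corrcoef(x[:len(x) - k], y[k:])[0, 1] for k in range(max_lag + 1)]

print(cc_coefficient, max(cc_function(reference, voxel)))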
  • asked a question related to Functional Neuroimaging
Question
1 answer
Is there any specific reason why the auto-gain adjustment value is set to 4000?
Relevant answer
Answer
Yes: 4000 mV at the detector (due to the light intensity) is the analog-to-digital (A/D) converter limit, and higher values will result in saturation. You need to adjust the LED current and brightness to lower the light intensity at the detector.
  • asked a question related to Functional Neuroimaging
Question
7 answers
I've recently been working with ERPLAB and I'm looking for the best way to deal with ocular artifacts (I'm definitely not an expert...).
I have two EOG electrodes: one at the canthus, the other below the same eye. I saw that there are several options for artifact detection: step-like, moving window, voltage threshold, etc. My first idea was to use the step-like and/or moving-window functions, but I don't know if that's the best way.
I also wonder whether I should perform any channel operations with my EOG electrodes or not.
Any ideas/advice?
Relevant answer
Answer
In my opinion, the best way is to have enough trials so that you can compute your statistical analyses only on the remaining artefact-free trials...
  • asked a question related to Functional Neuroimaging
Question
5 answers
Hi all, say you are modelling a task using a first-level model in SPM. The task consists of three conditions: congruent, incongruent, and control. When setting up the contrasts for this model, how would one properly weight the conditions for the following contrast: [congruent + incongruent] > control? Would this be 0.5 0.5 -1? On a related note, would the contrast [congruent + incongruent] > baseline be 0.5 0.5 0, or 1 1 0? I am confused, as I've read that positive (and negative) contrast weights should sum to 1, but I've seen people use contrasts such as 1 1 0 0 etc. for certain contrasts, which obviously do not sum to 1. Any input appreciated!
Relevant answer
Answer
Hi
If you want to model the contrast (congruent + incongruent) > control using a T contrast, you could use either the vector 1 1 -2 or 0.5 0.5 -1 (the scaling does not change the t statistic), and if you want to model the opposite (i.e. control > (congruent + incongruent)) you flip all the signs (i.e. -1 -1 2 or -0.5 -0.5 1). A contrast such as 1 1 0 instead tests the two conditions against the implicit baseline rather than against the control condition; vectors like that can also appear as F contrasts, which are a little different and are used to test hypotheses about general effects, independent of the direction of the contrast.
Hope this helps!
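A tiny numeric check of the two equivalent scalings mentioned above (toy beta values, ordered [congruent, incongruent, control]):
import numpy as np

beta = np.array([2.0, 2.4, 1.0])
print(np.array([1, 1, -2]) @ beta)        # 2.4
print(np.array([0.5, 0.5, -1]) @ beta)    # 1.2: same sign and same t statistic, half the scale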
  • asked a question related to Functional Neuroimaging
Question
7 answers
Hello everyone,
I want to know what the differences between AFNI and SPM are. Would you please tell me the pros and cons of each, and describe their applications?
I want to say thank you in advance for your kind support.
Relevant answer
Answer
hi there Muhammad,
It depends on what you want to do. I've used both SPM and AFNI a lot for fMRI. In my opinion AFNI is more versatile in terms of customizing an analysis. It has many more tools and functions for visualizing your data at an astoundingly detailed level. I believe that it's stronger with resting-state analyses (see ANATICOR, i.e. Hang Joon Jo, 2010) and with nonlinear registration (i.e. 3dQwarp), and it can do many advanced kinds of regressions (one may need 'R' installed, however). Although afni_proc.py has streamlined and standardized many kinds of analyses in AFNI, it perhaps requires a little more tcsh scripting than the average SPM user does Matlab scripting. SPM has some analyses available that no other package has (like dynamic causal modeling) and, yes, it's true there are many extensions that the SPM community has contributed ( http://www.fil.ion.ucl.ac.uk/spm/ext/ ). SPM is also stronger with modalities other than fMRI. However, it doesn't integrate as fluidly with large parallel computing clusters as AFNI does. Also, the packages have slightly different 'ideological foci', in that SPM's authors and community are a little heavier on computational modeling than AFNI's, which in turn is a little more focused on very carefully looking at and understanding one's data. Because SPM's community is larger, there are a few more workshops available around the world to learn it. Both have active listservs. After using SPM throughout my doctorate, I was quite pleased with AFNI once I got the hang of it during my postdoc. I plan to return to SPM for certain things, however. Like I said, it depends on what you want to do.
-Salvatore (Sam)
  • asked a question related to Functional Neuroimaging
Question
3 answers
Everyone is talking about neuron image analysis in 3D (stacks of 2D images), and the reasons are legitimate (reconstruction in 3D is general; it is a standard). What I wanted to ask is how much scientific labs actually work with 3D data in practice. It seems that most neuron morphology analysis in practice is carried out on in-vitro samples that are 2D. Do you have any reference where it would be possible to see the statistics on how many studies use 2D and how many use 3D neuron reconstructions to draw conclusions in neurobiology? Would anyone have anything to add from their own opinion/experience?
Relevant answer
Answer
Dear Miroslav:
I will give an indirect answer to your question. In remote sensing, we always use stacked images in the form of multi-spectral and hyper-spectral images, and there is a plethora of literature that can be adapted to your field of study. Neural networks are computational models that capture the underlying dependencies in any dataset in supervised or unsupervised modes of learning. Hopefully this answer sheds some light on your question.
Regards,
Gamal
  • asked a question related to Functional Neuroimaging
Question
8 answers
Brainstorming here...
What do people think would be the ideal experiment to measure functional plasticity with fMRI?
The idea is to have a brain derived metric of an intervention that has physiological meaning.
What would the metric derived from the fMRI signal be? I am looking to go beyond voxel counts of suprathreshold activity; such metrics are based on statistical thresholds and are therefore sensitive to the threshold chosen and to unmodeled noise. What would be an ideal experimental stimulus, and then an ideal analysis?
Relevant answer
Answer
Very good question - I'd say it depends on your definition of plasticity and the response you expect. A few thoughts -
If you expect additional regions to be recruited or existing regions to be more strongly recruited, I agree suprathreshold counts and some other measures are problematic, one approach we've tried is SPM's multivariate Bayesian decoding:
Connectivity approaches can also be useful; see our recent discussion of the related issues of compensation and reorganisation:
Lovden and others (2010) also have provided some definitions of plasticity in relation to training interventions and the kinds of neural responses to be expected:
  • asked a question related to Functional Neuroimaging
Question
6 answers
I am interested in correlating connectivity results obtained with seed correlation analysis to behavioral measures taken outside the scanner. I'm looking for some "best practices" on accomplishing this.
My initial thought is to take single-subject-level r coefficients, put them in SPSS, and simply compute a bivariate correlation with each of the behavioral measures I have. However, I'm looking at more than 20 regions, making this option rather tedious to set up. I could also just correlate the connectivities that are significantly different between groups after a t-test, but I would still end up with quite a few correlations to get into SPSS.
Does anyone have any suggestions for making this easier, perhaps with a MATLAB-based script they know of? Is there a more appropriate method of correlating connectivity results to behavioral data outside the scanner?
Relevant answer
I would recommend using the CONN functional connectivity toolbox, which allows you to view and export connectivity results with ease, and which converts r-values to normalized z-scores.
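If it's useful, here is a minimal sketch (toy data) of the generic approach outside any toolbox: Fisher z-transform each subject's connectivity value, then correlate it across subjects with the out-of-scanner behavioural score, correcting afterwards for the number of connections tested.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.RandomState(0)
n_subjects = 30
r_values = rng.uniform(-0.2, 0.6, n_subjects)   # one seed-to-ROI r per subject
behaviour = rng.randn(n_subjects)               # behavioural measure

z_values = np.arctanh(r_values)                 # Fisher r-to-z
r, p = pearsonr(z_values, behaviour)
print(r, p)
# For ~20 regions, loop over connections and correct the p-values (e.g. FDR)
# rather than setting up each correlation by hand in SPSS.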
  • asked a question related to Functional Neuroimaging
Question
3 answers
Have there been any functional imaging or similar studies that used a paradigm to produce internal conflict and identify brain regions involved?
Relevant answer
Answer
As David mentioned, what do you mean by intra-psychic conflict? For example, the dorsal anterior cingulate cortex (dACC) has been shown to be associated with conflict monitoring.
  • asked a question related to Functional Neuroimaging
Question
1 answer
I want to study aneurysm growth (in volume) in a set of subjects, based on time-of-flight (MR) images at two time points.
For each subject, I want to do a rigid registration between the time-of-flight images as well as a segmentation of the cerebral vascular system, and then compare the two registered segmentation volumes in order to quantify the aneurysm growth.
For doing this, some teams have used in-house registration tools (not distributed, as far as I know; AnToNIA, for example), and the registration is performed using a volume of interest around the aneurysm sac (not the whole brain).
I currently know of free registration tools not specific to aneurysms (FSL's FLIRT, for example: http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FLIRT). Additional question: do you think these kinds of methods are appropriate for "aneurysm registration"?
Thanks in advance,
Relevant answer
Answer
If you are willing to get into a bit of programming, you could try the vmtk libraries (http://www.vmtk.org/). The authors would probably mix in 3d Slicer for free registration and visualization. They seem now to have a commercial offering which presumably makes life simpler, but not free. 
  • asked a question related to Functional Neuroimaging
Question
4 answers
Dear All, 
I have a question about how I can use AFNI for real-time fMRI, related to the data format.
I'm receiving data from the scanner in PAR/REC format and I don't know which format I should use for AFNI real-time.
From the AFNI demo, I saw that the data were in BRIK/HEAD format.
So I was wondering: is that the only accepted format, or what other formats can be used for real-time AFNI?
Also, if not, what can I do to convert each volume from PAR/REC to BRIK/HEAD?
Thanks 
Relevant answer
Answer
The PAR/REC format is proprietary to Philips; however, I know that it is also possible to tell the system to output NIfTI (.nii).
  • asked a question related to Functional Neuroimaging
Question
3 answers
Hi all! I have 100 patients with a pASL sequence and would like to get some information about CBF. Is this actually possible with just one TI? Can I somehow calculate an arterial transit time? Because if I just measure at one time point, I do not know whether the bolus has already passed or has not yet arrived, right? Is it only useful for grey matter, or also for white matter?
Thanks for any help !
Relevant answer
Answer
Thank you Marco and Wanyong for your answers !
  • asked a question related to Functional Neuroimaging
Question
6 answers
I have confocal images of immunostained adult Drosophila brains, and I want to find out more about the neurons that are labelled in my images. I am looking for an online tool where I could ideally select a small brain area and get links to published/database info about the function of specific neurons in that brain area. Could anyone please suggest the best way to do this? Thank you!
Relevant answer
Answer
Virtual Fly Brain was certainly setting up to do something like that a couple of years ago. I've not looked into it much since, but you can try it out with the link below.
  • asked a question related to Functional Neuroimaging
Question
7 answers
We have performed group ICA in two groups of subjects (BOLD fMRI data). One group is controls; the other has very severe developmental brain abnormalities (a mixed group). In controls, ICA revealed 6 components (mostly bilateral), while in the diseased group we get 30-40 components at the group level, and the components are small and focal. Data preprocessing is the same, and data quality is also the same (although the brains themselves may show some anatomical heterogeneity as well).
I am very curious how this excessive number of ICs can be interpreted. We may assume that the diseased group has impaired, or even absent, brain functioning (e.g. due to neuronal migration disorders) in the respective areas.
Thank you.
András
Relevant answer
Answer
There is very little that the number of components found can tell you about brain functioning or connectivity, since many of the components could be related to noise or artefacts, or be vascular components. You should take some things into consideration: did you perform group ICA with temporal concatenation? Did you use automatic estimation of the number of ICs?
If you really want to compare the integrity of the resting-state networks in a group of patients vs. controls, your approach is not adequate. I suggest that you go for a whole-sample group analysis and, after identification of the real resting-state networks, apply a two-group difference analysis. Also take structural differences into account. In the end, the differences found would reflect differences in network integrity/pattern.
  • asked a question related to Functional Neuroimaging
Question
10 answers
In the analysis of fMRI data, many use small volume correction (SVC, as implemented in SPM) to restrict their search area to a given region of interest. To my understanding, one can look at both cluster-level and voxel-level statistics within SVC. However, the authors of several articles I've read do not specify if they used cluster or voxel-level statistics.
Do any of you know how people use SVC in this regard? Am I wrong in thinking that both voxel and cluster-level statistics are feasible statistics in SVC? By looking at the t/F values of several papers, I get the impression that most use cluster-level inferences in SVC, but I don't understand why they would not specifically state this, unless voxel-level statistics in some way is not suitable for SVC...
Any input appreciated!
Relevant answer
Answer
It makes no sense to use a "small volume" that includes parts of the image where brain activation cannot possibly occur, i.e. in white matter, bone or CSF. Thus the spherical "small volume" normally used in SPM is fundamentally inappropriate, since it would need to be less than 3 mm in diameter not to include WM, bone or CSF. It is greatly preferable to use a shaped small volume that includes only grey matter, from one specified and well-defined cortical area or one deep nucleus. This can only be achieved when a suitable individual-brain cortical and deep-brain parcellation has been performed. See my recent paper 'Comparing like with like: the power of knowing where you are' (Turner R, Geyer S. Brain Connect. 2014 Sep;4(7):547-57) for further details of this approach, which takes advantage of the unique sensitivity of MRI to details of myeloarchitecture and iron distribution. Using such small volumes considerably increases the experimental power, and enables well-grounded averaging across subjects and comparison between subject groups. Once this approach has been widely adopted, we will be able to place far more confidence in fMRI findings in any type of study.
  • asked a question related to Functional Neuroimaging
Question
1 answer
Hi, I want to start new research on developing machine learning methods for brain function analysis, especially for analyzing the human visual cortex. Would you please help me find some significant papers and fMRI datasets related to this subject?
Relevant answer
Answer
Hello,
Dear friend,
See these links; I hope they are useful for you.
Best regards
  • asked a question related to Functional Neuroimaging
Question
14 answers
I need a file that I can download and use for data analysis (basically, automatically determining which vascular territory a stroke has occurred in).
Relevant answer
Answer
You may be interested in these 4 entry points. Contacting these authors should narrow your search. Also be aware of the variability of the vascular territories.
I am also looking at some similar material.
Ref. 1: a probabilistic map produced by Michel Dojat in Grenoble, France (GIN, Grenoble Institut des Neurosciences, INSERM U836). The digital atlas of the blood supply territories of the brain (BST) is derived from the 12 printed serial sections in the axial plane developed by Tatu et al. The atlas fits Talairach space, and this 3D atlas is used to determine the stroke subtype.
See: Yacine Kabir, Michel Dojat, et al. Multimodal MRI segmentation of ischemic stroke lesions. Conf Proc IEEE Eng Med Biol Soc. 2007; 2007: 1595-1598.
Ref. 2: the digital probabilistic map of PCA infarcts produced by Thanh G. Phan et al. (Digital Map of Posterior Cerebral Artery Infarcts Associated With Posterior Cerebral Artery Trunk and Branch Occlusion. Stroke 2007;38:1805-1811). They register their PCA lesions to the MNI template. It is not a vascular territory map but a map of the probability of stroke.
Ref. 3: another probabilistic map, for the ICA, produced by Jae Sung Lee et al. in Seoul: Probabilistic map of blood flow distribution in the brain from the internal carotid artery. NeuroImage 23 (2004) 1422-1431.
Ref. 4: Analysis of ischemic stroke MR images by means of brain atlases of anatomy and blood supply territories. W. Nowinski et al. Academic Radiology 13(8): 1025-1034, August 2006. Their approach is an atlas-to-scan transformation (mapping), quite the opposite of what you may want to do (mapping each scan to the registered atlas).
  • asked a question related to Functional Neuroimaging
Question
5 answers
I am looking to dissociate the functional roles of the hippocampus and striatum in humans and am currently searching for potential tasks that have been used in previous studies. I am particularly interested in the dissociation of stimulus-response and action-outcome learning.
Thank you very much for your help.
Relevant answer
Answer
This is the review article that contains information and references of importance. Note that medial and lateral striatum are different in terms of anatomy, physiology, and behavioral function.
  • asked a question related to Functional Neuroimaging
Question
3 answers
In a memory experiment, which is preferable to use: images of faces or of objects? Some people say that faces are problematic because they are relevant stimuli that are not processed like other stimuli, and this could introduce non-memory-related activations in neuroimaging data. We are not interested in exploring emotion or social cognition.
Relevant answer
Answer
Hi Carmen,
Faces are always problematic due to particularities of visual perception. I think you may find these references interesting:
Little, A., DeBruine, L., & Jones, B. (2005). Sex-contingent face after-effects suggest distinct neural populations code male and female faces. Proceedings of the Royal Society B: Biological Sciences. 272, 2283 – 2287.
Loffler, G., Yourganov, G., Wilkinson, F., & Wilson, H. R. (2005). fMRI evidence for the neural representation of faces. Nature Neuroscience, 8(10), 1386-1390.
Webster, M. A., & MacLeod, D. A. (2011). Visual adaptation and face perception. Philosophical Transactions of the Royal Society B, 366, 1702–1725.
Carbon, C.C., Grüter, T., Weber, J., Lueschow, A. (2007). Faces as objects of non-expertise: Processing of thatcherised faces in congenital prosopagnosia. Perception, 36, 1635 – 1645.
Gauthier, I., & Tarr, M. J. (1997). Becoming a `Greeble' expert: Exploring mechanisms for face recognition. Vision Research, 37, 1673 – 1682.
Thompson, P. (1980). Margaret Thatcher: A new illusion. Perception, 9, 483 – 484.
Webster, M. A., Kaping, D., Mizokami, Y., & Duhamel, P. (2004). Adaptation to natural facial categories. Nature, 428, 557 – 561.
If you can't find any of them, please don't hesitate to contact me.
  • asked a question related to Functional Neuroimaging
Question
9 answers
Hi all,
I am currently analysing data from an fMRI study, with a 2x2 between-within design. Patients vs controls reflect the between factor, and congruent and incongruent trials reflect the within factor (this is a Stroop task).
After seeing an interaction effect between the two factors in several brain clusters, I wanted to extract the beta-weights from these clusters to explore the underlying effects.
I used Marsbar to extract the raw beta-weights from congruent and incongruent trials, separately (so no contrast between conditions performed), and plotted the results for my two groups.
Many of the resulting beta-weights are negative, and I wonder what this means? Does it reflect a sort of 'deactivation'?  
Any input will be highly appreciated!
Relevant answer
Answer
Most likely, in your case, when you run a general linear model (GLM) analysis, you are attempting to predict the voxel signal using a vector of onsets convolved with some hemodynamic response function. It's fundamentally the same as when you are carrying out any other linear regression: you are predicting some variable (Y) as a function of a predictor variable (X): Y = B(X) + E, i.e. Y is a function of X, plus some residual variance (E). B is simply the coefficient of the predictor variable that leads to the best estimate of Y. In the case of fMRI, X is the idealized time course for the onsets of a particular condition. When B is high for a voxel, that means that the activation for that voxel closely follows the idealized time course for that condition. If B is near 0, that means there is little to no relationship between voxel activity and the idealized time course. B can also be negative, meaning that it is inversely related (e.g., whenever that particular condition occurs, activation actually decreases).
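To make the last point concrete, here is a toy regression (a boxcar regressor stands in for the HRF-convolved time course): a voxel whose signal drops at task onsets comes out with a negative beta.
import numpy as np

rng = np.random.RandomState(0)
n = 100
task = np.zeros(n)
task[10:20] = task[50:60] = 1                    # idealized task time course
X = np.column_stack([task, np.ones(n)])          # regressor + intercept

voxel = 5.0 - 1.5 * task + 0.3 * rng.randn(n)    # signal decreases during task
beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)
print(beta[0])                                   # close to -1.5: a "deactivation"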
  • asked a question related to Functional Neuroimaging
Question
1 answer
I wonder if anyone has ever experienced running graph theoretical (GT) analysis for task fMRI. If so, what are the general recommendations, precautions, pitfalls?
For instance, given that long sessions are strongly recommended for this kind of analysis, do you suggest acquiring images during a particular task within one large block, or do you think it is still better to use a regular block design, then "cut" the data and reconstruct connectivity matrices for the different conditions separately? What would the recommended block length be then?
Relevant answer
Answer
I think the co-activation network across participants might be the simplest and most direct approach. This method seems to apply to any fMRI design.
  • asked a question related to Functional Neuroimaging
Question
6 answers
When we analyze the fMRI signal, how do we combine it with MRI? What are the big challenges? Is it noise or big data, and what is the interdisciplinary link between neuroscience and machine learning?
Relevant answer
Answer
Yes, fMRI analyse time series data. The conventional approach is to apply general linear modelling to all voxel time series and check which voxel(s) have a significant correlation with the contrast-of-interest. This approach is univariate in nature, i.e. the analysis is performed on voxel-by-voxel basis.
However, there is a new trend to consider signals from multiple voxels as a pattern and feed them into machine learning algorithms. Using machine learning algorithms on fMRI BOLD or EEG/EMG signals is a very new and active field in the imaging community. People had successfully made classification for clinical diagnosis, predict active brain states using activation paradigm (brain decoding), and even develop brain-machine interface with real-time fMRI.
Just a few seminal papers you would probably be interested in:
Soon, C. S., Brass, M., Heinze, H. J., & Haynes, J. D. (2008). Unconscious determinants of free decisions in the human brain. Nature Neuroscience 11, 543-5.
Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., & Gallant, J. L. (2011). Reconstructing visual experiences from brain activity evoked by natural movies. Current Biology.
  • asked a question related to Functional Neuroimaging
Question
4 answers
Perhaps by using a box or sphere, centred around particular coordinates, or using anatomical features as boundaries?
Relevant answer
Answer
I have an MNI one. Feel free to email me and I can send it.
  • asked a question related to Functional Neuroimaging
Question
7 answers
We are currently testing out the best way to do fMRI studies in patients with epilepsy. It would be great if you could recommend some established tests for an auditory comprehension paradigm.
Relevant answer
Answer
I am assuming you're looking to lateralize language for presurgical planning? One simple test that has been fairly successful involves flashing a description of an object for the patient to read; the patient's response is to silently name the object to themselves while in the MRI. The control portion of the test could be nearly anything, but usually staring at a cross on the screen of their goggles works well. A good description of the task can be found in "Language dominance in partial epilepsy patients identified with an fMRI reading task", Neurology (2002) 59, 256-265. Hope this helps!
  • asked a question related to Functional Neuroimaging
Question
14 answers
What kinds of problem is it useful for? How difficult is it to measure functionally important metabolites? Is there any free software out there? What is the best way to get started? I am a beginner. I have done a single voxel pilot run, which gave a nice spectrum using Siemens' own software, but what is the learning curve from here?
Relevant answer
Answer
Dear Michael,
This great paper and its references should give you a good start into magnetic resonance spectroscopy:
Rae, Caroline D. "A Guide to the Metabolic Pathways and Function of Metabolites Observed in Human Brain (1)H Magnetic Resonance Spectra." Neurochemical Research 39, no. 1 (January 2014): 1-36. doi:10.1007/s11064-013-1199-5.
Free Software exists, e.g. http://tarquin.sourceforge.net/
Many groups use LCModel since it gives high reproducibility (unfortunately the price is quite high, too: http://s-provencher.com/pages/lcmodel.shtml)
All the best
Steffen
  • asked a question related to Functional Neuroimaging
Question
33 answers
I am using the newest version of SPM which gives coordinates in MNI space. However, I cannot seem to find an online tool that will allow me to enter these coordinates to view the brain location.
I was using the following website (http://www.talairach.org/applet/) - but I am unsure if I need to convert or correct anything. Also, I am wondering why many of these online tools do not label any sulci?
Relevant answer
Answer
A quick tool for looking up anatomical brain locations using MNI coordinates is MRIcron (http://www.nitrc.org/projects/mricron), a free and highly useful tool. Within the program, you can open one of the pre-installed templates such as the AAL (automated anatomical labeling) atlas and use the pull-down "view" menu to enter your MNI coordinates. It will show you the location and the anatomical name based on the atlas. It comes with a Brodmann atlas as well.
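If you ever want to do this from a script rather than the MRIcron GUI, nilearn (assuming it is installed, e.g. via pip install nilearn) can at least display where an MNI coordinate falls on the standard template; the coordinate below is just an example, not one from the question.

```python
from nilearn import plotting

x, y, z = -42, 22, 28   # example MNI coordinate
display = plotting.plot_anat(cut_coords=(x, y, z), display_mode='ortho',
                             title=f"MNI ({x}, {y}, {z})")
display.add_markers([(x, y, z)], marker_color='r', marker_size=80)
plotting.show()
```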
  • asked a question related to Functional Neuroimaging
Question
4 answers
I will be using LONI Pipeline and Brainsuite to process these.
Relevant answer
Answer
Please contact Prof. Michael Deppe in Münster, Germany via
  • asked a question related to Functional Neuroimaging
Question
4 answers
The differential diagnosis of dementia includes Alzheimer's disease, other degenerative disorders, and some reversible etiologies. Positron emission tomography is only suggested as a test in the differential diagnosis between AD and frontotemporal dementia. Diagnostic modalities also require incorporation of a full clinical assessment and demand a level of expertise in interpretation. Several misinterpretations have had significant clinical implications, in addition to the serious impact of patients being told inaccurately that they had AD. What is the real role of the PET scan in Alzheimer's diagnosis?
Relevant answer
Answer
A PET scan is not the final answer in Alzheimer's disease. The information that we get from the history, such as how the disease started, cannot be provided by any investigational modality. Moreover, even after a PET scan we may not be able to differentiate between frontal-variant Alzheimer's disease and behavioural-variant FTD. Even among cases of FTD that satisfied the diagnostic criteria with supportive imaging and PET findings, a significant number finally turn out to be Alzheimer's disease on autopsy. So everything is not black and white here.
  • asked a question related to Functional Neuroimaging
Question
12 answers
Dear Colleagues,
I would like to discuss the following questions pertaining to the so-called fMRI data scrubbing (http://www.humanconnectome.org/hosted/docs/Power-et-al-NeuroImage.pdf):
1. How and when do you think it is better to do scrubbing (e.g., Is it better to remove spikes before/after bandpass filtering, before independent component analysis or prior to functional connectivity analysis [for example, with mancovan]?)
2. How many volumes associated with "bad event" do you usually remove? (e.g., only 1 volume, or also neighbouring volumes)
3. How do you threshold your scrubbing? (e.g. FD-threshold=0.5mm)
4. Do you interpolate between your time-points afterwards? If so - how? (Nearest Neighbour, Linear, Cubic Spline)
5. When do you exclude a subject from the analysis completely? (e.g., how many volumes one should miss in %s?)
Any comments and suggestions are very welcome.
Relevant answer
Answer
1. I do "Motion scrubbing" or "Motion censoring" in a non-destructive manner. By that, I mean that I never delete the high motion volumes but I ignore them in my regression analyses (nuisance regression, connectivity calculations, or activation calculation following your tasks...).
2. Depending on your TR, I would either ignore the one TR or the one TR plus one before.
3. For Task-fMRI, I have seen thresholds ranging from 1mm to 0.5mm while for resting state my choice is usually 0.25 or 0.2mm
4. I believe that the function I use for my fMRI analysis only ignores those time points without removing them, so no interpolation seems necessary. I believe this is true because we use scrubbing based on volumes and not voxels.
5. I think it is getting more and more common to exclude a subject if more than 50% of the volumes are censored and/or less than 5 minutes of data is remaining (based on resting-state fMRI). Now, if you only have a 5-minute run, I would probably look at the distribution of how many volumes are censored for each subject and pick a threshold that keeps most of the subjects (I don't think I would go lower than 75% of the run remaining).
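For illustration, here is a small sketch of how such non-destructive censoring could be set up from a realignment-parameter file: compute framewise displacement (in the spirit of Power et al.), flag spikes plus the preceding volume, and apply the "50% of volumes / 5 minutes remaining" rule. The 50 mm head radius, the 0.25 mm threshold, and the toy motion traces are assumptions taken from the discussion above, not a fixed standard.

```python
import numpy as np

def framewise_displacement(motion_params, head_radius_mm=50.0):
    """motion_params: (n_volumes, 6) array of 3 translations (mm) + 3 rotations (rad)."""
    deltas = np.diff(motion_params, axis=0)
    deltas[:, 3:] *= head_radius_mm            # rotations (rad) -> arc length in mm
    fd = np.abs(deltas).sum(axis=1)
    return np.concatenate([[0.0], fd])         # FD of the first volume is defined as 0

# Toy realignment parameters (random walk); replace with your rp_*.txt / *.par file
rng = np.random.default_rng(1)
rp = np.cumsum(
    rng.normal(0, [0.05, 0.05, 0.05, 0.0005, 0.0005, 0.0005], size=(300, 6)), axis=0
)

fd = framewise_displacement(rp)
censor = fd > 0.25                             # resting-state threshold from the answer
censor[:-1] = censor[:-1] | censor[1:]         # also flag the volume before each spike

tr = 2.0
kept_fraction = 1 - censor.mean()
kept_minutes = (~censor).sum() * tr / 60
print(f"kept {kept_fraction:.0%} of volumes ({kept_minutes:.1f} min)")
print("exclude subject" if kept_fraction < 0.5 or kept_minutes < 5.0 else "keep subject")
```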
  • asked a question related to Functional Neuroimaging
Question
2 answers
Limiting motion through training, reducing the impact of motion on fMRI signal or identifying and removing motion artifact seem to be valuable methods for avoiding biases due to verbal responses in neuroimaging studies. I would need at least one reference that would review the advantages and limitations of these methods.
A reference that, on the contrary, recommends avoiding verbal responses in the scanner altogether is also very welcome.
Relevant answer
Answer
Below is one of the better papers comparing covert and overt naming in the scanner. Relatively artifact-free fMRI data can be obtained with overt naming, especially when using an event-related design:
  • asked a question related to Functional Neuroimaging
Question
1 answer
I would like to study some variables using an fMRI machine. I have tried to seek assistance from research diagnostics centers and hospitals but to no avail. Could you please guide me with respect to the logistics of fMRI research?
Relevant answer
Answer
Procedures vary from site to site. You can learn about the procedures at our 7T MRI site here: http://www.imago7.eu/ENGricerca.html
In short, researchers that have interest in using the resources of Imago7 are invited to present their project to the Scientific Committee for preliminary feasibility assessment: segreteriaRM7T@imago7.eu
Then, fees will apply depending on the type and number of experiments.
Good luck!
  • asked a question related to Functional Neuroimaging
Question
2 answers
My aim is to compare fMRI data from a control and a patient to detect abnormalities in the white matter.
Relevant answer
Answer
Please provide the operating system and the error messages.
  • asked a question related to Functional Neuroimaging
Question
11 answers
I've been hearing a lot about how Bayesian approaches are now being applied to fMRI datasets, and recently found out that SPM8 offers this capability natively. Can anyone first explain why this approach would be a useful alternative or addition to more traditional analyses? My understanding is that the Bayesian approach allows you to make a spatial prediction, i.e. that you should see activation in this set of regions for condition A relative to condition B. Anyone who's used the technique, I'd really appreciate your perspective.
Relevant answer
Answer
and more..
Bayesian model selection for group studies
Klaas Enno Stephan, Will D. Penny, Jean Daunizeau, Rosalyn J. Moran, Karl J. Friston. NeuroImage 46 (2009) 1004–1017
[... Bayesian model selection (BMS) is a powerful method for determining the most likely among a set of competing hypotheses about the mechanisms that generated observed data.]
enjoy :D
Alfredo
  • asked a question related to Functional Neuroimaging
Question
4 answers
I am planning a study on the integration of auditory and visual stimuli using fMRI, and I am looking for suggestions for a basic recording protocol for the auditory and visual stimulation. By this I mean the number of slices required, the TR, and the other parameters that need to be set for recording fMRI.
Relevant answer
Answer
Dear Anoop,
The recommended protocol depends on the capabilities of your scanner (field strength and type of gradients) and on what you want to look at and how you want to analyze your data. Discuss this with your local physicist, if possible. Here is a set of settings that I think one can consider pretty standard. In a crossmodal audiovisual design you are probably interested in whole brain coverage (visual cortex, auditory cortex, but also frontal and parietal areas).
I would go for an EPI sequence with 3x3x3mm resolution, which you can achieve with 3mm slice thickness, and a field of view of 192mm (if you use a 64x64 image acquisition matrix, which is most likely among the standard sequences on your scanner). Put 10-15% gap between the slices.
Next check with a participant inside the scanner how many slices you will need to cover the entire brain. 32 slices should be alright. Keep the number of slices fixed for all participants in your study.
Next, TE: this depends on the field strength of your scanner; at 3T, 30 ms is a good start, while at 1.5T I would go a bit higher.
TR: Depends on the type of design. In an event-related design you may want to strive for a shorter TR (2000ms is a good value to start), but if you use a block design, you can afford longer TRs such as 2000ms to 3000ms (longer TR gives you more signal).
Flip angle (FA): Depends on the T1 of the tissue and on TR (with shorter TRs requiring smaller flip angles for optimal performance). A reasonable setting for TR=2000 is FA = 77 deg; for TR = 3000 FA = 85 deg.
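As a side check, those flip angles are roughly what the Ernst-angle formula predicts if one assumes a grey-matter T1 of about 1400 ms at 3T (the T1 value is my assumption, not part of the recommendation above):

```python
import numpy as np

def ernst_angle_deg(tr_ms, t1_ms):
    # Ernst angle: arccos(exp(-TR/T1)), converted to degrees
    return np.degrees(np.arccos(np.exp(-tr_ms / t1_ms)))

for tr_ms in (2000, 3000):
    print(f"TR = {tr_ms} ms -> Ernst angle ~ {ernst_angle_deg(tr_ms, 1400):.0f} deg")
# prints roughly 76 and 83 degrees, in the same ballpark as the 77/85 deg suggested above
```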
This is just a start, there are many ways to optimize this.
Best,
Jens
  • asked a question related to Functional Neuroimaging
Question
3 answers
Covariates in SPM: Stimulus and Reaction Times (RT)
Relevant answer
Answer
It depends a bit on what you want to do and know. If you want to control for the effect of covariates on single-subject data, then it is better to specify these in a 1st-level design. If you want to know the effects of covariates in a group analysis, you would specify those in a 2nd-level design. However, if there are large differences in RT between subjects, you may wonder whether the mean RT says anything useful at all.
Cheers,
Wouter