Science topic
Functional Neuroimaging - Science topic
Explore the latest questions and answers in Functional Neuroimaging, and find Functional Neuroimaging experts.
Questions related to Functional Neuroimaging
Hi all,
I've found that interstimulus intervals (rest periods) can be about 4 to 9 s in event-related designs using fNIRS (compared with block designs, which need 15 to 20 s between stimuli). However, I've not been able to find the detailed reasoning behind this. Does anyone have related research or information I can look into?
Thank you in advance.
I have a large longitudinal dataset comprising both healthy and clinical groups. The sample size of the clinical subsample drops off much more steeply over time than that of the healthy subjects. There is missing data throughout the dataset and across different measure types, and the amount of missingness varies by measure and time point. What are the best ways to address missing data for longitudinal modelling of neuroimaging, functional neuroimaging, and behavioural data? Which imputation methods are most robust?
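As a minimal illustration of one baseline approach (multiple imputation, e.g. MICE-style chained equations, is generally more robust for longitudinal data; all values below are hypothetical), within-subject linear interpolation across visits can be sketched as:

```python
# Sketch: within-subject linear interpolation across timepoints as a simple
# baseline for longitudinal missing data. 'scores' is a hypothetical list of
# one subject's scores over 5 visits, with None marking missing values.
# Leading/trailing gaps are left missing, since extrapolation is riskier.

def interpolate_subject(scores):
    """Linearly interpolate interior gaps; leave leading/trailing gaps as None."""
    out = list(scores)
    known = [i for i, v in enumerate(out) if v is not None]
    for a, b in zip(known, known[1:]):      # each pair of observed visits
        for i in range(a + 1, b):           # fill the gap between them
            frac = (i - a) / (b - a)
            out[i] = out[a] + frac * (out[b] - out[a])
    return out

scores = [10.0, None, None, 16.0, None]     # hypothetical visit scores
print(interpolate_subject(scores))          # [10.0, 12.0, 14.0, 16.0, None]
```

Whether this is defensible depends on the missingness mechanism; for data missing not at random (common when clinical subjects drop out), multiple imputation or full-information maximum likelihood in a mixed model is usually preferred.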
I am trying to program fMRI (BOLD) data processing in SPM12 using MATLAB. I have four stimuli, and each participant should attend and respond according to each stimulus. When I set up the conditions for the first-level analysis, some runs in the data lack some of the stimuli; for example, the second stimulus does not occur in run 2 but does occur in the other runs.
Now, should I define an empty vector/zero value for the condition's onsets, or should I simply not define a second-stimulus condition for run 2?
Dear colleagues,
Hello, I am currently studying brain connectivity in a disease group.
Recently, I constructed a connectivity matrix from each participant's neuroimaging data (diffusion tensor imaging), then ran an edge-wise correlational analysis against a neuropsychological score using a tool similar to Network-Based Statistics (NBS; Zalesky et al., 2010).
As a result, I obtained an edge-level network consisting of 10 edges over 19 nodes.
The significant edges are not connected to each other; each was identified as an isolated single edge.
Conventionally, I have used graph-theoretical measures such as degree or betweenness centrality to define hub nodes (e.g., hub region = betweenness centrality exceeding the other nodes of the network by 1 SD).
However, in this case I have an edge-level network that can hardly be called clustered or connected, as it consists of multiple isolated edges.
From here, I want to emphasize the more significant edges or nodes within the identified network as hub-like regions (it is hard to call them hubs, but at least for ease of comprehension), but I am struggling with what approach to take.
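As one possible sketch (not the only option): in a sparse, disconnected edge-level network, clustering-based measures collapse, but node degree (how many significant edges touch each node) still discriminates, and the familiar "+1 SD" convention can be applied to it. The edge list below is hypothetical.

```python
from collections import Counter
import statistics

# Sketch: rank nodes of a sparse, disconnected edge-level network by degree
# (number of significant edges touching each node). The edges are hypothetical.
edges = [(1, 2), (3, 4), (5, 6), (1, 7), (1, 8)]

degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Mirror the "mean + 1 SD" hub convention, applied to degree
vals = list(degree.values())
cut = statistics.mean(vals) + statistics.stdev(vals)
hubs = sorted(n for n, d in degree.items() if d > cut)
print(hubs)   # node 1, which carries three of the five edges
```

Edge weights (e.g., the correlation strength per edge) could replace the raw counts to give a strength-based ranking instead.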
All discussion and suggestions are welcomed here.
Or if I am misunderstanding any, please give me feedback or comments.
Thank you in advance!
Jean
My friend is looking for coauthors in the Psychology & Cognitive Neuroscience field. Basically, you will be responsible for paraphrasing, creating figures, and collecting references for a variety of publications. Please leave your email address if you are interested. Ten hours a week are required, as there are a lot of projects to be done!
I wish to analyse data for resting-state functional connectivity and for DTI.
I need a database of brain CT images (both normal and abnormal cases) for calculating the midline shift in CT images.
I have been working with FSL (melodic & dual_regression) to parcellate intrinsic connectivity networks at both the group and subject level. I successfully ran dual_regression and have the output z-maps for each subject and component. I want to create a summary statistic for each output z-map so that I can compare differences across participants within a dimensional framework. I am not interested in analyzing subgroups, even though the FSL documentation is all about group differences.
I am wondering how I could calculate a summary statistic for each z-map that represents the network strength/integrity for a given subject. I basically want to know how over- or under-expressed a given network is for each subject. I want to use this approach to test a hypothesis about individual differences in symptoms of depression. I have read about this in a few books and papers, but no one has explained how to quantify these subject-level networks from NIfTI files into numeric summary scores that could be entered into a multiple regression.
Attached is my group ICA output, manually capped at 30 components (melodic_IC.nii.gz), and a couple of dual_regression output files (original and z-maps) from two subjects, to give you an idea of what I am working with. The 3rd component is of most interest, as it most resembles the default mode network. Please help, someone!
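One commonly used summary (a sketch, not the only choice) is the mean z-value inside a thresholded group-component mask, giving one scalar per subject per network. The arrays below are tiny synthetic stand-ins for volumes that would in practice be loaded with nibabel (e.g. `nib.load(fname).get_fdata()`).

```python
import numpy as np

# Sketch: one "network expression" score per subject = mean z inside the
# group component mask. Synthetic stand-ins for the real NIfTI volumes.
group_component = np.zeros((2, 2, 2))
group_component[0, 0, 0] = 3.0          # two "network" voxels in the group map
group_component[1, 1, 1] = 3.0
subject_zmap = np.arange(8, dtype=float).reshape(2, 2, 2)

mask = group_component > 2.0            # threshold the group ICA map
score = subject_zmap[mask].mean()       # scalar summary, usable in a regression
print(score)                            # 3.5
```

Looping this over subjects and components yields a subjects-by-networks table that can go straight into a multiple regression; median or weighted (by group-map z) means are common variants.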
I'm looking for publications on the alleviation of fear (or any negative emotion), not from a therapeutic but rather a functional, cognitive approach. Ideally some fMRI recordings of subjects experiencing a feeling of relief.
If the reverse has been done on the feeling of disappointment, that would be great too.
I'll take anything from cognitive neuroscience, neuropsychology, the anatomo-clinical method, and cognitive psychology.
My eternal gratitude to whoever can help.
Hello,
I am writing my thesis on the effects of mindfulness meditation on reading in dyslexic children and I am looking for research on how reading works in the brain (what areas of the brain are activated, what mechanisms are involved...) that I would like to relate to the areas of the brain that are activated during meditation.
I know that now, thanks to neuroimaging, we have more accurate data, but I can't find any solid documents on it.
Thank you for reading my message and I hope to find answers here.
Best of luck with your work and research.
Sandrine BRASSE
Most researchers use fMRI for analyzing dynamic functional connectivity networks. I want to know whether EEG can be used as well and, if so, what the advantages and disadvantages of EEG are compared with fMRI, apart from EEG's temporal resolution.
I am performing Granger causality analysis on an fMRI dataset. I defined a set of network regions, which show up beautifully in the RFX-GLM map, and a control white-matter region of the same size as the other regions of interest.
I developed a pipeline of analysis based on MVGC (https://users.sussex.ac.uk/~lionelb/MVGC/html/mvgchelp.html), which considers a number of subjects and runs for the calculation, and exports a matrix of significant connections between the regions defined.
What I struggle to understand is the following: a statistically very strong connection shows up between one of my regions of interest and the control region. Even after correcting the results with a number of surrogate methods, the connection persists.
I would like an opinion on this matter! Is the analysis invalidated by this result? Or is there something that could explain a strong connection with a region which is no more than noise and uncorrelated with all other regions?
Thank you in advance!
I'm currently designing a pipeline for pre-processing MEG data in FieldTrip but keep having issues with events. Having compared the data loaded in FieldTrip with the same data loaded in Brainstorm, I've found that FieldTrip fails to load the complete number of samples (and events).
Has anybody else had this problem, and if so how did you fix it?
Thanks
I am running a script based on different batches in SPM on a dataset with 26 participants (2 sessions and 3 runs each). This dataset has been analysed before in FSL (2 publications with statistical analyses) and I am reanalysing it using SPM12. The code runs smoothly for some of the participants: it does the spatial preprocessing, defines a GLM, and then moves on to DCM to extract DCM.Ep.A (I have modified spm_run_fmri_spec in config so that it stops asking for confirmation before overwriting the SPM file).
Could this be because I have two different jobs for regressing out the WM and CSF signals? The same applies when I want to extract the four VOIs.
I have done this according to the sample script in the practical example for rs-DCM.
Unfortunately, it works well with the exception of a few participants/runs.
The error message is:
------------------------------------------------------------------------
Running job #1
------------------------------------------------------------------------
Running 'Volume of Interest'
Warning: Empty region.
> In spm_regions (line 155)
In spm_run_voi (line 63)
In cfg_run_cm (line 29)
In cfg_util>local_runcj (line 1688)
In cfg_util (line 959)
In spm_jobman>fill_run_job (line 458)
In spm_jobman (line 247)
In spDCM_fun_test (line 207)
Failed 'Volume of Interest'
Reference to non-existent field 'v'.
In file ".../spm12/config/spm_run_voi.m" (v6301), function "spm_run_voi" at line 76.
The following modules did not run:
Failed: Volume of Interest
Error using MATLABbatch system
Job execution failed. The full log of this run can be found in MATLAB command
window, starting with the lines (look for the line showing the exact #job as
displayed in this error message)
------------------
Running job #1
------------------
Is there anything that I should do to fix this?
On a different note, I had to modify the code in DCM specification:
Sess = SPM.Sess(xY(1).Sess);
as this was causing errors because xY(1).Sess is 2 at some points, not 1. Therefore, I changed it to:
Sess = SPM.Sess(1);
Is it going to be problematic? And, if yes, is there any other way of fixing the issue?
I have three "for" loops over the dataset: one for subjects, one for sessions, and one for runs. I managed to get the DCM.mat for 16 of the 26 subjects (the script then stops with the error described above). I then simply take the average of DCM.Ep.A for each subject in each session (which I am not sure is the right thing to do; the alternative would be running an RFX BMS to find the best model and using the DCM.Ep.A of that model only).
Unfortunately, the RFX BMS does not seem to work outside the GUI (when I change the values) either. The script generated by the batch looks like this (I have changed it slightly):
clear matlabbatch;
mkdir([session_folder_name '/func/BMS']);
matlabbatch{1}.spm.dcm.bms.inference.dir = ...
    cellstr([session_folder_name '/func/BMS']);
matlabbatch{1}.spm.dcm.bms.inference.sess_dcm{1}.dcmmat{1,1} = ...
    cellstr([session_folder_name '/func/Run01/GLM/DCM_DMN.mat']);
matlabbatch{1}.spm.dcm.bms.inference.sess_dcm{1}.dcmmat{2,1} = ...
    cellstr([session_folder_name '/func/Run02/GLM/DCM_DMN.mat']);
matlabbatch{1}.spm.dcm.bms.inference.sess_dcm{1}.dcmmat{3,1} = ...
    cellstr([session_folder_name '/func/Run03/GLM/DCM_DMN.mat']);
matlabbatch{1}.spm.dcm.bms.inference.model_sp = {''};
matlabbatch{1}.spm.dcm.bms.inference.load_f = {''};
matlabbatch{1}.spm.dcm.bms.inference.method = 'RFX';
matlabbatch{1}.spm.dcm.bms.inference.family_level.family_file = {''};
matlabbatch{1}.spm.dcm.bms.inference.bma.bma_no = 0;
matlabbatch{1}.spm.dcm.bms.inference.verify_id = 0;
spm_jobman('run',matlabbatch);
However, this gives me an error as well (the model seems to be empty and I cannot fix it, even though all the variables and the structure in the script above seem fine).
On a different note, I was wondering whether there is a way of finding out which model is best, other than looking at the graphs generated by the GUI (I realise some other SPM users on the list have this question as well). If not, it seems I have to run the script for each session and each subject and inspect the graphs, which does not seem reasonable. If there is a quantitative way of finding the best model for each session in BMS, I can easily select the DCM.Ep.A for that model and use it in my further analysis. Also, what if the best model differs between subjects, e.g., model 3 for subject 1 but model 2 for subject 8? Can they then be used in the analysis?
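As a sketch of one quantitative route (for a fixed-effects comparison over runs; the RFX version is what spm_BMS implements): each model's log-evidence (DCM.F in SPM) can be summed across runs and passed through a softmax to give posterior model probabilities. The evidence values below are made up for illustration.

```python
import math

# Hypothetical summed log-evidences (DCM.F) for 3 candidate models
F = [-1200.0, -1195.0, -1210.0]

Fmax = max(F)
w = [math.exp(f - Fmax) for f in F]      # subtract max for numerical stability
post = [x / sum(w) for x in w]           # posterior model probabilities (FFX)
best = post.index(max(post))
print(best, [round(p, 3) for p in post]) # model index 1 dominates here
```

Selecting the model with the highest posterior probability per session, then taking its DCM.Ep.A, avoids eyeballing the GUI bar plots; note that a log-evidence difference of about 3 is conventionally treated as strong evidence.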
Many thanks and apologies for the very long email with millions of different questions!
Thanking you in advance,
Amir
Dear all,
I've been using ROI analysis for fMRI data for a while, searching only within regions of interest supported by previous data or the literature.
But SPM also has an SVC (small volume correction) button with similar logic behind it. So what are the differences between the two approaches? Thanks a lot.
Andy
I want to do neuronal imaging of the PFC of mice and I am in need of head plates I can attach to the head of the animals to keep it stable during my awake experiment. Does anyone know a company who commercially sells those ?
Thanks
I am using the PsychoPy Builder and created a Code component to display a progress bar, but I can't seem to simplify the code so that, when the progress bar expands, I don't have to keep writing the same code over and over.
In Begin Experiment I have:
ProgVertices = [[-.5,0],[-.5,.5],[-.2,.5],[-.2,0]]
counter = 0
Test1 = visual.ShapeStim(win, lineColor='green', vertices=ProgVertices, pos=(0,0), autoDraw=True)
In Each Frame tab I have:
if counter < 10:
    core.wait(1)
    counter = counter + 1
if counter >= 6:
    ProgVertices[2] = [0.5,0.5]
    ProgVertices[3] = [0.5,0]
    Test1 = visual.ShapeStim(win, lineColor='green', vertices=ProgVertices, pos=(0,0), autoDraw=True)
This works, but it feels like redundant code if I wanted to update the bar a little every second (as opposed to only at 6 seconds). Any help simplifying the code would be appreciated!
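One possible simplification (a sketch following the names in the question; `total_time` is hypothetical): compute the bar's right edge from the elapsed fraction each frame and update the existing stimulus, e.g. `Test1.vertices = progress_vertices(t / total_time)` in the Each Frame tab, rather than re-creating the ShapeStim. The pure geometry is:

```python
# Sketch: derive the progress-bar vertices from a 0..1 completion fraction.
# In PsychoPy Builder you would create Test1 once in Begin Experiment and,
# in Each Frame, set Test1.vertices = progress_vertices(t / total_time),
# where t is the routine clock and total_time is the routine's duration.

LEFT, RIGHT, BOTTOM, TOP = -0.5, 0.5, 0.0, 0.5

def progress_vertices(frac):
    """Rectangle whose right edge grows from LEFT to RIGHT as frac goes 0 -> 1."""
    frac = max(0.0, min(1.0, frac))          # clamp so the bar never overshoots
    x = LEFT + frac * (RIGHT - LEFT)
    return [[LEFT, BOTTOM], [LEFT, TOP], [x, TOP], [x, BOTTOM]]

print(progress_vertices(0.5))   # [[-0.5, 0.0], [-0.5, 0.5], [0.0, 0.5], [0.0, 0.0]]
```

This also removes the need for `core.wait(1)` in Each Frame, which would otherwise block screen refreshes.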
Thanks
I'm interested in pain perception during pregnancy and wondering whether there are any safe and acceptable methods of functional neuroimaging in this population. Thanks.
In computer science, to validate our work we use a set of benchmark functions and compare the results with previous work. In the neuroimaging field, how can I validate my work if there is no previous work?
I have some precious data in which the functional EPI shows distortion in the inferior frontal lobe. This mismatch between functional and structural data will degrade the rigid-body coregistration and ultimately lead to poor normalization.
So, is it reasonable to do nonlinear coregistration between the functional and structural data? And which MATLAB-based software can do this?
Any suggestion is appreciated!
Hello,
I already asked this question in the Brainmap forum which belongs to the GingerALE software. However, so far I haven't got any reply over there and I'm in need of an answer. I hope you can help me out with this.
I'm quite new to GingerALE and I hope to get some help on an issue I've come across. I have done an ALE analysis under supervision before and now I did the first one on my own.
This is how I proceeded:
First, I gathered all relevant coordinates and transformed the ones reported in Talairach space into MNI space using the non-linear transformation by Lacadie et al. (http://sprout022.sprout.yale.edu/mni2tal/mni2tal.html), as previously recommended by my supervisor. Then I ran the analysis in MNI space. I am currently checking the results and noticed that the labels and Brodmann areas in the output Excel sheet differ from the ones I get when I type the coordinates into the web application by Lacadie. For instance, with the MNI coordinates -50, -38, 24 from the ALE output file, I get left BA40 (inferior parietal lobule) on the Lacadie website, while the coordinates are labelled left BA13 (insula) in the Excel sheet (which is quite a difference).
Should I trust the labels in the ALE output file, or should I double-check them using another application for identifying Brodmann areas? If so, do you have any recommendations?
I'm designing an EEG/fNIRS experiment looking at neural responses to true vs. false sentences, but I want to avoid the confounding factor of incongruity. Is there a measure of incongruity? How do I distinguish between sentences that are:
1. false but not incongruous
2. false and incongruous
Hi all!
I am a budding researcher in the field of fNIRS and its applications in cognitive neuroscience.
We are in the process of designing a dual-channel CW fNIRS system for signal acquisition.
Can anyone please tell me how and where to incorporate the conversion of the raw light-intensity time series of NIRS data into concentration changes of oxy- and deoxy-Hb using the modified Beer-Lambert law?
Any help from your side would be greatly appreciated.
Thank you.
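For a two-wavelength CW system, the conversion is typically applied after computing the optical-density change per wavelength, then solving a 2x2 linear system. A minimal sketch of that inversion follows; the extinction coefficients and DPF values are placeholders, not reference values (use a published compilation and your actual source-detector separation).

```python
import math

# Sketch of the modified Beer-Lambert law inversion for two wavelengths:
#   delta_OD(l) = -log10(I(l)/I0(l))
#   delta_OD(l) = (eps_HbO(l)*dHbO + eps_HbR(l)*dHbR) * d * DPF(l)
# Placeholder coefficients [1/(mM*cm)] at a nominal (760, 850) nm pair:
eps = {"HbO": (0.6, 1.1), "HbR": (1.6, 0.8)}
dpf = (6.0, 5.0)     # placeholder differential pathlength factors
d = 3.0              # source-detector separation [cm], assumed

def mbll(I, I0):
    """Return (dHbO, dHbR) in mM from intensities at the two wavelengths."""
    dOD = [-math.log10(I[k] / I0[k]) for k in range(2)]
    # 2x2 system A @ [dHbO, dHbR] = dOD, solved by Cramer's rule
    a11, a12 = eps["HbO"][0] * d * dpf[0], eps["HbR"][0] * d * dpf[0]
    a21, a22 = eps["HbO"][1] * d * dpf[1], eps["HbR"][1] * d * dpf[1]
    det = a11 * a22 - a12 * a21
    dHbO = (dOD[0] * a22 - a12 * dOD[1]) / det
    dHbR = (a11 * dOD[1] - dOD[0] * a21) / det
    return dHbO, dHbR
```

In practice this step sits right after baseline normalisation of the raw intensity time series (so I0 is the baseline intensity), and before any filtering of the haemoglobin signals.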
I am wondering if anyone can point me to an experiment where the cognitive load can be considered a continuous function.
I have used the delayed item-recognition task, where cognitive load is the number of letters to remember. This task therefore has a relatively small number of possible load levels (1 through 6 or 8). Does anyone know of a task with more possible increments of cognitive load between very easy and too difficult to answer correctly? I am not restricting myself to verbal short-term memory.
This would be a task that is appropriate for a trial based fMRI experiment.
Thank you
How can I wire up a cable to allow the Magstim Bistim TMS to produce a trigger to start data recording? Which pin on the Isolated Interface port (26-pin D connector) on the back is the + output for the trigger?
Hi, I am studying fMRI processing.
I am trying to figure out how to calculate the "cross-correlation coefficient" in fMRI processing, but the definition of "cross-correlation coefficient" confuses me.
Google did not teach me about the "cross-correlation coefficient", so I searched for "cross-correlation". According to Google, cross-correlation is a FUNCTION of lag, showing the similarity between two signals. But that does not look like a cross-correlation "coefficient", because a coefficient is supposed to be a single representative value, right?
What confused me about the definition of "cross-correlation coefficient" is the paper by Bandettini (MRI, 1991), which many papers cite. This paper describes a "cross-correlation image" as essentially an image holding, pixel by pixel, Pearson's correlation coefficient multiplied by a ratio of vector magnitudes. The two correlation terms sound similar but seem to differ enough to get different names.
Please give me a hint about where I can clarify the cross-correlation coefficient in fMRI processing, or any helpful papers.
Thank you in advance for any help :)
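For orientation (a sketch, not the specific formula from that paper): the per-voxel "cross-correlation coefficient" in early fMRI work is the normalized cross-correlation between the voxel time series and a reference waveform, which at zero lag reduces exactly to Pearson's r.

```python
import math

# Sketch: normalized cross-correlation of a voxel time series x with a
# reference waveform y at a given lag, over the overlapping samples.
# At lag 0 this is exactly Pearson's correlation coefficient.

def norm_xcorr(x, y, lag=0):
    """Correlate x[t] with y[t+lag]; returns a value in [-1, 1]."""
    if lag > 0:
        x, y = x[:len(x) - lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:len(y) + lag]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

print(norm_xcorr([1, 2, 3, 4], [2, 4, 6, 8]))   # 1.0 (perfectly correlated)
```

So the cross-correlation *function* sweeps the lag, while the *coefficient* is that function evaluated at one lag (usually zero, or the lag that maximizes it), which is why it is a single representative value per voxel.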
Is there any specific reason why the auto-gain adjustment value is set to 4000?
I've recently been working with ERPLAB and I'm looking for the best way to deal with ocular artifacts (I'm definitely not an expert...).
I have two EOG channels: one at the canthus, the other below the same eye. I saw that there are several solutions for artifact detection: step-like, moving window, voltage threshold, etc. My first idea was to use step-like and/or moving-window detection, but I don't know if that's the best way.
I also wonder whether I should perform any channel operations with my EOG electrodes.
Any ideas/advice?
Hi all,
Say you are modelling a task using a first-level model in SPM. The task consists of three conditions: congruent, incongruent, and control. When setting up the contrasts for this model, how would one properly weight the conditions for the contrast [congruent + incongruent] > control? Would this be 0.5 0.5 -1? On a related note, would the contrast [congruent + incongruent] > baseline be 0.5 0.5 0, or 1 1 0? I am confused, as I've read that positive (and negative) contrast weights should sum to 1, but I've seen people use contrasts such as 1 1 0 0 for certain contrasts, which obviously do not sum to 1. Any input appreciated!
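One fact worth keeping in mind here (a numeric sketch with made-up betas and a diagonal covariance, not SPM output): rescaling a contrast vector leaves the t-statistic unchanged, because t = c'beta / sqrt(c' Cov c) and both numerator and denominator scale by the same factor.

```python
import math

# Quick check that scaling a contrast does not change the t-statistic.
beta = [2.0, 1.0, 0.5]          # hypothetical betas: congruent, incongruent, control
var = [0.4, 0.4, 0.4]           # hypothetical diagonal covariance of the betas

def t_stat(c):
    num = sum(ci * bi for ci, bi in zip(c, beta))
    den = math.sqrt(sum(ci * ci * vi for ci, vi in zip(c, var)))
    return num / den

t1 = t_stat([0.5, 0.5, -1.0])
t2 = t_stat([1.0, 1.0, -2.0])   # same contrast pattern, scaled by 2
print(round(t1, 6), round(t2, 6))
```

So for inference, [0.5 0.5 -1] and [1 1 -2] are equivalent; the scaling only matters when you carry the contrast estimates (con images) into further analyses where their absolute magnitude is interpreted.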
Hello everyone,
I want to know what the differences are between AFNI and SPM. Could you please tell me the pros and cons of each, and describe their applications?
I want to say thank you in advance for your kind support.
Everyone is working on neuron image analysis in 3D (stacks of 2D images), and the reasons are legitimate (reconstruction in 3D is general; it is the standard). What I wanted to ask is how much scientific labs actually work with 3D data in practice. It seems that most neuron morphology analysis in practice is actually carried out on in-vitro samples that are 2D. Do you have any reference with statistics on how many studies use 2D versus 3D neuron reconstructions to draw conclusions in neurobiology? Would anyone like to add anything from their own opinion/experience?
Brainstorming here...
What do people think would be the ideal experiment to measure functional plasticity with fMRI?
The idea is to have a brain derived metric of an intervention that has physiological meaning.
What would be the metric derived from the fMRI signal? I am looking to go beyond voxel counts of suprathreshold activity; such metrics are based on statistical thresholds and are therefore sensitive to threshold choice and unmodeled noise. What would be an ideal experimental stimulus, and then an ideal analysis?
I am interested in correlating connectivity results obtained with seed correlation analysis to behavioral measures taken outside the scanner. I'm looking for some "best practices" on accomplishing this.
My initial thought is to take single-subject-level r coefficients, put them in SPSS, and simply run a bivariate correlation against each of the behavioral measures I have. However, I'm looking at more than 20 regions, making this option rather tedious to set up. I could also correlate only the connectivities that differ significantly between groups after a t-test, but I would still end up with quite a few correlations to get into SPSS.
Does anyone have any suggestions for making this easier, perhaps with a MATLAB-based script they know of? Is there a more appropriate method of correlating connectivity results to behavioral data outside the scanner?
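As a sketch of the scripted alternative (all numbers below are made up; in practice each row of `conn` would hold one subject's seed-connectivity values for the ~20 ROIs, and the resulting r values would still need multiple-comparison correction):

```python
import math

# Sketch: correlate per-subject seed connectivity (Fisher z-transformed r)
# with a behavioural score across many ROIs in one loop.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def fisher_z(r):
    return 0.5 * math.log((1 + r) / (1 - r))

conn = [                         # subjects x ROIs, hypothetical r values
    [0.10, 0.50], [0.20, 0.40], [0.30, 0.10], [0.40, 0.20],
]
behaviour = [1.0, 2.0, 3.0, 4.0]  # hypothetical behavioural scores

zconn = [[fisher_z(r) for r in row] for row in conn]
results = [pearson([row[j] for row in zconn], behaviour)
           for j in range(len(conn[0]))]
print([round(r, 3) for r in results])   # one correlation per ROI
```

The Fisher z-transform is the usual step before correlating r values across subjects, since raw correlation coefficients are bounded and non-normal.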
Have there been any functional imaging or similar studies that used a paradigm to produce internal conflict and identify brain regions involved?
I want to study aneurysm growth (in volume) in a set of subjects, based on time-of-flight MR images at two time points.
For each subject, I want to do a rigid registration between time of flight images as well as a segmentation of the cerebral vascular system. And then, I want to compare the two registered segmentation volumes in order to quantify the aneurysm growth.
To do this, some teams have used in-house registration tools (not distributed as far as I know, e.g., AnToNIA), with the registration performed on a volume of interest around the aneurysm sac (rather than the whole brain).
I currently know free registration tools not specific to aneurysms (FSL's FLIRT, for example: http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FLIRT). An additional question: do you think this kind of method is appropriate for "aneurysm registration"?
Thanks in advance,
Dear All,
I have a question about how to use AFNI for real-time fMRI, given the data format.
I'm receiving data from the scanner in PAR/REC format and I don't know which format I should use for real-time AFNI.
In the AFNI demo, the data were in BRIK/HEAD format.
So I was wondering: is that the only accepted format, or what other formats can be used for real-time AFNI?
If it is the only one, what can I do to convert each volume from PAR/REC to BRIK/HEAD?
Thanks
Hi all! I have 100 patients with a pASL sequence and would like to get some information about the CBF. Is this actually possible with just one TI? Can I somehow calculate an arterial transit time? Because if I just measure at one time point, I do not know whether the bolus has already passed or not yet arrived, right? Is this only useful for grey matter, or also for white matter?
Thanks for any help!
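For context, a sketch of how single-TI PASL is usually quantified with a simplified one-compartment model in the spirit of the ASL consensus recommendations (Alsop et al., 2015). The key assumption is exactly the one raised above: the labelled bolus must have fully arrived by the readout (QUIPSS II-style timing), because arterial transit time cannot be estimated from one time point. All parameter values here are typical assumptions, not measurements.

```python
import math

# Simplified single-compartment PASL quantification (sketch):
#   CBF [ml/100g/min] = 6000 * lambda * dM * exp(TI/T1b) / (2 * alpha * TI1 * M0)
# dM: perfusion-weighted (control - label) signal; M0: calibration signal.
# Defaults below are typical assumed values, not measured ones.

def pasl_cbf(dM, M0, TI=1.8, TI1=0.8, T1b=1.65, alpha=0.98, lam=0.9):
    """CBF in ml/100g/min under the fully-arrived-bolus assumption."""
    return (6000.0 * lam * dM * math.exp(TI / T1b)) / (2.0 * alpha * TI1 * M0)

print(round(pasl_cbf(dM=0.006, M0=1.0), 1))   # a plausible grey-matter magnitude
```

Because white matter has longer transit times and lower perfusion, the fully-arrived-bolus assumption is much shakier there, which is part of why single-TI ASL is generally considered reliable mainly for grey matter.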
I have confocal images of immunostained adult Drosophila brains, and I want to find out more about the neurons that are labelled in my images. I am looking for an online tool where I could ideally select a small brain area and get links to published/database info about the function of specific neurons in that brain area. Could anyone please suggest the best way to do this? Thank you!
We have performed group ICA in two groups of subjects (BOLD fMRI data). One group comprises controls; the other has very severe developmental brain abnormalities (a mixed group). In controls, ICA revealed 6 components (mostly bilateral), while in the diseased group we get 30-40 components at the group level, and the components are small and focal. Data preprocessing is the same, and data quality is also the same (although the brains themselves may show some anatomical heterogeneity as well).
I am very curious how this excessive number of ICs can be interpreted. We may assume that the diseased group has impaired, or even absent, brain function (e.g., due to neural migration disorders) in the respective areas.
Thank you.
András
In the analysis of fMRI data, many use small volume correction (SVC, as implemented in SPM) to restrict their search area to a given region of interest. To my understanding, one can look at both cluster-level and voxel-level statistics within SVC. However, the authors of several articles I've read do not specify if they used cluster or voxel-level statistics.
Do any of you know how people use SVC in this regard? Am I wrong in thinking that both voxel- and cluster-level statistics are feasible within SVC? Looking at the t/F values in several papers, I get the impression that most use cluster-level inference in SVC, but I don't understand why they would not state this explicitly, unless voxel-level statistics are in some way unsuitable for SVC...
Any input appreciated!
Hi, I want to start new research on developing machine learning methods for brain function analysis, especially for analyzing the human visual cortex. Could you please help me find some significant papers and fMRI datasets related to this subject?
I need a file that I can download and use for data analysis (basically for automatically determining which vascular territory a stroke has occurred in).
I am looking to dissociate the functional roles of the hippocampus and striatum in humans and am currently searching for potential tasks that have been used in previous studies. I am particularly interested in the dissociation of stimulus-response and action-outcome learning.
Thank you very much for your help.
In a memory experiment, which is preferable: images of faces or of objects? Some people say faces are problematic because they are salient stimuli that are not processed like other stimuli, which could introduce non-memory-related activations in neuroimaging data. We are not interested in exploring emotion or social cognition.
Hi all,
I am currently analysing data from an fMRI study, with a 2x2 between-within design. Patients vs controls reflect the between factor, and congruent and incongruent trials reflect the within factor (this is a Stroop task).
After seeing an interaction effect between the two factors in several brain clusters, I wanted to extract the beta-weights from these clusters to explore the underlying effects.
I used Marsbar to extract the raw beta-weights from congruent and incongruent trials, separately (so no contrast between conditions performed), and plotted the results for my two groups.
Many of the resulting beta weights are negative, and I wonder what this means. Does it reflect a sort of "deactivation"?
Any input will be highly appreciated!
I wonder if anyone has ever experienced running graph theoretical (GT) analysis for task fMRI. If so, what are the general recommendations, precautions, pitfalls?
For instance, given that long sessions are strongly recommended for this kind of analysis, would you suggest acquiring images during a particular task within one large block, or do you think it is better to use a regular block design, then "cut" the data and reconstruct connectivity matrices for the different conditions separately? What would be the recommended block length then?
When we analyze the fMRI signal, how do we combine it with structural MRI? What are the big challenges: noise, big data? And what is the interdisciplinary link between neuroscience and machine learning?
Perhaps by using a box or sphere, centred around particular coordinates, or using anatomical features as boundaries?
We are currently testing the best way to do fMRI studies in patients with epilepsy. It would be great if you could recommend some established tests for an auditory comprehension paradigm.
What kinds of problems is it useful for? How difficult is it to measure functionally important metabolites? Is there any free software out there? What is the best way to get started? I am a beginner. I have done a single-voxel pilot run, which gave a nice spectrum using Siemens' own software, but what is the learning curve from here?
I am using the newest version of SPM, which gives coordinates in MNI space. However, I cannot seem to find an online tool that allows me to enter these coordinates to view the brain location.
I was using the following website (http://www.talairach.org/applet/), but I am unsure whether I need to convert or correct anything. Also, I am wondering why many of these online tools do not label any sulci.
I will be using LONI Pipeline and Brainsuite to process these.
The differential diagnosis of dementia includes Alzheimer's disease, other degenerative disorders, and some reversible etiologies. Positron emission tomography is only suggested as a test in the differential diagnosis between AD and frontotemporal dementia. Diagnostic modalities also require incorporation of a full clinical assessment and demand a level of expertise in interpretation. Several misinterpretations have had significant clinical implications, in addition to the serious impact of patients being inaccurately told that they had AD. What, then, is the real role of the PET scan in Alzheimer's diagnosis?
Dear Colleagues,
I would like to discuss the following questions pertaining to the so-called fMRI data scrubbing (http://www.humanconnectome.org/hosted/docs/Power-et-al-NeuroImage.pdf):
1. How and when do you think it is better to do scrubbing (e.g., Is it better to remove spikes before/after bandpass filtering, before independent component analysis or prior to functional connectivity analysis [for example, with mancovan]?)
2. How many volumes associated with "bad event" do you usually remove? (e.g., only 1 volume, or also neighbouring volumes)
3. How do you threshold your scrubbing? (e.g. FD-threshold=0.5mm)
4. Do you interpolate between your time-points afterwards? If so - how? (Nearest Neighbour, Linear, Cubic Spline)
5. When do you exclude a subject from the analysis completely? (e.g., how many volumes one should miss in %s?)
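For reference, the framewise displacement (FD) underlying thresholds like the 0.5 mm in point 3 can be computed from the six realignment parameters as in Power et al. (2012); a sketch with hypothetical motion values:

```python
# Sketch: framewise displacement (Power et al., 2012):
#   FD_t = |dx|+|dy|+|dz| + r*(|dpitch|+|droll|+|dyaw|)
# with rotations in radians converted to mm on a sphere of radius r = 50 mm.
# 'motion' rows are hypothetical realignment parameters
# [x, y, z (mm), pitch, roll, yaw (rad)] per volume.

R = 50.0

def framewise_displacement(motion):
    fd = [0.0]                                  # first volume has no predecessor
    for prev, cur in zip(motion, motion[1:]):
        d = [abs(c - p) for p, c in zip(prev, cur)]
        fd.append(sum(d[:3]) + R * sum(d[3:]))
    return fd

motion = [
    [0.0, 0.0, 0.0, 0.0,   0.0, 0.0],
    [0.1, 0.0, 0.0, 0.002, 0.0, 0.0],   # small shift + rotation
    [0.1, 0.6, 0.0, 0.002, 0.0, 0.0],   # large translation spike
]
fd = framewise_displacement(motion)
flagged = [i for i, v in enumerate(fd) if v > 0.5]   # FD threshold = 0.5 mm
print([round(v, 3) for v in fd], flagged)
```

The `flagged` indices are the volumes one would scrub (points 1-2 above then decide whether to also drop neighbours and whether to interpolate across the gap).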
Any comments and suggestions are very welcome.
Limiting motion through training, reducing the impact of motion on the fMRI signal, and identifying and removing motion artifacts all seem to be valuable methods for avoiding biases due to verbal responses in neuroimaging studies. I need at least one reference reviewing the advantages and limitations of these methods.
A reference that, on the contrary, recommends avoiding verbal responses in the scanner altogether is also very welcome.
I would like to study some variables using an fMRI machine. I have tried to seek assistance from research diagnostics centers and hospitals but to no avail. Could you please guide me with respect to the logistics of fMRI research?
My aim is to compare fMRI data from a control and a patient to detect abnormalities in the white matter.
I've been hearing a lot about how Bayesian approaches are now being applied to fMRI datasets, and recently found out that SPM8 offers this capability natively. Can anyone explain, first, why this approach would be a useful alternative or addition to more traditional analyses? My understanding is that the Bayesian approach allows you to make a spatial prediction, i.e., that you should see activation in this set of regions for condition A relative to condition B. If you've used the technique, I'd really appreciate your perspective.
As I am planning a study on the integration of auditory and visual stimuli using fMRI, I am looking for suggestions for the basic recording protocol for auditory and visual stimulation. By this I mean: please suggest the number of slices required, the TR, and the other parameters that need to be set for recording fMRI.
Covariates in SPM: Stimulus and Reaction Times (RT)