- Joanna Wolstencroft asked a question: What research has contributed the most to our understanding of memory?
From animal research and human lesioning methods to neuroimaging...
- Erwin Lemche added an answer: Does anyone know an fMRI-compatible experimental task that has continuous levels of cognitive load?
I am wondering if anyone can point me to an experiment where the cognitive load can be considered a continuous function.
I have used the delayed item recognition task, where cognitive load is the number of letters to remember. This task therefore has a relatively small number of possible levels of load (1 through 6 or 8). Does anyone know of a task where there are more possible increments of cognitive load between very easy and too difficult to answer correctly? I am not restricting myself to verbal short-term memory.
This would be a task that is appropriate for a trial based fMRI experiment.
Maybe you would like to have a look at our paper that just came out:
Lemche, E., Sierra, M., David, A.S., Phillips, M.L., Gasston, D., Williams, S.C.R., Giampietro, V.P. (2015). Cognitive load and autonomic response patterns under negative priming demand in Depersonalization-Derealization Disorder. European Journal of Neuroscience. doi:10.1111/ejn.13183 (West Sussex, U.K.: John Wiley)
- Janani Arivudaiyanambi added an answer: How can I incorporate the conversion of raw light intensity values of fNIRS data to concentration changes of the chromophores using MBLL?
I am a budding researcher in the field of fNIRS and its applications specific to cognitive neuroscience.
We are in the process of designing a dual channel CW fNIRS system for signal acquisition.
Can anyone please tell me how and where to incorporate the conversion of the raw light intensity time series of NIRS data to concentration changes of oxy- and deoxy-Hb using the modified Beer-Lambert law?
Any help from your side would be greatly appreciated.
Thanks so much for your reply and also for the presentation.
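For a two-wavelength CW system, the MBLL conversion reduces to computing the change in optical density at each wavelength and then solving a 2x2 linear system in the extinction coefficients. Here is a minimal plain-Python sketch of that step; the function name is hypothetical, and the extinction coefficients, separation, and DPF values you pass in must come from your own calibration and literature tables, not from this example:

```python
import math

def mbll(I, I0, eps, d, dpf):
    """Convert raw intensities at two wavelengths to (dHbO, dHbR).

    I, I0 : measured and baseline intensities, one per wavelength
    eps   : extinction coefficients [[e_HbO(l1), e_HbR(l1)],
                                     [e_HbO(l2), e_HbR(l2)]]
    d     : source-detector separation (cm)
    dpf   : differential pathlength factor, one per wavelength
    """
    # Change in optical density per effective pathlength at each wavelength
    dOD = [-math.log10(I[k] / I0[k]) / (d * dpf[k]) for k in range(2)]
    # Solve the 2x2 system eps @ [dHbO, dHbR] = dOD by Cramer's rule
    a, b = eps[0]
    c, e = eps[1]
    det = a * e - b * c
    dHbO = (e * dOD[0] - b * dOD[1]) / det
    dHbR = (a * dOD[1] - c * dOD[0]) / det
    return dHbO, dHbR
```

In an acquisition pipeline this sits right after the raw intensity read-out: take a resting baseline for I0, then apply the function sample by sample (or vectorised) to the incoming time series.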
- William J. Croft added an answer: What portable neuroimaging equipment is available?
Has anyone got any reviews and practical recommendations about portable neuroimaging equipment that can be used to record during normal everyday activities?
Mitzi, hi. It sounds like your interest is primarily in consumer-level EEG devices, which are generally lower cost and have a wider user base. Check out the OpenBCI project, which is being used by many in the Maker community.
As an example of a high-end fNIRS system, I also include a link to a JoVE video/paper that I just ran across today.
- Vy Quynh Tranvu added an answer: I want to create a u-map for attenuation correction of I-123, but I don't know how.
I am doing research on using CT for attenuation compensation of I-123. However, my hospital doesn't have a hybrid SPECT/CT, and I don't know how to create a u-map conversion from CT for I-123. Do I have to run a special program to create the map? We use a Toshiba SPECT Ecam, and the CT is also Toshiba.
Thanks for your answer. We have OSEM and Butterworth reconstruction. Can that be used to input u-map data?
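The usual approach for deriving a u-map from CT is a bilinear scaling of the Hounsfield numbers: one linear segment from air to water, and a steeper segment above water to account for bone's different effective atomic number. A minimal sketch follows; the coefficient values here are illustrative placeholders, and the real slope and water attenuation must be calibrated for the I-123 photopeak (159 keV) and your scanner's kVp:

```python
def hu_to_mu(hu, mu_water=0.146, bone_slope=5.1e-5):
    """Bilinear conversion of a CT number (HU) to a linear attenuation
    coefficient (1/cm) at the SPECT photon energy.

    mu_water and bone_slope are illustrative placeholders; actual values
    must be calibrated for I-123 (159 keV) and the CT tube voltage.
    """
    if hu <= 0:
        # Air-to-water segment: linear scaling, clamped at air (mu = 0)
        return max(0.0, mu_water * (1.0 + hu / 1000.0))
    # Water-to-bone segment: separate slope because bone attenuates
    # differently at SPECT energies than at CT energies
    return mu_water + hu * bone_slope
```

Applied voxel-wise to the (registered) CT volume, this yields the u-map that the OSEM reconstruction can then use for attenuation correction.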
- Vinay Pai added an answer: What is the best way to set up a small cluster for neuroimaging data processing?
We are a small neuroimaging group from Munich. We are currently building a cluster to speed up neuroimaging analyses.
Hardware: 3x Fujitsu Primergy towers, Intel Xeon, 6 cores each (18 cores in total), 2x 512GB SSD each (3TB SSD in total), 3TB HDD each (9TB HDD in total), 96GB memory each (288GB memory in total). The nodes are connected using a professional Gigabit switch.
Software: Ubuntu Server (reduced to a minimum of necessary components).
Neuroimaging tools: FSL, AFNI, workbench, freesurfer, ANTs, NiPype, Matlab-based scripts
Study aim: Imaging studies with a sample size about n=120 (sMRI, DTI, fcMRI, fMRI)
We have a controversy on two different approaches:
1. Running everything on the hosts (current status); (further integration: separation of XNAT onto an external tower based on virtualization, as many dynamic changes will occur and for safety reasons, e.g. external accounts)
2. Building up a virtual environment (virtualization of the entire cluster with the aim of improved flexibility and security of the system; the performance loss is justifiable)
For either 1 or 2, XNAT will be integrated on another tower (Quad Core, 32GB working memory, 1TB SSD, 3TB HDD). We would like to run XNAT in a virtualisation. We plan to use XNAT as an interface to start analysis pipelines on the cluster.
Moreover, up to ten external user accounts will be created that are also accessible via the Internet. We expect no more than four users to use the cluster in parallel.
We would like to know whether someone here already has experience and would recommend either 1 or 2 for our aims.
Any recommendations/experiences? Thanks a lot!
NITRC-CE is open source (though it depends whether you are running stand-alone or off of AWS or Azure; obviously you would pay for AWS/Azure, but it is free for your stand-alone setup). Since you already have a cluster, you can just download it and run it stand-alone. It uses Nipype (which is Python-based).
- Does anyone know how to merge axial and sagittal MRI to artificially increase resolution?
I wondered whether it is possible to merge axial and sagittal T2w images of spinal cord lesions (same field of view) into one image. Does anyone know of software with such a pipeline included?
I once did some reconstruction of an image of an aortic dissection using OsiriX, but I never attempted it with the brain. This may be possible, at least in the professional version.
- Is it possible to obtain measures of total brain volume at different times during T2 functional acquisition?
The goal is to compare the total volume at the beginning and the end of the fMRI protocol. The voxel resolution we are using is 3x3 mm, which will ultimately give us a low resolution for volumetry, but it will still be of interest for us.
Some work on contour fitting was done back when I was in Erlangen. We used a 1-slice T1 volumetric scan and stripped it of zero-pixels, resulting in the contours of the skull. I think doing it with sliced MRI data is too prone to give non-significant results even where significant ones exist.
- Duco Kramer added an answer: Looking for suggestions for a retrograde tracer system (other than Fluorogold)?
I am currently looking for suggestions for a retrograde tracer that I can inject into rat brains via a stereotaxic surgery. I am aware of Fluorogold, however the secondary antibodies that Fluorochrome make would cross-react with other antibodies that I use (I am trying to avoid purchasing more antibodies).
Any suggestions for secondary antibodies to the Fluorogold beads from Fluorochrome?
Have you thought about using quantum dots? They are small fluorescent nanoparticles that do not quench.
- Alexandria Reynolds added an answer: I need to gain knowledge about EEG and other neuroimaging methods; could any of you recommend good books?
Hey, I need to gain knowledge about EEG and other neuroimaging methods. Could any of you recommend good books or articles?
I agree with Dr. Molfese's suggestion of Huettel's Functional Magnetic Resonance Imaging text - it is excellent:
- Phaedra Royle added an answer: What is the best way to deal with ocular artifacts in ERPLAB?
I'm currently working with ERPLAB and I'm looking for the best way to deal with ocular artifacts (I'm definitely not an expert...).
I have two EOG channels: one at the canthus and the other below the same eye. I saw that there are several options for artifact detection: step-like, moving window, voltage threshold, etc. My first idea was to use step-like and/or moving window, but I don't know if that is the best way.
I also wonder whether I should perform any channel operation with my EOG electrodes or not.
Any ideas/advice?
Dear Alexandre, like Salvatore, I would just do the analysis on artefact-free trials, if you have enough statistical power. Personally, I use artefact correction procedures in EEProbe, but only with children who blink too much and for whom we would otherwise lose a lot of data. For adults, artefact rejection is not a problem. Like Daniel, we try to control adult blinking. Since we work with language (often written) and images, we use an eyeblink prompting protocol (where participants are encouraged to blink between trials) to reduce artefacts during critical trials.
- Hasan Ayaz added an answer: What does the value of 4000 in auto-gain adjustment in fNIR Imager & COBI Studio stand for?
Is there any specific reason why auto-gain adjustment value is arranged as 4000?
Yes: 4000 mV at the detector (due to light intensity) is the limit of the Analog-to-Digital (A/D) converter, and higher values result in saturation. You need to adjust the LED current and brightness to lower the light intensity at the detector.
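In practice it is worth flagging saturated samples programmatically before any further analysis, since saturated channels carry no usable signal. A tiny sketch (the function name is hypothetical; 4000 mV matches the A/D limit described above):

```python
def find_saturated(samples_mv, limit_mv=4000.0):
    """Return indices of detector samples at or above the A/D limit.

    Channels that repeatedly hit the limit need a lower LED current
    before their data can be trusted.
    """
    return [i for i, v in enumerate(samples_mv) if v >= limit_mv]
```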
- Robin B Holmes added an answer: Does anyone know of a brain atlas of arterial territories that is registered to MNI or Talairach or some common space?
I need a file that I can download and use for data analysis (basically, automatically determining which vascular distribution a stroke has occurred in).
This link is to a NIfTI file with the regions from the 1998 Tatu neurology paper. It is not probabilistic, but it is the best I could find, and I have been looking for a few weeks!
- Stefan Bauer added an answer: Is anyone interested in an MRI brain tumor segmentation tool? Anybody interested in a simple tool for automated brain tumor segmentation from multimodal MRI images, please check out our BraTumIA (Brain Tumor Image Analysis) software. It can be downloaded from http://www.istb.unibe.ch/content/research/medical_image_analysis/software/index_eng.html
It requires manual setup for each patient; at the moment we do not yet offer it as a script for running on a larger database.
- Vladimir A. Kulchitsky added an answer: How do I measure neurite length using NeuronJ?
Can anyone please provide info about measuring neurite length using NeuronJ? What should be included in the calculations: vertices, tracings, or groups? Please also explain what vertices are. Many thanks in advance.
Dear Meha, with my pleasure. All the best. Vladimir
- Is there any research into how often short-term memory loss is the first sign of Alzheimer's?
I have just started researching brain imaging techniques for early Alzheimer's disease, something I know little about at present. It is often stated that short-term memory is the first area that is affected for the patient. However, is there any research into how often this is actually the case, and how often it isn't? I haven't come across any so far in the literature I have read. It is just often stated that it is the first sign, but I want something more substantive than that.
Short-term memory, from an imaging point of view, can be assessed by monitoring changes to the hippocampus. I am therefore investigating the existing techniques, and also seeing whether or not they can be improved on in any way using my experience of imaging for other neurological conditions.
Short-term memory is one way to describe it; others would be working memory, etc. I am not aware of any good epidemiological work addressing this issue in AD, but some information may be found in the vast literature on mild cognitive impairment, where AD seems to result mostly from amnestic MCI. The trouble is that there are most likely other symptoms, such as apathy and anxiety, which may precede MCI or AD, resembling the non-motor signs of Parkinson's. In neuropathology, the first area to be affected is Area 28 (the entorhinal cortex). This was described by Heiko Braak at the end of the 1980s, and recently updated to presymptomatic AD in a review published in Brain.
- Yves Matanga added an answer: What are the main challenges in brain fMRI analysis? When we analyze the fMRI signal, how do we combine it with MRI? What are the big challenges? Is it noise, or big data? What is the interplay between neuroscience and machine learning?
I believe Martín Martínez Villar mentioned a very important point regarding the hemodynamic response (fMRI), which I have read about concerning how it correlates with brain activity.
Maybe you can do some extra reading on it as well.
- Lasse Bang added an answer: How to properly weight contrasts in SPM?
Hi all, say you are modelling a task using a first-level model in SPM. The task consists of three conditions: congruent, incongruent, and control. When setting up the contrasts for this model, how would one properly weight the conditions for the following contrast: [congruent + incongruent] > control? Would this be 0.5 0.5 -1? On a related note, would the contrast [congruent + incongruent] > baseline be 0.5 0.5 0, or 1 1 0? I am confused, as I've read that positive (and negative) contrast weights should sum to 1, but I've seen people use contrasts such as 1 1 0 0 etc., which obviously do not sum to 1. Any input appreciated!
Thank you Niv for raising this point!
I omitted some details in the original post for simplification, as my question was a general one regarding contrast weights. Actually I am interested in between-group differences in my study (i.e. patients vs. controls). I am mainly interested in investigating between-group differences on the Incongruent > Congruent contrast.
Prior studies using a similar task paradigm have also compared groups on the [Incongruent + Congruent] > baseline contrast, so I thought that I would do the same (to be able to compare my results with those studies). But you do raise an important point regarding the use of such contrasts; i.e., one would be unable to know whether any significant between-group effects were mainly due to incongruent or congruent trials. I guess this could be clarified by extracting % signal change or raw beta weights from any significant clusters.
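One way to see why the scaling question above matters (or doesn't): a contrast estimate is just the dot product of the weight vector with the per-condition betas. Multiplying the whole vector by a constant scales the estimate and its standard error by that same constant, so the t statistic is unchanged; what 0.5/0.5 weights buy you is an estimate in interpretable units (the average of the two conditions versus control). A toy sketch with invented beta values:

```python
def contrast_value(weights, betas):
    """Contrast estimate c'*beta for one voxel: dot product of the
    contrast weights with the per-condition beta estimates."""
    return sum(w * b for w, b in zip(weights, betas))

# Toy betas for congruent, incongruent, control (invented numbers)
betas = [2.0, 3.0, 1.5]

# Mean of the two task conditions versus control
avg_vs_control = contrast_value([0.5, 0.5, -1.0], betas)
# Same comparison with doubled weights: the estimate simply doubles
sum_vs_control = contrast_value([1.0, 1.0, -2.0], betas)
```

Both vectors test the same null hypothesis; the averaged weighting is preferable when comparing effect sizes across studies because its units don't depend on how many conditions were summed.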
- Vladimir A. Kulchitsky added an answer: Does anyone know of good research linking (right) angular gyrus activity with theory of mind tasks?
I am looking for neuroimaging or neurostimulation research on the right angular gyrus, especially in regards to theory of mind tasks.
Thank you; I'm glad it is useful for the proposed work.
- Robert Turner added an answer: Are there publicly available MRI databases with T1, T2, T1 fat-suppressed, and T2 fat-suppressed images at 1 mm resolution?
I have looked at datasets such as the Human Connectome Project and ADNI, which have T1 and T2 weighted MRI images but I would also like to have fat suppressed images. I am looking for resolution of at least 1.5 mm and similar distances between slices. Anyone know of a database with these specifications?
You might find the following paper useful: Tardif CL, Schäfer A, Trampel R, Villringer A, Turner R, Bazin PL. Open Science CBS Neuroimaging Repository: Sharing ultra-high-field MR images of the brain. Neuroimage. 2015 Aug 25. pii: S1053-8119(15)00761-2. doi: 10.1016/j.neuroimage.2015.08.042. [Epub ahead of print]
This provides links to downloadable, very high quality, high-resolution (0.5 mm isotropic) quantitative images of the parameters T1 and T2* acquired at 7T. As a basis for human brain systems neuroscience, these are the best available anywhere.
- Gila Behzadi added an answer: How do I identify cortical layer boundaries in a live brain slice?
I am struggling to find cortical layer boundaries in mouse brain slices under a differential interference contrast microscope. I typically use 400 um-thick brain slices for my electrophysiology recordings. Under a 40x lens I can kind of see that the upper layers have smaller, denser somata, while in layer 5 the somata are bigger and sparser compared to the upper layers.
The best image I can find online is this one in somatosensory cortex (http://jn.physiology.org/content/92/4/2185), in which mouse barrels in layer 4 can be easily identified.
Could anyone give me some links of good examples for identifying cortical boundaries in live brain slice?
Thank you in advance!
According to the Paxinos & Watson atlas:
Barrel cortex: 2.3 mm posterior to Bregma, Lat 4 mm, V 1.5-1.8 mm from the cortical surface (layer 4)
Motor cortex: 1.7-2 mm rostral to Bregma, Lat 1.5-1.8 mm, V 1.7 mm from the cortical surface (layer 5)
- Sergiu Groppa added an answer: What do negative t values in fMRI results indicate? What clinical significance might this info provide?
My thinking is that when we use fMRI to assess BOLD signal activation in response to a task, to check which regions are activated, a positive BOLD signal indicates flow of oxygenated blood to a particular brain region. We then proceed with the t test to compare mean activation between the task and rest conditions and establish a threshold (say 3.40) above which voxels are considered active.
I have data with some small negative t values (0 to -3.40) and a few negative t values below -3.40 (e.g. -4.00). What would such a -4.00 t value indicate? Does it indicate a decrease in the BOLD signal in a particular region from rest to task? Or does it have some other significance?
- Muhammad Yousefnezhad added an answer: What is the difference between AFNI and SPM?
I want to know the differences between AFNI and SPM. Would you please tell me the pros and cons of each, and describe their applications?
Thank you in advance for your kind support.
Thanks very much for your full description.
- Manuel Dujovny added an answer: Does anybody know any software which can convert pictures from MHA format to NIFTI or DCM format?
Does anybody know any software which can convert pictures from MHA format to NIFTI or DCM format?
Check out the link below; it might help.
- Andreas Haslbeck added an answer: Any advice on dealing with eye-tracking logging asynchrony?
Has anyone had problems with synchrony between an eye-tracking system and the stimulus delivery software sending log messages to the eye-tracking system's logs? And if so, what could the possible sources of the problem be, how can one avoid it, and how can one deal with it?
We have an MR-compatible eye tracker from MR Technologies hooked onto Arrington Research's ViewPoint EyeTracker software on one PC. On a different but connected PC, Neurobehavioral Systems Presentation software controls stimulus delivery. I have Presentation communicate to the eye tracking software's logs when my video stimulus starts and ends because I was told it was more reliable to manually start the eye tracking system rather than trying to control it through commands triggered from Presentation. As far as I understand, the eye-tracking system logs a line of data every 33ms even when there's tracking loss. I expect that I should have the same number of eye-tracking data lines between my video log markers -- so if my videos are 33 fps, I assume I should have the same number of eye data points as frames for a given video -- is that correct?
However, the eye-tracking data corresponding to a video is on the order of up to 3 seconds (1-80 data points) longer than the video. For example, for a random video and according to the Presentation log files for some random 2 subjects:
video X: 102 frames (25fps) = 4.1 sec length of video (according to Presentation log files and video)
sub1: 182 lines eye data @ 30fps = 6 sec of data marked as recorded during the length of video
sub2: 166 lines eye data @ 30fps = 5.5 sec
I am very hesitant to assume that the "video start" log marker I had Presentation send to the eye-tracker system log files really corresponds to when the video started (and then to take only as much eye data as the video length, ignoring the "video end" log marker). Or could this be a safe assumption?
Thanks in advance for any help and explanations!
To add some comments to David’s answer:
- Some laptops (connected to eye-trackers) deliver different recording rates depending on their power supply: quite stable and high rate when connected to a regular power supply, but reduce their performance for very short intervals when using the battery – even when manufacturers say that this won’t happen.
- The same might be an issue concerning Ethernet communication.
- Sometimes there are background programs running and consuming the CPU's performance (every time other programs consume too much performance, recording programs skip single frames).
- Sometimes programs do not start immediately when you start them.
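The sanity check described in the question (comparing the number of logged eye-data lines against the video duration) can be automated in a few lines; the function names here are hypothetical:

```python
def expected_samples(n_frames, video_fps, tracker_hz):
    """Eye-tracker lines expected between the start/end markers,
    given the video duration implied by its frame count and frame rate."""
    duration_s = n_frames / video_fps
    return round(duration_s * tracker_hz)

def drift_seconds(n_lines, tracker_hz, n_frames, video_fps):
    """How much longer (in seconds) the logged eye data is than the video."""
    return n_lines / tracker_hz - n_frames / video_fps
```

With the numbers from the post, video X (102 frames at 25 fps, i.e. 4.08 s) should yield about 122 lines from a 30 Hz tracker, so sub1's 182 lines correspond to roughly 2 s of excess data, which quantifies the drift the poster describes.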
- Md. Asadur Rahman added an answer: Better than EEG? Does anyone know of any low-cost methods for non-invasively monitoring brain function, but yielding more information, more robustly than EEG?
I think we should not compare EEG and fNIR. As far as I know, fNIR has not been studied nearly as much; human society will have to wait some more years to understand fNIR completely.
- Nik Murphy added an answer: How can I follow up interactions in EEG analysis? Mixed opinions on the need for correcting multiple comparisons.
My question is about whether there is an established procedure for investigating interactions in EEG analysis.
Some papers seem to correct for multiple comparisons and others do not, so I am wondering if there is an agreed perspective as to when correction is needed. To use a relatively simple example, if I have a 3 way interaction between Factor 1 (three levels), Hemisphere (Left, Right) and Region (Anterior, Central, Posterior), I would investigate this by doing a single-comparison between the levels of Factor 1 in each combination of the Hemisphere*Region interaction.
But I have seen different approaches used to achieve this: One approach would be to take a subset of each site (i.e., Left-Anterior, Right-Central etc.) and do a one-way anova to investigate the effect of Factor 1 at each of these sites, then run (uncorrected) contrasts to investigate which levels of Factor 1 differ. The argument here is that the uncorrected follow-ups are licensed by effects/interactions in the preceding interactions.
Alternatively, a much stricter approach is to run corrected t-tests (e.g., Bonferroni) at every possible combination of Factor1*Region*Hemisphere (which, by most methods in R/SPSS, includes correcting for comparisons I am not interested in, e.g., Level 1-Anterior-Left vs. Level 3-Posterior-Right, and thus produces very strict p-values). So I am interested in what EEG experts would do, and precisely when (and to which single comparisons) they would apply a correction procedure, if any.
It may seem like a simple question, but it is one for which there seems to be very mixed approaches across the many papers I have read.
Interesting. Correction for multiple comparisons is always a good idea with repeated measures contrasts.
I'm a bit confused when you mention the P600 and LAN. Are you comparing these in the same ANOVA, and if so, why? But I won't delve into this without knowing fully how your comparisons are set up.
If you already know where your effect is localised (e.g. you have a strict region that shows the P600, and above you mention parietal-type sites), then remember the physiology of EEG here. These electrodes will all likely pick up the same activity, so you could feasibly take the regional average. You'll boost your power massively by reducing the number of levels in the ANOVA. Additionally, if your hypothesis is to see effects on the left, then maybe you could exclude the right hemisphere for now?
Or, as above, you could be more exploratory and use FieldTrip or the Mass Univariate Toolbox to perform non-parametric cluster-based permutation tests. However, beware that this might pick up on elements of the signal that are more difficult to explain (not necessarily the peaks).
Alternatively, you could reduce things down even further. Try running ICA on the datasets and then clustering the components. This might reduce the dataset to a level that could be compared in a much simpler model (e.g. 1 component, 2 conditions).
I might have gotten a bit carried away here (or missed the point), but I agree with all of the above posts: keep it simple. My input would be: don't go chasing significance, and be conservative with your tests. Also, remember to compute appropriate effect sizes and confidence intervals, as these will tell you much more about your effects/interactions than a p value will, regardless of how many corrections and levels you have in your model.
If you're really stuck, then try running all possible models and see what changes between them. You might surprise yourself.
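As a concrete middle ground between the uncorrected and fully corrected approaches discussed above, Holm's step-down procedure controls the familywise error rate like Bonferroni but is uniformly more powerful. A plain-Python sketch, assuming you already have the p-values of your follow-up comparisons in hand:

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Return one reject/keep flag per p-value under Holm's step-down
    method: test sorted p-values against alpha/m, alpha/(m-1), ...
    and stop at the first non-rejection."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, smallest p first
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # step-down: everything larger is also kept
    return reject
```

For example, with follow-up p-values [0.01, 0.04, 0.03, 0.005] at alpha = 0.05, only 0.005 and 0.01 survive: plain Bonferroni (alpha/4 = 0.0125 for all) would have kept only 0.005, which is the power gain of the step-down scheme.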
- Julieta Campi added an answer: Has anyone ever imaged calcium in brain slices (cortex or hippocampus) while perfusing glutamate?
I am trying to image calcium with a two-photon microscope in slices from mice injected with a virus to express GCaMP6. The expression is good and I can see activity when I electrically stimulate the slice. When I perfuse glutamate (100 uM in ACSF), I see no activity at all. Does anyone know what the problem could be? I was expecting to see a lot of activity!
Thanks in advance.
Thank you so much to all of you. In the end I tried puffing a much more concentrated Glutamate solution (1 mM) and I saw a lot of activity. Luckily, that was enough for my control. Thank you very much for taking the time to try to help me. It is really appreciated.
- Ali Hasan added an answer: How can I get a free MRI brain dataset?
I am wondering how to get free datasets of MRI brain scans. Many sites provide datasets, but in muv format; could you tell me what this format is? I have not found the appropriate software for this format.
Many thanks for your comments