Science topic
Brain Computer Interfaces - Science topic
Community for researchers who work with or are interested in BCIs and their applications.
Questions related to Brain Computer Interfaces
My company is looking to design an L&D program for employee skills identification. We are looking to use BCI technology for this, and it would be of great help if we could find an SME to collaborate with on the study.
What are the criteria for merging EEG datasets?
Are there certain conditions?
What are the potential standardising criteria?
The PhD scholarships in the School of CSEE, University of Essex, are open. Please see the details via https://junhuali.wixsite.com/home/prospective-students
Research Topics: Brain-Computer Interface, Deep Learning, Data Analytics
I have the UNM dataset for Parkinson's disease, which is publicly available in .mat format. I'm trying to work on it using MNE-Python, and for that I need the spatial distribution of the electrodes to generate the MNE object.
So far, after using scipy.io to read the data, I've got the following format for each electrode:
['FC5']
[]
[[-69.332]]
[[0.40823333]]
[[28.76282344]]
[[76.24736451]]
[[24.16690699]]
[[69.332]]
[[16.518]]
[[85]]
[[6]]
[]
About the last two entries: 85 is the same for all 63 electrodes and 6 here is the electrode index, but what are all the other numbers supposed to mean? Where are the coordinates? Can someone explain?
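These fields look like an EEGLAB-style chanlocs structure (labels, type, theta, radius, X, Y, Z, sph_theta, sph_phi, sph_radius, urchan, ...); under that reading, the three values after the two polar ones are the Cartesian coordinates, 85 is the spherical head radius of the template, and 6 is the original channel index. A minimal sketch of turning such a struct into an MNE montage, assuming that layout and with the file and field names as placeholders:

```python
# Minimal sketch, assuming the .mat file holds EEGLAB-style chanlocs fields;
# the file name and the EEG/chanlocs field layout below are placeholders.
import numpy as np
import scipy.io as sio
import mne

mat = sio.loadmat("PD_subject01.mat", squeeze_me=True, struct_as_record=False)
chanlocs = mat["EEG"].chanlocs            # adjust to wherever the struct lives in your file

ch_pos = {}
for ch in chanlocs:
    # EEGLAB axes: X toward the nose, Y toward the left ear (values look like mm).
    # MNE's head frame is X toward the right ear, Y toward the nose, Z up, in metres.
    x, y, z = float(ch.X), float(ch.Y), float(ch.Z)
    ch_pos[str(ch.labels)] = np.array([-y, x, z]) / 1000.0

montage = mne.channels.make_dig_montage(ch_pos=ch_pos, coord_frame="head")
info = mne.create_info(list(ch_pos), sfreq=500.0, ch_types="eeg")   # sfreq is a placeholder
info.set_montage(montage)
```

Since the labels are standard 10-10 names and the fixed 85 mm radius suggests template rather than digitised positions, simply setting a standard montage (mne.channels.make_standard_montage("standard_1005")) on an info object with matching channel names may serve just as well.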
1. Inverse EEG Problem
2. Volume Conduction Problem
3. Source Localisation Problem
Are there any research directions beyond these?
The research will include machine learning and deep learning based custom models.
Suggestions for a minimal-cost device for capturing brain signals would additionally help.
The software should aid visualization and data capture from the EEG sensing device, which transfers data over a Bluetooth connection.
The BCI system does not have a particular application; it is just based on working with cognitive tasks. But there is doubt about which tasks to use and why.
Currently, I am a Ph.D. candidate pursuing the development of innovative projects related to Biomedical Engineering and Mechatronics fields. In the last 7 years, I was deeply involved in the theoretical and experimental research of the Brain-Computer Interface to achieve the implementation of simple to use, appealing, and cost-efficient real-life applications by using portable headsets that should be valuable for people with neuromotor disabilities. Until now, taking into account different limitations, I focused on detecting voluntary eye-blinking as controlling signals in Brain-Computer Interface applications.
I am studying Information Engineering in an applied sciences university in Hamburg, Germany. I am starting my bachelor thesis next semester but I'm lost and I can't decide for an interesting topic to me which is suitable for a bachelor project.
In general I am interested in biomedical engineering, especially in brain-computer interfaces after doing some reading on the topic. Do you have any ideas for simple projects related to this field?
Many thanks to you.
Hello,
I am currently working on a project with a colleague that uses EEG data to classify emotions using the circumplex model (i.e. valence/arousal). We plan to use the DEAP dataset for emotion calibration. However, one point my colleague and I disagreed on was whether:
- Pre-recorded EEG data from the DEAP dataset can be directly used to train a classifier? OR
- Is it necessary to record participants' live EEG data while they are instructed to view items from the DEAP database in order to effectively categorize emotions with a classifier?
One concern I had is that pre-recorded EEG data from the DEAP database would not be as accurate for classification as having a group of participants view items from the DEAP database while their EEG activity is being recorded. However, my colleague suggests that recording raw EEG data from participants would be too time-consuming and less effective. Does anyone familiar with EEG and emotion classification have any insights? A rough sketch of the first option follows below. Any suggestions are appreciated.
DEAP dataset
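For what it is worth, training directly on the pre-recorded DEAP signals is straightforward to prototype before committing to new recordings. A minimal sketch, assuming the preprocessed Python release of DEAP (s01.dat .. s32.dat, pickled dicts with a 'data' array of 40 trials x 40 channels x 8064 samples at 128 Hz and a 'labels' array of 40 x 4 ratings):

```python
# Minimal sketch of training a classifier on the pre-recorded DEAP signals
# (assumes the "preprocessed Python" release of the dataset).
import pickle
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

with open("s01.dat", "rb") as f:
    subj = pickle.load(f, encoding="latin1")      # DEAP files were pickled under Python 2

eeg = subj["data"][:, :32, :]                     # first 32 channels are EEG
valence = (subj["labels"][:, 0] > 5).astype(int)  # binary high/low valence labels

# Log band power (theta/alpha/beta/gamma) per channel as features
fs = 128
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2, axis=-1)
bands = [(4, 8), (8, 13), (13, 30), (30, 45)]
feats = np.concatenate(
    [np.log(psd[:, :, (freqs >= lo) & (freqs < hi)].mean(axis=-1)) for lo, hi in bands],
    axis=1,
)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, feats, valence, cv=5).mean())
```

A leave-one-subject-out evaluation over the 32 DEAP subjects then gives a cheap estimate of how well a DEAP-trained classifier transfers to unseen people, which is essentially the question being debated above.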
Hello,
I am currently working on a project with a colleague using a Brain-Computer Music Interface (BCMI) to generate music from EEG signals. Affective states will be recorded with EEG signals and sent to a generative music algorithm. A couple of questions I had about the design:
1) We plan to use the Emotiv Epoc+ (14 channels). Does anyone know a way to run raw EEG data through EEGLAB in real time, or must the EEG data be recorded with external data acquisition software and analyzed separately? (A rough real-time sketch follows at the end of this post.)
2) To run the generative music algorithm, is it necessary to train a classifier to model emotion? Can an SVM or random-forest classifier be used to classify emotion from EEG signals, which can then be fed into the generative music algorithm? Or is this step unnecessary?
3) We plan to use the DEAP dataset for emotion calibration. However, one point my colleague and I disagreed on was whether:
- Pre-recorded EEG data from the DEAP dataset can be sent directly into the generative music algorithm? OR
- Is it necessary for participants' EEG data to be recorded while they are instructed to view items from the DEAP database, in order to gauge their affective brain states?
DEAP dataset
The generative algorithm is inspired by Ehrlich et al. (2019), and is designed to generate sounds reflective of the user's affective state.
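On question 1: EEGLAB is primarily an offline tool; a common route for real-time use is to have the acquisition software publish the samples on a Lab Streaming Layer (LSL) stream and consume them in Python. A rough sketch under that assumption (the classifier clf and the send_to_music_engine hook are hypothetical placeholders for an offline-trained model and your OSC/MIDI bridge):

```python
# Rough real-time sketch, assuming the Epoc+ samples are exposed as an LSL stream
# of type 'EEG'; `clf` (offline-trained classifier) and `send_to_music_engine`
# (bridge to the generative algorithm) are hypothetical placeholders.
import numpy as np
from scipy.signal import welch
from pylsl import StreamInlet, resolve_stream

inlet = StreamInlet(resolve_stream("type", "EEG")[0])
fs = int(inlet.info().nominal_srate())

window = []
while True:
    sample, _ = inlet.pull_sample()
    window.append(sample)
    if len(window) >= fs * 2:                              # 2-second sliding window
        x = np.asarray(window).T                           # channels x samples
        freqs, psd = welch(x, fs=fs, nperseg=fs)
        alpha = np.log(psd[:, (freqs >= 8) & (freqs < 13)].mean(axis=1))
        beta = np.log(psd[:, (freqs >= 13) & (freqs < 30)].mean(axis=1))
        feats = np.concatenate([alpha, beta])[None, :]
        state = clf.predict(feats)[0]                      # e.g. a valence/arousal quadrant
        send_to_music_engine(state)
        window = window[fs // 2:]                          # advance by half a second
```

On question 2: a trained classifier (an SVM or random forest on band-power features is a common choice) is the usual bridge between the EEG features and the music parameters; the alternative is to map band-power ratios to valence/arousal directly without any learning, which skips calibration but is generally less reliable across users.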
For image classification tasks there are many existing techniques to overcome the class imbalance problem. Can anyone please suggest the best way to overcome class imbalance in epoched EEG data?
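Two simple options that work at the epoch level, sketched below under the assumption that the epochs have already been turned into a feature matrix: cost-sensitive class weights, and random oversampling of the minority class inside the training folds only.

```python
# Sketch of two common options, assuming `epochs` has shape (n_epochs, n_features)
# after feature extraction and `y` holds the class labels.
import numpy as np
from sklearn.svm import SVC
from sklearn.utils import resample

# Option 1: cost-sensitive learning -- weight errors by inverse class frequency.
clf = SVC(kernel="rbf", class_weight="balanced")

# Option 2: random oversampling of minority-class epochs
# (do this inside the training fold only, never before the train/test split).
def oversample(epochs, y, random_state=0):
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    xs, ys = [], []
    for c in classes:
        idx = np.where(y == c)[0]
        if len(idx) < n_max:
            idx = resample(idx, replace=True, n_samples=n_max, random_state=random_state)
        xs.append(epochs[idx])
        ys.append(y[idx])
    return np.concatenate(xs), np.concatenate(ys)
```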
After designing an estimator that predicts attention state from EEG, we are considering implementing it in a virtual-reality neurofeedback application.
However, due to the covid related issues, it is very difficult to plan a big study with a large number of participants (as often the case in related research projects). I was wondering if it could be interesting to consider a study (or pre-study) with a small number of participants?
Hi,
I am working on a brain-computer interface application. Is there any possibility of extracting features through reinforcement learning? Can you please guide me to some tutorials and materials?
Greetings RG community,
I have been working on a pipeline for the classification of EEG motor imagery signals. This is currently being done on the Giga Science MI dataset with 52 subjects, using all 64 EEG channels. The question can be isolated to my first preprocessing step involving ICA by way of Hyvarinen's fast fixed-point algorithm. If I develop spatial filters (vectors of the unmixing matrix) using only the intended training data, is it a violation of appropriate protocol if I then project all raw data (which includes testing data) on these vectors in an attempt at blind source separation? The nuanced thing that brings about concerns is that the raw data is provided in two matrices each containing LH+RS and RH+RS signals (LH = left hand; RH = right hand; and RS = resting state). If the spatial filters wL and wR were constructed using the LH and RH training data respectively, then the original raw data of the LH and RH matrices (including both training and testing data) were projected into these directions prior to the partitioning of trials, is this considered using knowledge of the classes thereby rendering the entire analysis ineffective? At first, I thought I was in the clear because class labels were not used as ICA is unsupervised, now I think it pertinent to ask someone that may have experience in this field.
My results were great under these conditions (perhaps this would be obvious). To check if I could replicate results a different way, I vertically concatenated the LH and RH training data (doubling the number of samples in comparison to the conditions described above) and developed a single matrix of spatial filters then projected all original data onto these but the results were poor, indicating a large drop in spatial resolution. Ideally, I would like to develop a single set of spatial filters that can be applied to all data indiscriminately, if anyone has any advice given the situation it would be greatly appreciated. Since this step is being done prior to the partitioning of MI trials, I have considered performing ICA on some vertically concatenated trials after partitioning and was wondering if this would yield good results as I have also read that the resting state signals contain important information for the minimization of mutual information (maximization of differential entropy), so I am uncertain with this approach. I am also in the process of replacing FastICA with SOBI, JADE, and infomax in an attempt to gain higher spatial resolution. Please excuse any off-putting terminology as I recently pivoted from functional protein dynamics recognition and prediction to BCI-EEG motor imagery classification. Feel free to share any thoughts, all advice is welcomed.
Thank you for your time,
Tyler J Grear
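Not an authoritative answer, but the leakage-safe pattern is usually stated as: estimate the unmixing matrix from training data only, then apply that fixed transform to the held-out data without refitting. A small sketch with scikit-learn's FastICA (random arrays stand in for the real recordings):

```python
# Leakage-safe ICA pattern: fit the unmixing matrix on training data only, then
# apply the same fixed spatial filters to held-out data (random placeholders here).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
X_train = rng.standard_normal((5000, 64))   # placeholder: training-period EEG, samples x channels
X_test = rng.standard_normal((2000, 64))    # placeholder: held-out EEG

ica = FastICA(n_components=20, max_iter=1000, random_state=0)
S_train = ica.fit_transform(X_train)        # unmixing estimated from training data only
S_test = ica.transform(X_test)              # same filters re-used, no refitting
```

The subtler issue in the setup described above is not the fitting itself: if deciding whether to apply wL or wR to a given test trial requires knowing that the trial is LH or RH, class information has implicitly entered the test-time pipeline even though ICA never saw the labels; filters fitted on the pooled (or resting-state) training data and applied uniformly to everything avoid that.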
I am working on my master's thesis on the ethics of BCI.
Anyone in this field that could help me with the topic is welcome.
I find that the biggest change that has happened to me during this pandemic is the intense merging of humans and computers.
I am currently trying to figure out how to extract raw data from the Emotiv Epoc+ device. Any help regarding that would be appreciated.
I am a research scholar working on brain-computer interfaces for DOC patients. I am looking for an EEG dataset, specifically from patients with disorders of consciousness. Could you suggest any references?
In much of the tDCS literature I have reviewed so far, the position of M1 for anodal tDCS is given as coincident with C3/C4. Likewise, the position of primary somatosensory cortex S1 for cathodal tDCS is given as 2 cm posterior or occipital to C3/C4. But now I am reading "the course of the central sulcus (rolandic fissure) which separates the frontal lobe from the parietal lobe corresponds to thin lines touching CPz-C2-C4 and CPz-C1-C3, respectively, [& actually courses through the centers of C4 & C3, respectively.] The two gyri immediately neighboring the central sulcus are the primary motor cortex (in frontal direction), and primary sensory cortex (in occipital direction)."
If it is true that it is the central sulcus itself that is coincident with the C3/C4 positions and that primary sensory cortex is estimated at approx. 2 cm occipital/posterior, then why is primary motor cortex not estimated as 2 cm frontal / anterior? I have not seen this discussed anywhere in the literature I've reviewed so far.
I am also trying to match up the M1 & S1 homuncular maps with their approximately corresponding electrode positions, understanding that only one electrode position each intended to stimulate all of M1 or S1 is much too coarse for the application we have in mind. Does anyone have a reference they would be willing to share which ideally would match up the 10-20 electrode positions in the vicinity of C3/C4 with their approximately corresponding somatosensory & somatomotor functional homunculi with higher resolution & greater specificity?
Quite a few published researchers are using Amrex branded sponge electrodes with banana jack connections for 1x1 low resolution tDCS. We have attempted to use them & have encountered several problems including one serious safety problem.
The electrode in question is a 3" by 3" square non-conductive rubber frame containing conductive wire mesh overlaid with a removable coarse kitchen-type sponge that protrudes out of a 2" by 2" aperture when soaked in saline. The rubber frame is stiff & does not conform well to the curvature of the cranium, especially with smaller subjects. This in turn results in difficulty placing it accurately & reproducibly & also in making good & uniform electrical contact. Though the maximum contact area of the sponge on the scalp is ideally 4 in² (25 cm²), in practice it is considerably less & variable with only a central area of contact which can be approximated as a circular disc inscribed within the 2" by 2" square aperture. This leads in turn to the most serious problem:
Injected current levels up to 2.0 mA are routine in tDCS research. The research community generally accepts a current density limit of 0.08 mA/cm² for the safety of the subject's skin in contact with the electrode & also to minimize potential damage to the underlying brain tissue. Even if the 2" by 2" sponge made perfect contact with the skin, at the 2.0 mA injected current level the current density limit is reached, exactly, as bulk current density = current / cross-sectional contact area = 2.0 mA / 25 cm² = 0.08 mA/cm². But these electrodes do not make perfect contact even when they are secured tightly, because of the rigid frames enclosing the sponges. So the contact area is rather less, resulting in the denominator being smaller and the current density necessarily exceeding the safety limit. Even at somewhat lower levels of injected current, taking the variable contact area of the sponges into account, the current density could easily exceed the safety limit. Furthermore, this is a very coarse bulk analysis. Taking nonhomogeneity, edge & corner effects into account, local areas of unacceptably high current density are unavoidable & can be demonstrated convincingly with a more sophisticated analysis (one using finite element methods for example).
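Making the arithmetic in the previous paragraph explicit, with the inscribed disc standing in for the reduced contact area:

```python
# Bulk current density for a 2" x 2" sponge at 2.0 mA, for full contact versus the
# inscribed-disc approximation of the actual contact area described above.
import math

current_mA = 2.0
full_square_cm2 = 5.08 * 5.08                    # 2" x 2" aperture, ~25.8 cm^2
inscribed_disc_cm2 = math.pi * (5.08 / 2) ** 2   # disc inscribed in the aperture, ~20.3 cm^2

print(current_mA / full_square_cm2)     # ~0.078 mA/cm^2: right at the accepted limit
print(current_mA / inscribed_disc_cm2)  # ~0.099 mA/cm^2: above the 0.08 mA/cm^2 limit
```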
Yet another practical problem with these electrodes is they have a strong & pungent odor which research subjects find objectionable, penetrates their hair & endures on the electrodes even after successive washings. If one electrode is placed supraorbitally, as is a common position in tDCS, the obnoxious smell in close proximity to the subject's nose even has the potential to affect the outcome of the experiment because it induces stress & stress-related neurological activity that has the potential to confound results.
Dear All,
I want a standard dataset of EEG signals for movement intent. I want standard datasets for left, right, front, back, start, and stop movement intentions, covering the alpha, beta, and gamma bands. Please let me know where I can find such a standard dataset.
I'm trying to design an SSVEP scenario and wondering if there are any ways to cross-check if the stimulus frequency is displaying accurately on the LCD screen.
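One quick software-side check is whether the requested frequency divides the monitor refresh rate into an integer number of frames, since a frame-locked stimulus can only render those frequencies exactly; the sketch below shows what actually gets rendered when it does not (refresh rate assumed to be 60 Hz). The definitive hardware check is a photodiode taped over the flickering patch and recorded alongside the EEG.

```python
# Check which requested SSVEP frequencies a frame-locked stimulus can render exactly
# at an assumed 60 Hz refresh rate, and what they collapse to when rounded to frames.
refresh_hz = 60.0
for f_req in (7.5, 10.0, 12.0, 13.0, 15.0):
    frames_per_cycle = refresh_hz / f_req
    f_rendered = refresh_hz / round(frames_per_cycle)
    print(f"{f_req:5.1f} Hz -> {frames_per_cycle:5.2f} frames/cycle, rendered ~{f_rendered:5.2f} Hz")
```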
Hello. I am thinking of using tDCS to stimulate the motor cortex C3/C4 or the SMA Cz to study possible effects on motor imagery. But I am still undecided which area would be better.
Thank you for any advice.
How can cognitive load be measured from brain activation?
I have been using the ERD/ERS% formula to get an indication of any increase or decrease in overall cognitive load, but I would like to know whether there is any other way to measure the different types of cognitive load (extraneous, intrinsic, germane).
Has anyone used a cheap commercially available brain computer interface, such as "NeuroSky MindWave", or "Muse: The Brain Sensing Headband" in the learning process, in gamification? Are works on this topic known?
Why are EEG trials recorded for the same stimulus different? For example, the same visual stimulus, such as a 12 Hz SSVEP, produces comparable but unique time series.
Other than noise, under ideal conditions, should the EEG trials be the same?
g.tec brain cap + g.GAMMAbox + Nihon Kohden input box + Nihon Kohden amplifier.
If you have experience on this combination please give your comments. (Functionality, Operation, Limitations and Advantages)
Are there any freely available Local Field Potential (LFP) datasets available, especially recorded during motor actions?
Nick Lavars, 2017, Rise of the mind-reading machines [online]. Available at: http://newatlas.com/mind-reading-machines-musk-future/48642/. Accessed 05.30.2017
As I went through this article, I learnt that the brain-machine interface is getting closer to us as a possibility, and the day may come when you wake up and machines will be reading your mind.
It is true that it would help people with stroke, and we may be able to communicate with them to learn their needs, BUT in the wrong hands it may be a problem, as your very personal data will be reachable. I am not sure how this will translate in the future. For good sometimes, but perhaps bad in the end.
Kindly enlighten me with your views

I wish to purchase an Emotiv headset for my research. Can anyone suggest institutes in India utilising this or a similar setup?
With today's technology, we can acquire signals from the brain. The question is: can we process these signals and obtain meaningful content? (For example, when the target brain is thinking of a number, we show that number on a computer.)
The use of electroencephalography in the field of brain-computer interfaces (BCI) has acquired relevance, with varied applications in medicine, psychology, neuroscience, and psychiatric studies that aim to understand brain states and predict various brain disorders in clinical settings.
In 2010, during the Jules Verne's Corner (https://www.itu.int/ITU-T/uni/kaleidoscope/2010/julesverne.html), my fellow panelist from Japan presented his research on Brain-Computer Interfaces (BCIs), or Brain-Machine Interfaces (BMIs). Since then, neurotechnology has made significant progress and there are a number of implementations.
I intend to work on a brain-computer interface using an EEG headset by Emotiv, but I am unsure whether I should buy their SDK or not.
I'm a master's student in Computer Engineering. I'm interested in BCI (Brain Computer Interface) and I want to write a thesis in this area, but I couldn't come up with a satisfying subject! I'd be glad to hear your suggestions.
Hi,
Has anyone got any reviews of g.tec's g.Nautilus dry-electrode system or the g.USBamp compared with the g.MOBIlab+? I am currently using an 8-channel g.MOBIlab+ with active electrodes and am thinking of upgrading to a 16/32-channel system. I will be using the system for a BCI application.
Thanks.
Researchers use machine learning classifiers to predict what brain activity means. My question is: why not just apply signal processing to the fMRI data to convert it into 0s and 1s, so that we can reconstruct what the subject imagined easily and quickly? I just don't get this point.
In ERP/P300 signal analysis, xDAWN is well-known to find the spatial filter.
I have read several reference papers about xDAWN, such as
xDAWN Algorithm to Enhance Evoked Potentials: Application to Brain–Computer Interface
A Tutorial on EEG Signal Processing Techniques for Mental State Recognition in Brain-Computer Interfaces
But I still do not understand xDAWN very well. So far, I know that the first column of D is zero except at the positions of the stimulus onsets, but what about the other columns? Or do we not need to know the others in order to create the Toeplitz matrix?
Would you please give me an example? Where can I find source code for xDAWN so that I can study it further?
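My reading of Rivet et al. (2009), offered tentatively: in the model X = DA + N, column j of D is simply the stimulus-onset indicator shifted down by j samples, so that DA stamps the Ne-sample ERP template A at every onset; nothing beyond the onset positions and the chosen ERP window length is needed. A small construction sketch:

```python
# Sketch of building the xDAWN Toeplitz matrix D for X = D @ A + N
# (column j of D is the stimulus-onset indicator shifted down by j samples).
import numpy as np
from scipy.linalg import toeplitz

n_samples = 2000                 # length of the recording (samples)
n_erp = 128                      # assumed ERP window length (samples)
onsets = [100, 450, 900, 1500]   # stimulus onset sample indices (example values)

first_col = np.zeros(n_samples)
first_col[onsets] = 1.0                      # the column you already know about
first_row = np.zeros(n_erp)
first_row[0] = first_col[0]                  # keep the (0, 0) corner consistent

D = toeplitz(first_col, first_row)           # shape (n_samples, n_erp)
# Column j therefore has ones at rows (onset + j); no extra information is needed
# beyond the onset positions and the chosen ERP window length.
```

For reference implementations to study, MNE-Python ships mne.preprocessing.Xdawn and pyRiemann provides an Xdawn spatial filter class; both are open source and readable.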
Can anyone provide me with Arduino and OpenBCI code for connecting two external buttons to an 8-channel OpenBCI system (ADS1299), please?
I'm looking for an off-the-shelf device that we can extract data from, in an out-of-lab environment.
Hi, I am currently working on motor imagery BCI (Brain-Computer Interface). I am completely new to this field. I downloaded data from BCI Competition IV (dataset 2b, left-hand and right-hand classes). I extracted the alpha (8-12 Hz) and beta (14-30 Hz) signals using a band-pass filter for the C3 and C4 electrodes for different trials, then I calculated the average power over all trials. But I do not know how to calculate event-related desynchronization/synchronization (ERD/ERS) in MATLAB; I do not know how to calculate the baseline power and the relative power change in %ERD/ERS. Can anybody tell me how to do this in MATLAB, please?
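A minimal NumPy sketch of the classical band-power ERD/ERS% computation (the steps translate one-for-one into MATLAB); random numbers stand in for the band-pass-filtered C3 or C4 epochs, and the first two seconds of each epoch are assumed to be the pre-cue reference period:

```python
# Classical band-power ERD/ERS% computation (Pfurtscheller-style). Assumes `trials`
# holds band-pass-filtered epochs for one channel/band: shape (n_trials, n_samples),
# with the first fs*2 samples being the pre-cue reference period.
import numpy as np

fs = 250                                      # BCI Competition IV 2b sampling rate
rng = np.random.default_rng(0)
trials = rng.standard_normal((120, fs * 7))   # placeholder for the real filtered epochs

power = trials ** 2                           # instantaneous band power per trial
avg_power = power.mean(axis=0)                # average over trials, A(t)

# Optional smoothing with a short moving average for readable curves
win = fs // 4
avg_power = np.convolve(avg_power, np.ones(win) / win, mode="same")

R = avg_power[: fs * 2].mean()                # reference (baseline) power, pre-cue
erd_percent = (avg_power - R) / R * 100.0     # ERD/ERS% over time; negative = ERD
```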
I am working with the NeuroSky headset, and while there are many advantages to using it, such as low cost and portability, I would like to know what common problems or limitations other researchers have experienced while using the kit.
We are currently working on a project which requires identifying a target response in EEG while the subject is viewing a video stream. During the experiment, the subjects are asked to watch a live video feed; when a target, say a car that has been shown to the subject previously, enters the screen, we try to identify the target-induced EEG changes.
As far as the target response is concerned, the RSVP or "oddball" paradigm is usually employed in the existing literature. Does anyone have any experience of using video as a stimulus to identify the target response?
I have an EEG dataset which is about 5 minutes long for each subject. I want to detect and correct existing artifacts using an ICA approach. I can apply this method to the whole recording of each person, or first epoch the data and then apply ICA to each epoch separately. Which one is more accurate?
Is it possible to extract data (neural information) from the brain by using radio waves?
1. Is there any technology for brain-computer interfaces that uses radio waves?
2. In other words, can we use radio waves to extract data from neurons or to learn about neuronal activity?
Can radio-wave technology help with brain-computer interfaces?
Note: if it is possible, please suggest scientific articles on the subject.
I would like to create realistic visualizations of neural implants (see the attached image for a science fiction version) and need some details regarding the electrodes used. The diameter of the electrodes is comparable to the diameter of large neurons, am I right? Are single neurons always targeted? Where are the electrodes attached (cell body, axon)? Maybe groups of neurons are sometimes sufficient? And what about measuring in close vicinity to neurons? Referring to neuronal tissue, what is the percentage of neurons targeted?

I want to know how blood perfusion values are distributed in brain tumors. Is there a blood-flow distribution pattern in some types of brain tumors (for example, a higher value at the margins than at the center of the tumor)?
I have performed CSP on EEG data and ended up with a complex projection matrix. Is this normal, or am I doing the processing wrong?
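Probably not a cause for alarm: complex filters usually mean the generalized eigenproblem was solved with a routine for general (non-symmetric) matrices, e.g. eig(inv(C1+C2) @ C1), where round-off breaks the symmetry and lets complex-conjugate pairs appear. A sketch using the symmetric-definite solver, which is guaranteed to return real filters:

```python
# CSP via the symmetric-definite generalized eigenproblem, which always returns
# real-valued spatial filters (covariances are symmetrized to guard against round-off).
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, n_pairs=3):
    """X1, X2: class-wise trials, shape (n_trials, n_channels, n_samples)."""
    def avg_cov(X):
        C = np.mean([np.cov(trial) for trial in X], axis=0)
        return (C + C.T) / 2                  # enforce exact symmetry
    C1, C2 = avg_cov(X1), avg_cov(X2)
    evals, evecs = eigh(C1, C1 + C2)          # real eigen-pairs, ascending order
    order = np.argsort(evals)
    sel = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return evecs[:, sel]                      # (n_channels, 2*n_pairs), real-valued
```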
I'm currently thinking of buying the V-probe, but heard a rumor that they are not as reliable as the U-probe. Any comments or experiences with either of these?
Any possible reasons, explanations will be great!
Hi! I'm looking for books or journals related to the use of brain-computer interfaces in the accessibility area.
NIRS to detect sleep patterns.
I am currently trying to employ measures of fractal dimension to analyse EEG data. In particular, the methods proposed by Higuchi, Maragos & Sun, Petrosian, as well as the box-counting dimension.
In one way or another, each method calculates the fractal dimension from the slope of a double logarithmic plot, where the abscissa is given as ln(1/k), with k representing a scaling parameter.
However, there appears to be no guidance provided in the literature that suggests which value of k, or how many k points should be used to accurately calculate the FD.
I appreciate that the choice of the scaling parameter k is specific to each fractal dimension.
Any advice on how to define the scaling parameter region, either on a global level or for one of the individual fractal measures mentioned above, would be greatly appreciated.
Thank you in advance,
Matt.
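Not a definitive rule, but a practical heuristic I have seen used is to increase k_max until the fitted dimension stops changing, and to inspect the log-log points directly for the linear scaling region rather than trusting a single global fit. A sketch of Higuchi's method set up that way:

```python
# Higuchi fractal dimension returning the log-log points so the linear scaling
# region can be inspected; increase k_max until the estimate stabilises.
import numpy as np

def higuchi_fd(x, k_max):
    x = np.asarray(x, dtype=float)
    n = len(x)
    lengths = []
    for k in range(1, k_max + 1):
        Lk = []
        for m in range(k):
            idx = np.arange(m, n, k)
            # Higuchi's normalised curve length for offset m and scale k
            lm = np.abs(np.diff(x[idx])).sum() * (n - 1) / ((len(idx) - 1) * k * k)
            Lk.append(lm)
        lengths.append(np.mean(Lk))
    log_inv_k = np.log(1.0 / np.arange(1, k_max + 1))
    log_L = np.log(lengths)
    slope, _ = np.polyfit(log_inv_k, log_L, 1)   # slope of the double-log plot = FD
    return slope, log_inv_k, log_L

rng = np.random.default_rng(0)
signal = np.cumsum(rng.standard_normal(2000))    # Brownian-like placeholder, FD ~ 1.5
for k_max in (4, 8, 16, 32, 64):
    fd, _, _ = higuchi_fd(signal, k_max)
    print(k_max, round(fd, 3))                   # watch where the estimate stabilises
```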
Is there any intracranial EEG system available, which is fully implantable - i.e. like the NeuroPace system, but without the responsive stimulation part?
If you are aware of such a product, or of research groups doing advanced research on such a device, I would strongly appreciate your information.
best regards
KS
In the complex system of the executive functions the basal ganglia are of great significance. These control cognitive activities such as spatial memories, the execution of motor actions in a specific context and motivational elements of learning. The cortex and the basal ganglia are closely linked and control, also through the cerebellum, the motivational aspects of a movement (the preparation for the action), the contextual aspects (the execution of the movement) and its state of execution. Now, in what way does this complex system generate rapid and unexpected actions?
Dear group, we are trying to connect two 32-channel EEG caps to one 64-channel device. Does anyone have experience with how the ground electrodes should be placed? We can use NuAmps, Deymed, or SynAmps EEG devices.
Hi,
I want to analyze the volume of hyperintense spots in an MRI data file in FIJI (ImageJ).
I firstly project this file to 3D format using the Stacks -> 3D project tool.
Using the point tool, I can select which intensity should be used for the calculation (for this I currently only know the 3D Objects Counter, but that is a bit messy in my view; any other options?).
Then, using the 3D Objects Counter, I calculate the number of detected spots.
But now I want only to use part of the 3D scan (which I compiled first) to be able to analyze only the intense spots in the frontal lobe and prefrontal cortex and in the hippocampus.
Is there an easy way to calculate this? (total volume of hyperintense spots in frontal lobe and hippocampus from MRI (.nii) data)
Thanks in advance,
A
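An alternative to doing everything inside Fiji: if you have (or can make) a binary mask of the frontal lobe plus hippocampus in the same space as the scan, the volume falls out of a few lines of nibabel/NumPy. File names and the intensity threshold below are placeholders:

```python
# Compute the volume of hyperintense voxels inside a region mask directly from the
# .nii file; file names and the threshold are placeholders to adapt to your data.
import nibabel as nib
import numpy as np

img = nib.load("subject_scan.nii")
data = img.get_fdata()
mask = nib.load("frontal_plus_hippocampus_mask.nii").get_fdata() > 0

threshold = 300.0                              # choose from the image histogram
hyper = (data > threshold) & mask

voxel_volume_mm3 = np.prod(img.header.get_zooms()[:3])
print(hyper.sum() * voxel_volume_mm3, "mm^3 of hyperintense tissue in the ROI")
```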
Hi, hope you are fine. I am a Master's student in Mechatronics Engineering (Air University, Islamabad, Pakistan) and am currently doing my research on an fNIRS-based BCI (Brain-Computer Interface) system. I have gone through the journal paper "A regularized discriminative framework for EEG analysis with application to brain-computer interface" (Volume 49, Issue 1, pp. 415-432, January 2010). In it, a new algorithm that unifies feature extraction, feature selection, feature combination, and classification is presented. It is very informative and I am trying to get through it, but I am facing difficulty in understanding the algorithm. I only have raw data (which I recorded from the brain), and I am a bit confused about how the above-mentioned steps can be unified. It is my humble request that you kindly provide the regularized discriminative framework code and any helping material that can be used for selection, so that I can continue my research. I shall acknowledge your work properly and cite it in my research papers. Thanks.
I used the LabChart program v5 for recording EEG signals in anaesthetized rats. Using LabChart Reader v8, I extracted parameters such as maximum power, amplitude, duration, etc. I would like to know which parameters are best to use for quantitative EEG analysis.
How can one find the ERP component in a prospective memory task after stimulus presentation, and which ERP component (peak) represents it?
I have been doing some research with the Emotiv Epoc, but it has a lot of disadvantages; for example, it is almost impossible to evaluate female participants with long hair and small heads, as the device does not get any signal from them. Furthermore, the Emotiv sometimes loses the signal from individual electrodes, and the quality of the data it provides is not so great.
Do you know any other low-cost EEG devices that you would recommend for studies regarding emotions?
Hi,
Is there a publication on dataset 1 of BCI Competition III describing the class labels (original class labels) for the testing data?
Thanks.
I am doing a literature review on non-parametric regression techniques.
I would like to ask those familiar with the topic about the advantages and disadvantages of ANNs compared to other non-parametric regression techniques such as:
- MARS (Multiple Adaptive Regression Spline)
- Projection Pursuit Regression
- Gaussian Process Models (?)
- Additive Models
Does anyone have comparative literature on this?
Your contribution will be of great help.
I am working with the NeuroSky EEG headset. I can get wave parameters such as alpha, beta, and gamma levels, as well as attention and meditation levels. As per my survey, researchers work on brain-computer interaction (BCI) with EEG signals. I need to know the impact on brain waves when working within a real-time wireless sensor network.
What methods or transforms (other than CCA) can be used for SSVEP signal detection?
I am using SSVEP signals for a BCI speller.
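Besides CCA, the classical alternative is power spectral density analysis (PSDA): score each candidate frequency by the SNR of the PSD at the target bin (plus its second harmonic) against neighbouring bins and pick the winner; more recent options worth looking up are the minimum energy combination (MEC), the multivariate synchronization index (MSI), filter-bank CCA, and TRCA. A PSDA sketch, assuming an integer sampling rate and a short occipital segment:

```python
# PSDA-style SSVEP detection: pick the candidate frequency with the highest
# spectral SNR (target bin plus second harmonic against neighbouring bins).
import numpy as np
from scipy.signal import welch

def psda_detect(eeg, fs, candidate_freqs, n_neighbors=4):
    """eeg: (n_channels, n_samples) occipital segment; returns the detected frequency."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs) * 2)
    psd = psd.mean(axis=0)                          # average PSD over channels
    def snr_at(f):
        i = int(np.argmin(np.abs(freqs - f)))
        neighbors = np.r_[i - n_neighbors:i, i + 1:i + n_neighbors + 1]
        return psd[i] / psd[neighbors].mean()
    scores = [snr_at(f) + snr_at(2 * f) for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(scores))]
```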
So far I have found the equation below to measure concentration level, but the reference is not that solid and I cannot rely on it. Any other equation, or a solid reference that supports this equation, would be highly appreciated.
concentration level = ( (SMR + Beta) / Theta)
where SMR = SensoriMotor Rhythm
Hello,
I have two volumes: one structural MRI and a functional NIfTI atlas, and I need to coregister these two volumes.
What would be suitable software to do this task (Slicer, FreeSurfer)? Slicer is friendly software with a GUI, but I don't know how accurate the result is.
What are your suggestions?
I would like to implement the approximated surface Laplacian (SL) estimated using Hjorth Algorithm but I can't find an open literature on the approximation of The SL at scalp edges as suggested in "Spatial Filter Selection for EEG Based Communication" by McFarland et al.
They mention Zhou's work for edge electrodes SL approximation but I do not have access to that literature.
Does anyone have relevant literature on that?
Or does anyone know how to compute the large Laplacian, also called the "next-nearest-neighbour SL" (up to the edge of the scalp)?
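I have not seen Zhou's paper either, but a sketch of the Hjorth (nearest-neighbour) Laplacian is below; the large Laplacian is the same computation with next-nearest neighbours, and the common practical treatment of edge electrodes, as far as I know, is simply to average over whichever of the designated neighbours actually exist on the cap. The neighbour map shown is a small illustrative fragment, not a full montage:

```python
# Hjorth-style surface Laplacian: subtract the mean of each electrode's designated
# neighbours; edge electrodes just use the neighbours that exist on the cap.
import numpy as np

# Illustrative fragment of a neighbour map for a 10-20 montage (not complete).
NEIGHBORS = {
    "C3": ["FC3", "CP3", "C1", "C5"],
    "C5": ["FC5", "CP5", "C3", "T7"],
    "T7": ["FT7", "TP7", "C5"],          # edge electrode: one neighbour is missing
}

def surface_laplacian(data, ch_names, neighbors=NEIGHBORS):
    """data: (n_channels, n_samples); returns a Laplacian-filtered copy."""
    idx = {name: i for i, name in enumerate(ch_names)}
    out = data.copy()
    for ch, nbrs in neighbors.items():
        present = [idx[n] for n in nbrs if n in idx]
        if ch in idx and present:
            out[idx[ch]] = data[idx[ch]] - data[present].mean(axis=0)
    return out
```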
I am working on brain-computer interfaces and would like to ask the scientific community if there is an existing mathematical model that can simulate the neural adaptation of the brain, especially for application in sensorimotor-rhythm-based brain-computer interfaces.
Your Input will be of great help.
Thanks.
In EEG classification systems (for BCI), consider two scenarios where
1) Acc = 90%; T = 3 sec; (time window)
2) Acc = 70%; T = 1 sec;
The ITR for condition 2 is higher than for condition 1, even though there is a significant decrease in accuracy. (A worked computation follows below.)
So, is ITR always a better option than accuracy for judging the efficiency of a model?
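For concreteness, the standard Wolpaw ITR with those two operating points; whether the faster but less accurate setting really wins depends on the number of classes N, so it is worth computing both (example values for N = 2 and N = 4):

```python
# Wolpaw ITR (bits/min) for the two operating points above, for two example class counts.
import math

def itr_bits_per_min(n_classes, acc, trial_seconds):
    if acc <= 1.0 / n_classes:
        return 0.0
    bits = (math.log2(n_classes)
            + acc * math.log2(acc)
            + (1 - acc) * math.log2((1 - acc) / (n_classes - 1)))
    return bits * 60.0 / trial_seconds

for n in (2, 4):
    print(n, round(itr_bits_per_min(n, 0.90, 3.0), 1), round(itr_bits_per_min(n, 0.70, 1.0), 1))
# With N=2 the slow/accurate setting has the higher ITR; with N=4 the fast one does,
# so ITR and raw accuracy can rank the same two systems differently.
```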
I have been struggling with OpenViBE for 3 weeks. I will use the P300 magic card scenario for a visual event-related potential BCI application. The images and background can be changed easily as required.
1. a. In acquisition.xml, the data are recorded in the signal folder. The subject ID must exist in the data file name; how can I add the ID to the filename?
b. The trial and session numbers in the acquisition.xml GUI are not the same as in the application.
c. Why do not all images flash in acquisition.xml? Only some of them flash. During acquisition, I think data from all images must be recorded.
d. I tried to convert the .ov file into .csv or .edf using the generic file reader and the CSV/EDF writer, but it failed.
2. a. In the train-classifier.xml file, by default the generic stream reader reads signal/bci-p300-signal.ov. Why is only one file used? For example, how can I use data from 18 subjects?
b. How can a MATLAB classifier be applied in this part?
Please help me.
I am looking for a simple technique/algorithm for artifact correction (not rejection) for analyzing oscillatory processes in EEG signals.
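One simple correction (rather than rejection) option, related in spirit to wavelet-enhanced ICA approaches: decompose each channel, clip the abnormally large wavelet coefficients that carry transient artifacts, and reconstruct, so that sub-threshold oscillatory activity passes through untouched. A sketch with PyWavelets; the universal-threshold rule is a common heuristic, not the only choice:

```python
# Wavelet-based artifact correction sketch: clip detail coefficients at a robust
# threshold so high-amplitude transients are attenuated while sub-threshold
# oscillations are left unchanged.
import numpy as np
import pywt

def wavelet_artifact_correct(signal, wavelet="sym4", level=6):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    out = [coeffs[0]]                                    # keep the approximation as-is
    for d in coeffs[1:]:
        sigma = np.median(np.abs(d)) / 0.6745            # robust noise-scale estimate
        thr = sigma * np.sqrt(2 * np.log(len(d)))        # universal threshold
        out.append(np.clip(d, -thr, thr))                # remove only the excess amplitude
    return pywt.waverec(out, wavelet)[: len(signal)]

fs = 250
t = np.arange(5 * fs) / fs
x = np.sin(2 * np.pi * 10 * t)                           # 10 Hz oscillation
x[500:520] += 40 * np.hanning(20)                        # simulated blink-like transient
cleaned = wavelet_artifact_correct(x)
```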