Questions related to Event-Related Potentials
We have been dealing with digitization as a 'new concept' for more than 10 years now. Previously, it was referred to as automation and ERP/MES/CRM. Nowadays, the latest systems are encompassed under the term digitization, as if it were a fundamental novelty. For years, the terms Data Mining, Process Mining, and Artificial Intelligence (AI) have also been part of the discussion. Suddenly, the term AI appears to overshadow everything. How do you see the connections between automation, ERP/MES/CRM, digitization, Data/Process Mining, and AI?
Would differences in specific ERP waveforms between craniosynostosis patients and normally developing infants be enough to indicate a pathology, or is an expert with clinical experience required to perform a more accurate comparison?
Dear members, in order to obtain my second Master's degree, I would like your help in completing this survey on ERP systems. ERP is a set of software used in a company to increase productivity and maximise profits through the use of new technologies; examples include SAP, Microsoft Dynamics 365, Salesforce, Odoo, Alitor, etc. The competitiveness of a business today rests essentially on its technological tools and innovations: the more innovative a company is, the more it dominates its market. Having already completed a previous Master's degree on innovation (technology as a competitive advantage for businesses), I would like to obtain another with your help through this link: https://forms.office.com/pages/responsepage.aspx?id=qbEun-fRI0271nxU_KSKMA-Rq1jcm75FoTnNjBqdHuhUNjMyQVpKRUpYNTM3MEE4V0cwODhFWkFZSS4u Kind regards, Armel S.
I am analyzing Stroop task data. I am plotting the ERP for both the Congruent and Incongruent conditions, and I need to plot the ERP averaged across all channels, but I don't know how to average the channels. I currently have the ERP plot with all channel traces (picture attached).
Could you kindly tell me how to average across all channels in EEGLAB?
Thanks & Regards
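For context, "averaging all channels" is just the mean over the channel dimension at each time point; in EEGLAB/MATLAB that corresponds to `mean(EEG.data, 1)` on a channels-by-timepoints ERP. An equivalent sketch in Python/NumPy with made-up toy data (illustration only, not EEGLAB code):

```python
import numpy as np

def average_across_channels(erp):
    """Collapse a (n_channels, n_timepoints) ERP into a single mean trace."""
    return np.asarray(erp, dtype=float).mean(axis=0)  # one value per time point

# toy ERP: 3 channels x 4 time points
erp = np.array([[1.0, 2.0, 3.0, 4.0],
                [3.0, 2.0, 1.0, 0.0],
                [2.0, 2.0, 2.0, 2.0]])
print(average_across_channels(erp))  # -> [2. 2. 2. 2.]
```

Averaging a subset of channels (e.g., a posterior ROI) is the same operation applied to fewer rows.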
I have calculated ERPs from the changes in EEG in response to various stimuli. According to studies in the literature, ERP amplitudes are around 20-30 µV, but the amplitudes I calculated are lower. For example, for EEG data ranging from about -25 to 30 µV, the ERP amplitudes I calculated are around -10 to 10 µV. Does this indicate an error in my calculation or not? Is there a range within which ERP amplitudes should fall?
I have a cluster and I am running statistics on component ERPs. I have noticed that 2 out of 10 times when I calculate the statistics, I get one extra area of significance. Is that normal? Is it due to the random permutation, or shouldn't that happen?
Thank you in advance.
I want to compute the average, and then the difference, of ERP components using the pop_comperp() function in EEGLAB, but I have realized there are issues with the signs: the ERP data for some components are flipped, and therefore the average is not correct. Below you can see my output plot.
Do you know how I can solve this problem?
I am working on EEG-based detection of Parkinson's disease, and I need to apply my methods to data from people with Parkinson's disease recorded during ERP or ssVEP paradigms, together with a control group of healthy people subjected to the same stimulation under the same conditions.
I have been searching for a year without being able to find a database. So I am looking for a direct link to a laboratory or researcher who can provide me with this data.
As many of you are aware, Industry 4.0 and smart manufacturing are widely gaining traction among manufacturing practitioners.
One of the aims of smart manufacturing is to bring more real-time intelligence to the shop floor and to manufacturing planners by employing different shop-floor data sources (e.g., smart products, advanced equipment, and machine sensors, collated through the Industrial Internet of Things, RFID gateways, etc.). The data collected from these sources can be processed by artificial-intelligence-based algorithms (machine learning, evolutionary algorithms, etc.) to produce optimized production schedules, process plans, service schedules, and maintenance plans. The application of such techniques has been studied by several researchers, and numerous publications are already available.
However, at the same time, central to the functioning of any manufacturing industry are Enterprise Resource Planning (ERP) packages, encompassing all the functions of a manufacturing business, ranging from procurement to production planning and control to service management, and even auxiliary support functions such as accounting.
As a result, there are two different software systems that can benefit manufacturing firms:
1. AI-based smart manufacturing tools, which seem promising for improving production efficiency: such packages are more manufacturing-operation centric.
2. Tried-and-tested ERP packages, developed by numerous software firms: such packages can cover both business planning and manufacturing operations management.
The questions that I often wonder about in this regard are:
- Have AI-algorithm-based smart manufacturing tools been integrated into existing ERP software?
- What is the level of maturity of ERP packages with respect to AI-based intelligence algorithms (particularly regarding integration and hosting of AI-based work packages, API access to databases, and software architecture)?
Many ERP packages support multiple scheduling rules for a host of production scenarios, such as flow manufacturing, make-to-stock, make-to-order, job shop, etc.
- But do they actually employ AI solvers such as genetic algorithms, search-based algorithms, neural network models, or reinforcement-learning-based algorithms?
From my initial assessment of the market, I understand that we are staring at a situation where these two classes of work packages are progressing in two non-intersecting planes, with two separate software packages being the only way out: one for AI-enabled manufacturing execution and the other for ERP.
In that case, I'd be keen to know the market feedback:
- Are today's industrial engineers willing to accept the need for two separate software packages for manufacturing scheduling and control, considering that liaising with software systems is not their core job?
I've found a few articles on intelligent ERP systems, but they are pretty generic and mainly focus on the need for ERP systems to integrate cloud- and mobile-based support and automated inspections; they do not discuss AI solvers much.
A search on Google Scholar does not reveal much either: many articles propose architectures for the ERP of the future, integration with smart agents, etc., without discussing existing maturity levels and capabilities in much depth.
Looking forward to your valuable answers!
Can anyone help me prepare a "User Requirement Specification" for production planning and scheduling in an ERP system?
The same-different task requires subjects to indicate whether a pair of stimuli, seen or heard, are the same (say AA or BB) or different (say AB or BA). Researchers often collect offline measures (e.g., response accuracy and latency) in this task.
Is there a way that I can collect online measures using eye-tracking, ERP or some other experimental techniques in psychology? In other words, instead of people reporting whether the pair of stimuli are different, I hope to infer their knowledge based on their fixations and brain potentials. Please recommend papers that I can read (if any). Thank you!
One main reason for the failure of most ERP implementations is the misfit between business requirements and the ERP system's business processes. How can this misfit be reduced?
I need to perform reliability analysis on my ERP data. Specifically, I would like to estimate internal consistency reliability through Spearman-Brown corrected split-half reliability. Could anybody help me with this? Do I need to use all the trials for each participant?
I'm not sure how to start the analysis: with single trials or with averages.
I hope to get some answer here.
Thanks in advance.
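One common recipe, sketched here in Python/NumPy purely for illustration (the random split and the per-trial score are assumptions; the same logic can be run in MATLAB): split each participant's trials into two halves, average each half, correlate the half-scores across participants, and apply the Spearman-Brown formula r_SB = 2r / (1 + r). Using all artifact-free trials per participant is typical, and repeating the random split many times and averaging the estimates makes them more stable.

```python
import numpy as np

def split_half_reliability(trials_per_subject, rng=None):
    """Spearman-Brown corrected split-half reliability of a per-trial ERP score.
    trials_per_subject: one 1-D array of single-trial scores per participant."""
    rng = np.random.default_rng(rng)
    half1, half2 = [], []
    for trials in trials_per_subject:
        trials = np.asarray(trials, float)
        order = rng.permutation(len(trials))      # random split (odd/even also common)
        half1.append(trials[order[0::2]].mean())  # mean score of one half
        half2.append(trials[order[1::2]].mean())  # mean score of the other half
    r = np.corrcoef(half1, half2)[0, 1]           # correlate halves across subjects
    return 2 * r / (1 + r)                        # Spearman-Brown correction
```

With simulated data whose true subject means differ strongly and whose trial noise is small, the corrected estimate approaches 1.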
I recently tried to optimize my scripts for preprocessing EEG data using EEGLAB 2019. I found that including the EOG channels in ICA significantly improved the subsequent ERP results, i.e., the classic LPP appeared. However, if I included the EOG channels in ICA, the subsequent average re-referencing had to include them as well: if I performed average re-referencing on the brain channels only, eeg_checkset would report an error (ICA index exceeds matrix dimensions). If I excluded the EOG channels from ICA, average re-referencing on only the brain channels worked well, but it generated a bad ERP wave.
The figure attached is the ERP waves based on four-subject preprocessed data. The only difference between them is whether the EOG channels were included in ICA. ICALabel was used to automatically remove the EOG components.
I'm conducting a market audit regarding ERP solutions. I'm looking for quality research work related to PESTEL analysis. I'm glad if you can recommend me some high-quality research work. Thank you in advance.
We have a setup of iMotions and Emotiv EPOC that a previous faculty member was using for attention and emotion recognition purposes. As I understand it, iMotions receives the Emotiv data and timestamps it. However, I want to conduct research on ERPs and BCI (e.g., the P300), where synchronisation is critical. Is it possible to accomplish this using iMotions and the Emotiv EPOC+? I would greatly appreciate your help.
I am writing a project proposal for my PhD. The main theme is around Event Related Potentials (ERPs) in Depressed patients. I need expert help regarding this.
Do you know papers that discuss this subject?
We are looking for objective methods to decide about inclusion/exclusion of data sets.
We are doing a study that looks at the detection rate of an effect (N400) on a single subject basis.
We have some very noisy data sets and are wondering where to draw the line for inclusion.
What criteria/methods do you suggest?
- Sufficient number of trials left per condition after preprocessing
- Sufficient SNR
We are looking forward to your replies!
I recorded tibial nerve stimulation with EEG in two subjects. One subject shows reasonable ERPs as described in the literature, with a positive peak at ~39 ms; the other subject shows the ERPs, but inverted (the same processing steps were applied to both datasets). Does this make sense physiologically?
I'm writing a protocol (using the PRISMA checklist) for a systematic review on evoked responses (ERPs and ERFs) in typically developing children (aged 0-17).
Can anyone recommend an appropriate risk of bias assessment to use?
The ones I've come across (e.g., Newcastle-Ottawa, Cochrane) don't seem appropriate for the study designs I'm interested in (single group, NOT intervention, etc.).
Technology plays an important role in the disruptive era. How can it be used to support accounting processes, for example through ERP or inventory counting? Moreover, if it can, how can that support be measured?
I have a question about my master's thesis. I am writing a thesis on success and risk factors in offshore software development within ERP projects. Different countries have had numerous successful ERP implementations, but my company has not had success. I am conducting interviews with IT experts in different countries who have been collaborating with the offshore software development centers.
Is it logical that I am doing a case study about the offshore software center, rather than about different ERP projects, given that I am exploring experiences and success and risk factors concerning the overall experience with the offshore centers, and not just a single project?
I am looking for an ERP platform (with case study/tutorial materials and data sets) that could replace the SAP ERP GBI software, to help teach my students the practical elements of Enterprise Systems. Does anyone have any suggestions?
Hi, has anyone had good experience with dry EEG electrodes, e.g., from ANT, for ERP/EEG recordings?
I have read that impedance is much higher compared to wet electrodes.
Thanks - Johannes
I have a query about ERPs and neural oscillations: is there an interaction between them? I am researching motor cognition, and I know the mu rhythm is involved in motor cognition. I am wondering whether the mu rhythm influences motor ERPs. Thanks in advance.
What areas of the brain can be measured using EEG? What functions are appropriate and not appropriate to study with this method? Can the EEG record accurately activity in deeper brain structures and cortical areas along the midline (MPFC, Precuneus, ACC, subcortical structures)? What are the main advantages and disadvantages of this method?
I'm new to the field of EEG/ERP studies.
For statistical analysis of the different components, e.g., the N100, I computed averages for all the bins (experimental conditions) for every individual using EEGLAB.
Then grand averaging was performed across all the ERP sets.
But I'm getting different values from grand averaging than from computing the mean across ERP sets when plotting the ERP waveforms: for the N100, the mean values differ in the waveform plot.
I want to compare ERP data from a single group of participants between 2 measurement times. I also want to compare the data from this group of participants with a group of control participants. I have used the jackknifing method to get cleaner averages. Sample size is 9 in both groups. I know that usually, t-tests are performed on ERP data even when the sample size is small. But I have been told by some people evaluating my last seminar that it was absolutely not appropriate for me to share (in my thesis) the results of parametric tests considering my sample sizes. I don't necessarily agree with them but I don't have much choice. So my questions are :
1) Is it possible to perform non-parametric statistical analyses on jackknifed data? These would mainly be the non-parametric equivalents of t-tests (Wilcoxon and Mann-Whitney).
2) If it is possible, given that t values have to be adjusted before looking up the p values, how can I adjust the values I obtain from the non-parametric tests?
3) If it is not possible, can you detail the reasons why it should not be done?
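For reference on point 2: I'm not aware of an established adjustment for rank-based tests on jackknifed data, but the standard parametric correction (Miller, Patterson & Ulrich, 1998; Ulrich & Miller, 2001) divides the t statistic computed on leave-one-out scores by (n - 1). A minimal sketch of the paired case (Python/NumPy for illustration; the input scores are assumed to already be measures, e.g., onset latencies, taken from each leave-one-out grand average):

```python
import numpy as np

def corrected_paired_t(jack_a, jack_b):
    """Paired t over leave-one-out (jackknife) scores for two conditions,
    with the Ulrich & Miller (2001) correction t_c = t / (n - 1).
    jack_a[i], jack_b[i]: measure taken from the grand average computed
    with subject i left out."""
    d = np.asarray(jack_a, float) - np.asarray(jack_b, float)
    n = len(d)
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))  # ordinary paired t
    return t / (n - 1)                            # jackknife correction
```

The correction compensates for the artificially low variance of leave-one-out scores; whether an analogous rescaling is valid for Wilcoxon statistics is exactly the open part of your question.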
I'm using AZ 4620 as my photoresist (PR).
Usually, it is cured at 110 °C for 80 seconds.
I know that, but I have to heat it at 300 °C for 25 hours.
I can't remove the PR with acetone or EPR.
I couldn't be more grateful if you could tell me how to remove my over-cured PR.
Please help; I'm very eager to run this experiment.
Funkhouser et al. (2019) report that the RewP is also referred to as the feedback negativity (FN) or feedback-related negativity (FRN). However, Berry et al. (2019) report that the RewP and FN are ERP components elicited by feedback indicating rewards versus non-rewards, respectively. I wonder whether the FN and RewP are the same ERP component or two different ERP components?
Berry, M. P., Tanovic, E., Joormann, J., & Sanislow, C. A. (2019). Relation of depression symptoms to sustained reward and loss sensitivity. Psychophysiology, 56(7). https://doi.org/10.1111/psyp.13364
Funkhouser, C. J., Auerbach, R. P., Kujawa, A., Morelli, S. A., Phan, K. L., & Shankman, S. A. (2019). Social Feedback Valence Differentially Modulates the Reward Positivity, P300, and Late Positive Potential. Journal of Psychophysiology. https://doi.org/10.1027/0269-8803/a000253
We are master's students in International Management at Ca' Foscari University of Venice. We are working on a project about ERP in the fashion industry, and we would be glad if you could contribute to our research project. Thank you from Allegra, Carlotta, Giorgia, Ilaria, Luca and Stefano.
I have a hard time finding any literature on the Pc (positive correct) ERP, which is sometimes seen for correct response-locked trials. Much more is known about Pe (positive error, or error positivity) for incorrect responses. Any suggestions are welcome!
I have collected 2 channels of EEG data with subjects looking at a visual stimulus. I have looked at a number of websites covering the EEG processing steps after signal acquisition, but I have not found a detailed explanation. I want to know:
1. What kind of output file do we obtain after collecting EEG data? Is it just the signal voltage values? Does the output depend on the device used? For example, in my experiment, the output is only the voltage values of the 2 channels.
2. I am using ERPLAB and trying to analyse the signals as per the tutorial. There is some information regarding event codes. In my experiment, the visual stimulus is simply an object moving from left to right on the screen. How do I add event codes for this?
I am trying to estimate the measurement time window (TW) for a spatio-temporal ERP. Briefly, I have two series of time windows for 20 subjects: the first series is the estimated TWs and the second is the actual TWs. I want to know whether these two series of TWs are similar or not (my target is a non-significant difference).
Currently, the estimated TWs are close to the actual TWs (for example, at the start point, the standard deviation of the error is SD = 2 ms, the maximum pair difference is 4 ms, and the minimum pair difference is 0 ms). However, a repeated-measures ANOVA, t-tests, and nonparametric tests such as the U-test all indicate a significant difference between the two series. Does anyone know other tests to show whether those two series of results are similar or not?
Sampling rate = 428 Hz, epoch (-100 to 600 ms)
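One caveat worth noting: a conventional significance test cannot demonstrate similarity, because failing to reject the null is not evidence of equivalence. The usual tool for this is an equivalence test, such as the two one-sided tests (TOST) procedure, where you pre-specify a bound (e.g., ±5 ms) within which the two series count as equivalent. A minimal paired-TOST sketch (Python, assuming NumPy and SciPy are available; the delta bound is your own assumption to justify):

```python
import numpy as np
from scipy import stats

def paired_tost(x, y, delta):
    """Two one-sided tests (TOST) for equivalence of paired samples within +/- delta.
    Returns the larger one-sided p-value; p < .05 supports equivalence."""
    d = np.asarray(x, float) - np.asarray(y, float)
    n = len(d)
    se = d.std(ddof=1) / np.sqrt(n)
    p_lower = stats.t.sf((d.mean() + delta) / se, n - 1)   # H0: mean diff <= -delta
    p_upper = stats.t.cdf((d.mean() - delta) / se, n - 1)  # H0: mean diff >= +delta
    return max(p_lower, p_upper)
```

With tight data like yours (SD of the error around 2 ms), a generous bound yields a small p (equivalence supported), while an unrealistically tight bound does not.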
I am searching for a toolbox that can analyze changes in different physiological states using simultaneously recorded signals with trigger markers including EEG, Heart rate variations, galvanic skin response, and respiration rate.
My question is about the polarity of an ERP component:
Should the peak of an ERP have a positive amplitude and the trough a negative value? Say, would it be acceptable for the N100 amplitude to equal 5 µV and the P200 to equal -2 µV?
In the ERP data I am currently analyzing, I detect a clear peak and trough in the waveform, but about half of the subjects show a negative amplitude at the peak or a positive amplitude at the trough.
Your opinion is highly appreciated.
Can somebody teach me how to read peaks in an ERP, in order to obtain peak amplitude and latency measurements in ERPLAB?
I am unable to understand why, in this figure, no positive peak (P100) was found in the latency range of 75-150 ms.
I would be thankful if you could help me learn to read and identify peaks in the prescribed range.
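Conceptually, "reading a peak" in a window just means finding the extreme value and its latency within that window; if the waveform never goes positive between 75 and 150 ms, no P100 peak can be found there (component overlap, reference choice, or polarity inversions can cause this). An illustrative sketch outside ERPLAB (Python/NumPy, with a made-up toy waveform):

```python
import numpy as np

def peak_in_window(erp, times, t_min, t_max, polarity="positive"):
    """Return (amplitude, latency) of the most extreme point within [t_min, t_max] ms.
    A 'positive peak' is simply the maximum in the window; if the waveform never
    goes positive there, no P100-like deflection exists in that range."""
    erp = np.asarray(erp, float)
    times = np.asarray(times, float)
    mask = (times >= t_min) & (times <= t_max)
    window, win_times = erp[mask], times[mask]
    idx = window.argmax() if polarity == "positive" else window.argmin()
    return window[idx], win_times[idx]

# toy waveform peaking at 100 ms (1 ms resolution, 0-199 ms)
times = np.arange(0.0, 200.0, 1.0)
erp = 5.0 - (times - 100.0) ** 2 / 100.0
print(peak_in_window(erp, times, 75, 150))  # amplitude 5.0 at 100 ms
```

ERPLAB's Measurement Tool implements the same idea (plus options such as local-peak detection, which requires the point to exceed its neighbours).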
I am using a 24-channel system with 19 active electrodes to record brain activity during a flanker task. I run ICA to remove ocular artifacts. Although ICA can identify 1 or 2 eye-blink components, the time course and ERP image show there is still task-related brain signal in these eye-blink components. My question: are there any tips that can help keep the real brain signal?
This chapter tells us how ERP software works in relation to the basic business processes and functions, how it was derived from traditional functions and processes that consumed a lot of time and were performed manually, and how it has changed from being a risky/ineffective mode of running a business to what it is today.
Material master is considered the core functionality for any ERP system used in distribution or manufacturing type functions. The integration of all material data in a single materials database eliminates the problem of data redundancy. This permits the data to be used by various departments such as: => Accounting => Materials Planning and Control => Purchasing
The evolution of ERP can be traced back to the 1960s, when most of the software included inventory control features. Later, in the 1970s, MRP (Material Requirements Planning) was added, and further features such as Sales Planning, Cost Order Processing and Rough-Cut Capacity Planning (together known as Closed-Loop MRP) were included. To further improve the software, the departments started to integrate, and with the new financial accounting systems of the 1980s, MRP II was introduced. The new software could forecast the requirements for material and capacity planning and convert the information into financial requirements. It was in the 1990s that all the different units of an organization, such as supply chain, accounting, finance, and human resources, were integrated into one software system, and the era of the modern ERP system began.
I have a dataset of an EEG recording referring to a task characterized by several "rest periods", during which the recording is not interrupted. For this reason, I was thinking that removing all these periods by epoching the recording and time-locking it to my ERPs of interest would facilitate the ICA by isolating only the artifacts occurring during my time-window of interest. However, I was warned to be careful with this operation since it can cause artifacts occurring along different epochs to overlap with each other. Any suggestion about the feasibility of this approach would be greatly appreciated. Thanks
I have processed EEG data with EEGLAB and ERP data with ERPLAB. Now I would like to estimate the source of the following ERPs: P1, N170/VPP, EPN, and LPP. After a lot of reading I've narrowed it down to source estimation techniques available in Cartool and Brainstorm. But I can't make up my mind. Which software do you recommend for ERP source estimation of these two? Where can I find a good tutorial? Brainstorm provides a nice tutorial, but couldn't find one for Cartool.
Thanks in advance!
Hi all. I need some advice on creating an eventlist in ERPLAB.
My study comprised 5 different types of blocks: 1. audio, 2. visual, 3. audiovisual-ordinary, 4. audiovisual-short, 5. audiovisual-long. Each block contained a standard stim with a fixed duration and a deviant stim with six different durations (e.g., 10% less than standard, 20% less than standard, ...). The presentation of the blocks was randomised.
The triggers were named 1 to 5 for the standard stims and 21 to 26 for the deviants. Also, in order to differentiate among the blocks, markers (named 31-60) were included. So, for example:
- block 1 starts with marker 31 (which corresponds to deviant 21 for an audio block), followed by the actual triggers, e.g., 1; 1; 1; 21; 1; 1; 21...
- block 2: marker 37 (corresponding to deviant 21 for a visual block), followed by the triggers: 2; 21; 2; 2; 21; 2; 21...
So, I don't know how to create an eventlist that can identify/recode the deviant stims according to the block they belong to. If I create a simple eventlist, I'm unable to compare specific ERPs (e.g., differences between the standard and deviant 21 (duration 10% less than the standard) for the auditory block).
thanks in advance
My experiment involves showing visual stimuli to people while recording EEG. There are 3 types of stimulus: one is facial recognition, and the other two are simple ones showing an object moving around. I want to extract the different ERP components from this experiment. I have a BioRadio EEG device, which records the voltage values.
I understand that obtaining ERPs requires averaging 30-50 trials. How is this averaging done? How do I extract the ERP components from the averaged data? Any help would be very welcome.
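The averaging itself is simple: cut a fixed window around each stimulus onset, baseline-correct each epoch, and take the pointwise mean, so that activity not time-locked to the stimulus cancels out. A minimal single-channel sketch (Python/NumPy for illustration; the sampling rate, window, and event list are made-up values):

```python
import numpy as np

def erp_from_trials(continuous, event_samples, sfreq, tmin=-0.1, tmax=0.5):
    """Epoch a continuous single-channel recording around event onsets
    and average the epochs into an ERP."""
    pre = int(round(-tmin * sfreq))   # baseline samples before onset
    post = int(round(tmax * sfreq))   # samples after onset
    epochs = np.vstack([continuous[s - pre:s + post]
                        for s in event_samples
                        if s - pre >= 0 and s + post <= len(continuous)])
    # subtract each epoch's pre-stimulus baseline mean
    epochs = epochs - epochs[:, :pre].mean(axis=1, keepdims=True)
    return epochs.mean(axis=0)        # average across trials -> ERP

# demo: a deterministic 'evoked response' 100 ms after each onset at 100 Hz
cont = np.zeros(1000)
events = [100, 300, 500]
for s in events:
    cont[s + 10] += 1.0
erp = erp_from_trials(cont, events, sfreq=100.0)
# erp[20] (i.e., +100 ms relative to onset) recovers the 1.0 deflection
```

Toolboxes such as EEGLAB/ERPLAB or MNE-Python do exactly this (plus filtering and artifact rejection) once your events are marked.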
Hi everyone! I need to plot ERPs from an EEGLAB study with the mean error displayed as a shade around the waveforms. I'd like to ask if anybody knows the specific syntax for doing this, or if there is an eeglab version with this function built in the GUI.
Thank you so much!
We have Enterprise Resource Planning at our university. The ERP is composed of several modules automating different routines, ranging from administrative to academic.
Business processes and technology should align harmoniously with each other; businesses and companies will then benefit to the maximum extent. Please comment.
I have a study on how auditory evoked potentials can discriminate between healthy older adults and those with mild cognitive impairment. I have run a repeated-measures ANOVA and performed ROC analyses on the variables that differed significantly between the groups.
I would like to perform cross-validation of my model to obtain an estimate of the prediction error. I have been told that k-fold cross-validation could be a good method. However, I don't know how to proceed with this analysis, nor whether it is possible to do this with an ANOVA and ROC curves, or whether I need a different type of analysis, such as regression or discriminant analysis. Moreover, I don't know what kind of software I can use (SPSS, MATLAB, ...).
I hope somebody could help me with this. I would really appreciate any input here.
Thank you in advance.
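As a sketch of how k-fold cross-validation can be wired up by hand (Python/NumPy purely for illustration; the per-fold "training" step below merely learns the separating direction of a single marker, and you would substitute a real classifier such as logistic regression or LDA, which is where a regression/discriminant analysis would replace the ANOVA):

```python
import numpy as np

def rank_auc(scores, labels):
    """Rank-based ROC AUC: probability that a positive outscores a negative."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    if len(pos) == 0 or len(neg) == 0:
        return np.nan                       # fold lacks one class
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def kfold_auc(x, y, k=5, seed=0):
    """k-fold cross-validated AUC for a univariate marker x with labels y."""
    x, y = np.asarray(x, float), np.asarray(y)
    idx = np.random.default_rng(seed).permutation(len(x))
    folds = np.array_split(idx, k)
    aucs = []
    for i, test in enumerate(folds):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        # 'training': learn which direction of the marker separates the groups
        sign = 1.0 if x[train][y[train] == 1].mean() >= x[train][y[train] == 0].mean() else -1.0
        aucs.append(rank_auc(sign * x[test], y[test]))
    return aucs
```

The mean and spread of the per-fold AUCs estimate out-of-sample discrimination. SPSS has limited built-in support for this; MATLAB (cvpartition) or Python (scikit-learn) make it straightforward.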
We use a 128 electrode array, but I would also be curious about 64. I am less curious about lower density arrays since bad electrodes should be relatively rare. I am also guessing the type of analyses being run are significant factors (ERP, Time-Frequency, etc.). Thank you!
According to the Wikipedia article on ERPs, "To see the brain's response to a stimulus the experimenter must conduct many trials and average the results together, causing random brain activity to be averaged out and the relevant waveform to remain, called the ERP". Does this mean we need to present the visual stimulus to the subjects a number of times, create epochs from the individual trials, and then average?
I am a beginner in EEG research. I have ERP data (obtained using visual stimuli and EEG electrodes). I want to evaluate these features but do not know where to begin. Is there an algorithm or software that would help extract the different ERP components?
I would really appreciate some good suggestions. Thank you.
My experiment includes two groups and two conditions, and I would like to compare between groups and conditions. To do so, I created a study design, uploaded the data sets, and was able to compare groups and conditions at the amplitude level. My question is: can EEGLAB calculate significant differences in ERP latency?
Are cognitive control and cognitive effort the same thing?
I am confused. In the neuroscientific literature on second language learning, the neural organization of cognitive control and cognitive effort seems to overlap (e.g., the anterior cingulate, dorsolateral prefrontal cortex, inferior parietal regions). At the same time, I have come across some results that make me think that cognitive control and cognitive effort are not the same thing.
For instance, I have found fMRI studies that reported increased "cognitive effort" (and therefore more widespread fMRI activation) in less proficient speakers of a second language (e.g., Abutalebi et al. 2018). At the same time, several ERP studies have suggested more "cognitive/language control" in highly proficient second language users. For example, Fernandez et al. (2013) have shown that higher second language proficiency was associated with a greater mean N2 amplitude (greater inhibition) on an executive function test. Another example: Rossi et al (2018) concluded that individuals with high second language proficiency require more cognitive/language control for their first language, even before they speak their second language.
I will greatly appreciate your help.
With my best wishes,
I have a question regarding synchronization of artifact rejection in eeglab. I use the version: "eeglab 14_1_1b" (MATLAB plugin)
I work with epoched data. To find artifacts like blinks and eye movements/saccades, I have used the built-in ICA function and followed the recommended guidelines from Chaumon et al. (2015) to pick out components. Furthermore, I have used a simple voltage threshold to sort out any voltage below -75 µV or above 75 µV.
My issue arises when I want to synchronize the artifact info in EEG and EVENTLIST. This is a required step for computing the average ERPs. EEGLAB won't let me do it, and shows this exact message when I try to synchronize:
"It looks like you have deleted some epochs from your dataset. At the current version of ERPLAB, artifact info synchronization cannot be performed in this case. So, artifact info synchronization will be skipped, and the corresponding averaged ERP will rely on the information at EEG.event only. Do you want to continue anyway (yes, no)."
I need to find a solution so that every rejected trial and removed component is excluded from the dataset when averaging the ERPs.
I have not been able to find any version of EEGLAB that seems to solve the problem.
Hi everyone! I'm preparing an ERP study of written sentence processing in school aged children. My target population are children between 8 and 11 years old, and I'm aiming to examine N400 and P600 effects after semantic and syntax violations. Sentences will be displayed on the center of a computer screen, word by word (rapid serial visual presentation). I wanted to ask your opinion about what would be the optimal SOA (stimulus onset asynchrony) to maximize the probability of actually seeing the language-related ERP effects, as it has been shown that presentation rate may affect the magnitude of these potentials, at least in certain populations (like L2 and older adults).
Thanks for your kind attention!
It seems my data suffer from alpha distortion. It is possible to reduce it in some datasets with the help of ICA, but for half of the participants ICA does not work that well.
So for now I have tried rejecting more components than usual (15-18 out of 64 ICs) and running ICA again to see if it can catch the alpha, but it doesn't work.
In a nutshell, I wonder what the best ways are to reduce the alpha distortion in the data.
Note: I have also heard that PARAFAC seems to work better than ICA for alpha reduction, but I cannot find a way to apply PARAFAC from the EEGLAB GUI.
- What is the typical maximum radiated power each type of antenna can handle?
- Is it linear with the input power of the antenna?
Effective Radiated Power (ERP)
ERP (dBm) = Power of transmitter (dBm) – loss in transmission line (dB) + antenna gain in dBd
For example, I found a claim on the internet that:
- With a ceramic substrate, good heat-resistant bonding of the copper layer, low loss, and high impedance, a microstrip-type antenna should survive well over 70 W.
So, I need a list of antenna types and the typical maximum radiated power each can handle.
Thanks for any comments or advice.
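The link-budget formula quoted above can be checked numerically. A small sketch (Python; the 50 W transmitter, 3 dB line loss, and 6 dBd gain are made-up illustration values, unrelated to any particular antenna):

```python
def erp_dbm(tx_power_dbm, line_loss_db, antenna_gain_dbd):
    """ERP (dBm) = TX power (dBm) - feedline loss (dB) + antenna gain (dBd)."""
    return tx_power_dbm - line_loss_db + antenna_gain_dbd

def dbm_to_watts(dbm):
    """Convert a dBm level back to watts."""
    return 10 ** (dbm / 10) / 1000.0

# ~50 W transmitter (47 dBm), 3 dB of line loss, 6 dBd antenna:
print(erp_dbm(47.0, 3.0, 6.0))  # 50.0 dBm, i.e., 100 W ERP
```

Note that ERP describes the radiated field relative to a dipole; the power-handling limit of the antenna hardware (substrate, connectors, feedline) is a separate thermal/breakdown question that the formula does not capture.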
I wonder how many ICA components you generally eliminate, approximately? Do you think eliminating 25 out of 64 components sounds reasonable?
We know that:
- Tools and objects are represented both in visual areas and in sensorimotor areas
- Tools are identified more easily when we observe them being used in a canonical way
- They can extend our functional space when used (see the article "Tools for the Body (schema)")
We also know that:
- Objects or action recipients are identified more easily when visually presented alongside their tools in a canonical way
- Graspable objects elicit faster and enhanced components than non-graspable objects or graspable objects presented outside of reach (TMS + ERPs)
However, I cannot find results suggesting that whilst holding and using a tool (e.g., a screwdriver) the action recipient (the object on which the tool is meant to be used; e.g., a screw) will be identified more easily than another object on which the tool isn't of any use (e.g., a pencil). Furthermore, I wonder whether visual features of the action recipient (e.g., its color) could be identified faster whilst holding and using the tool as compared to if that feature was on another object.
I want to calculate the difference between sources in two conditions, but which procedure is correct?
1- Calculating the difference of sources for each subject and then grand averaging, or
2- Calculating the sources for each condition from the grand-average ERPs and then subtracting one from the other.
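One relevant observation: as long as every step is strictly linear, the two orders give identical results, because averaging and subtraction commute. A quick NumPy check of the linear case (illustrative random data standing in for per-subject source time courses):

```python
import numpy as np

rng = np.random.default_rng(0)
src_a = rng.normal(size=(10, 500))  # per-subject source time courses, condition A
src_b = rng.normal(size=(10, 500))  # condition B

diff_then_average = (src_a - src_b).mean(axis=0)             # procedure 1
average_then_diff = src_a.mean(axis=0) - src_b.mean(axis=0)  # procedure 2, linear case

print(np.allclose(diff_then_average, average_then_diff))  # True: linear ops commute
```

In practice, however, source estimation often involves nonlinear steps (noise-dependent regularization, taking norms of dipole moments), so the two procedures can diverge; computing sources per subject and per condition first, then averaging, keeps subject-level variability available for statistics.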
After we acquire the EEG signal, how can we ensure that the data comply with relevant quality standards and are reliable? Similarly, when we process a signal, how can we ensure that we have done it correctly and completely? And, on the other hand, have we not removed intrinsic properties of the data?
Also, after recording EEG/fMRI simultaneously, how is QC performed?