Neuroinformatics - Science topic
Explore the latest questions and answers in Neuroinformatics, and find Neuroinformatics experts.
Questions related to Neuroinformatics
IEEE 2024 4th International Conference on Artificial Intelligence, Robotics, and Communication (ICAIRC 2024) will be held in Xiamen, China on December 27-29, 2024.
Conference Website: https://ais.cn/u/JBJFnm
ICAIRC 2024 aims to be the premier global forum for presenting, discussing, and promoting cutting-edge advancements in intelligent robot control systems and wireless communication. With a focus on the integration of artificial intelligence, natural computing, and evolutionary-inspired computing within wireless control systems for telerobotics, the conference seeks to foster international collaboration among industry experts, researchers, and academics. Attendees will have the opportunity to engage with groundbreaking research, participate in in-depth discussions, and utilize extensive networking opportunities, all designed to drive innovation and academic excellence in these dynamic and rapidly evolving fields. The event will feature keynote addresses from eminent industry leaders, interactive sessions, and workshops that encourage forward-thinking and collaborative breakthroughs.
---Call for papers---
The topics of interest for submission include, but are not limited to:
◕ Artificial Intelligence
· Artificial Intelligence Applications & Technologies
· Artificial Neural Networks
· Artificial Intelligence tools & Applications
· Bayesian Networks
· Neuroinformatics
......
◕ Robotics Science and Engineering
· Robot control
· Mobile robotics
· Intelligent eldercare robots
· Mobile sensor networks
· Perception systems
......
◕ Communication
· Optical Communications
· Wireless Communications and Technologies
· High-Speed Networks
· Communication Software
· Ultra-Wideband Communications
......
---Publication---
All accepted papers, both invited and contributed, will be published and submitted for inclusion in IEEE Xplore, subject to meeting IEEE Xplore's scope and quality requirements, and will also be submitted to EI Compendex and Scopus for indexing. Each conference proceedings paper must be at least 4 pages long.
---Important Dates---
Full Paper Submission Date: November 30, 2024
Registration Deadline: December 7, 2024
Final Paper Submission Date: December 14, 2024
Conference Dates: December 27-29, 2024
--- Paper Submission---
Please send the full paper (Word + PDF) to the submission system:
Call for papers: The 4th International Conference on Artificial Intelligence, Robotics, and Communication (ICAIRC 2024)
Call for papers: IEEE 2024 4th International Conference on Artificial Intelligence, Robotics, and Communication (ICAIRC 2024) will be held in Xiamen on December 27-29, 2024.
Conference website (English): https://ais.cn/u/3aMje2
Important Information
Conference website (submission link): https://ais.cn/u/3aMje2
Conference dates: December 27-29, 2024
Venue: Xiamen, China
Indexing: IEEE Xplore, EI Compendex, Scopus
Conference Details
The 4th International Conference on Artificial Intelligence, Robotics, and Communication (ICAIRC 2024) will be held in Xiamen, China on December 27-29, 2024. The conference aims to provide a platform for experts, scholars, engineers, and R&D practitioners working on artificial intelligence, robotics, and communication to share research results and cutting-edge technologies, follow academic trends, broaden research ideas, deepen academic exchange, and promote industrial collaboration on academic achievements. Experts and scholars from universities and research institutes, industry professionals, and other interested participants from home and abroad are cordially invited to attend.
Topics (including but not limited to)
1. Artificial Intelligence
Artificial intelligence applications & technologies
Artificial neural networks
Artificial intelligence tools & applications
Bayesian networks
Neuroinformatics
Robotics
Data mining
......
2. Robotics Science and Engineering
Robot control
Mobile robotics
Intelligent eldercare robots
Mobile sensor networks
Perception systems
Micro-robots and micro-manipulation
Visual servoing
Search, rescue, and field robotics
Robot sensing and data fusion
......
3. Communication
Optical communications
Wireless communications and technologies
High-speed networks
Communication software
Ultra-wideband communications
Multimedia communications
Cryptography and network security
Green communications
Mobile communications
Proceedings Publication
Every submission to ICAIRC 2024 is reviewed by 2-3 organizing-committee experts. After rigorous review, all accepted papers will be published by IEEE (ISBN: 979-8-3315-3122-5) and included in the IEEE Xplore database; after publication they will be submitted to EI Compendex and Scopus for indexing.
How to Attend
-- Each accepted, registered paper entitles one author to attend free of charge --
(1) Oral presentation: a 10-15 minute presentation in English with slides.
*Open to all submitting authors and self-funded attendees. Give a 10-15 minute English talk on your paper or the research behind it; prepare your own slides (no template required) and submit them before the conference as instructed by the conference email. Contact the conference secretary for details.
(2) Poster presentation: a self-made electronic poster displayed at the conference.
*Open to all submitting authors and self-funded attendees. Format: English, A1 size, portrait orientation, self-made. After preparing the poster, send the image to the conference mailbox icairc@163.com with the subject and file name formatted as: Poster + name + order number.
(3) Attendance only: for non-authors attending as audience.
*Open to self-funded attendees only; contact the conference secretary for group discounts (groups of 3 or more).
(4) Registration: https://ais.cn/u/3aMje2
I am working on a multi-site study and have a few scripts to convert the DICOMs to NIfTIs (in BIDS format). However, one of our sites uses a GE scanner, and its DICOMs are not aliased correctly, so the NIfTI files are not being output correctly for the EPI and DWI sequences. Therefore, I have to use the P-files from the GE scanner, but I do not know of a way to convert them to DICOMs. Does anyone know of software, or have a script, to convert P-files to DICOMs?
This is important so I can harmonize this site with the others and run everything through the same pipelines. I need to convert these P-files to DICOMs, not NIfTIs.
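I don't have a complete converter to share, but GE's Orchestra SDK is the usual route for reading P-file data, and the open-source pfile-tools project documents the header layout. As a starting point, here is a minimal stdlib-only sketch for checking which header revision a P-file carries (the assumption that the first four bytes are a little-endian float32 revision number follows the pfile-tools header maps; verify it against your scanner's documentation before relying on it):

```python
import struct

def read_pfile_revision(path):
    """Read the RDBM header revision from a GE P-file.

    Assumption (hedged): the first 4 bytes hold a little-endian
    float32 revision number, as in the open-source pfile-tools
    header maps. The revision determines all later field offsets,
    so this is the first thing any P-file reader must check.
    """
    with open(path, "rb") as f:
        (rev,) = struct.unpack("<f", f.read(4))
    return rev
```

Once the header fields (matrix size, orientation, acquisition parameters) are recovered, a library such as pydicom can write them plus the reconstructed pixel data into DICOM datasets; the hard part is the recon itself, which is what Orchestra provides.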
I am interested in asking which neural properties from a given modality are most strongly correlated with a given set of behaviors. For example, do functional abnormalities predict depression severity over and beyond structural abnormalities (using a hypothesis-based or data-driven approach)? Further, how do the dynamics between modalities change throughout development and with respect to age of onset? To answer these questions, I am using T1-weighted structural scans, resting-state BOLD scans, diffusion-weighted imaging, and arterial spin labeling imaging.
I would appreciate any general tips, especially with regard to neuroinformatics. Some of my initial questions are as follows:
1) Would I have to use the same anatomical atlas or factorization (e.g., ICA/NMF) across all modalities, so that I can make direct comparisons across modalities?
2) Would I have to limit modeling all the neural properties of a given region/network one at a time, or could I run a multiple regression including the region/network that is most strongly related to depression for a given modality regardless of whether they physically overlap?
3) Are there multi-modal pipelines/software that I should use to process the raw MRI scans?
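On question 2, the "over and beyond" comparison is usually framed as hierarchical regression: fit a model with the structural predictor alone, then add the functional predictor and see how much variance it adds. A hedged sketch with entirely synthetic data (the assignment of columns to modalities is illustrative, not a recommendation about which regions to pick):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60  # hypothetical number of subjects
# Synthetic region-wise features: col 0 = structural (e.g. thickness),
# col 1 = functional (e.g. resting-state connectivity), cols 2-3 = DWI, ASL
X = rng.standard_normal((n, 4))
# Simulated severity depends on structural and functional features
severity = 0.5 * X[:, 0] + 1.2 * X[:, 1] + rng.standard_normal(n)

def r_squared(features, y):
    """Ordinary least squares R^2 with an intercept column."""
    Xd = np.column_stack([np.ones(len(y)), features])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_struct = r_squared(X[:, [0]], severity)     # structural predictor only
r2_both = r_squared(X[:, [0, 1]], severity)    # structural + functional
incremental = r2_both - r2_struct              # variance "over and beyond"
```

In practice the incremental R^2 would be tested formally (e.g. an F-test on nested models) and the features would come from the same parcellation in each modality so the comparison is region-matched.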
Dear All,
I want to perform feature selection for predicting which kinematic characteristics of tennis shots (we have 9 in total) best predict an outcome variable called quality of the shot (with three values: 1, 2, or 3; I think it is better not to go into details). I think it is a good idea to use ANOVA to select features (below I add some references that use this test to select variables for building a prediction model). But I have a question: my independent variables (kinematic variables 1, 2, ..., 9) are continuous and my dependent variable (quality of the shot) is ordinal. Is it correct to select the best predictive variables by comparing the kinematic variables across the three shot-quality levels (ANOVA comparing kinematic variables 1, 2, ..., 9 between shots of quality 1, 2, and 3)? Or is it better to categorize the independent variables (for example, hand speed could be categorized as slow, medium, and fast using terciles) and then compare the dependent variable (quality of the shot) across those levels of the independent variable (slow, medium, and fast hand movement) using, for example, Kruskal-Wallis?
Here is another fictitious example to make my question more understandable. Imagine we want to predict which of three anthropometric characteristics best predicts the level of a basketball player (beginner, competitive, or elite): height, hip height, or wingspan. Is it correct to compare these characteristics between the three groups and select the most predictive variable as the one with the highest F value? Or is it better to categorize the independent variables (height, hip height, and wingspan), compare the dependent variable across the categories of each with Kruskal-Wallis, and then select the smallest p values?
Thanks very much,
Gabriel
References:
Yamashita, Alexandre Yukio, et al. "The Residual Center of Mass: An Image Descriptor for the Diagnosis of Alzheimer Disease." Neuroinformatics (2018): 1-15.
Chen, Yi-Wei, and Chih-Jen Lin. "Combining SVMs with various feature selection strategies." Feature extraction. Springer, Berlin, Heidelberg, 2006. 315-324
Surendiran, B., and A. Vadivel. "Feature selection using stepwise ANOVA discriminant analysis for mammogram mass classification." International J. of Recent Trends in Engineering and Technology 3.2 (2010): 55-57.
Elssied, Nadir Omer Fadl, Othman Ibrahim, and Ahmed Hamza Osman. "Research Article A Novel Feature Selection Based on One-Way ANOVA F-Test for E-Mail Spam Classification." Research Journal of Applied Sciences, Engineering and Technology 7.3 (2014): 625-638.
Zaki, Wan Mimi Diyana Wan, et al. "Automated pterygium detection method of anterior segment photographed images." Computer methods and programs in biomedicine 154 (2018): 71-78.
Ali Khan, Sajid, et al. "Kruskal-Wallis-based computationally efficient feature selection for face recognition." The Scientific World Journal 2014 (2014).
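On the question above: running the test on the raw continuous kinematic variables grouped by shot quality is the more common choice, since tercile-binning the predictors discards information; Kruskal-Wallis on the same raw-by-group layout is the rank-based fallback when ANOVA's normality or equal-variance assumptions fail. Both options can be sketched side by side with scipy (the data here are synthetic illustrative numbers, not real tennis measurements):

```python
import numpy as np
from scipy.stats import f_oneway, kruskal

rng = np.random.default_rng(1)
# Synthetic hand-speed values grouped by shot quality 1/2/3
# (illustrative numbers only)
q1 = rng.normal(10.0, 2.0, 30)
q2 = rng.normal(12.0, 2.0, 30)
q3 = rng.normal(14.0, 2.0, 30)

# Option A: one-way ANOVA on the raw continuous variable by group
F, p_anova = f_oneway(q1, q2, q3)

# Option B: Kruskal-Wallis, the non-parametric analogue on the same layout
H, p_kw = kruskal(q1, q2, q3)
```

Ranking the nine kinematic variables by F (or H) then gives the selection order. Since the outcome is ordinal, ordinal logistic regression is another option worth considering alongside either test.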
I want to do association mining of those proteins.
Are there any researchers doing NeuroIS research in Malaysia?
Our research group here at UTeM would be happy to collaborate on related projects.
I am encountering a potential bias in my estimates of coherence as a function of window size in my neuroscience data (LFP recordings from PFC). Specifically, when I use a shorter window, the estimated coherence peak shifts to a higher frequency. (I am varying the window size because I am generating a spectrogram.)
The time series data are not necessarily stationary. However, prior to estimating coherence, I subtract the trial-average signal from each trial in order to ensure that my data is zero-mean.
I understand that variations in window size can bias spectral estimates in general. For example, spectral estimates of pure sinusoids will appear instead as broader sinc functions for shorter windows and more delta-like for long windows.
How can shortening the window size produce such a frequency shift in coherence estimates?
Hello everybody
How can I replace artificial neurons (e.g., stochastic binary units) with spiking neurons (e.g., LIF or Hodgkin-Huxley) in an architecture such as an RBM?
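A common bridge is to interpret an RBM unit's Bernoulli activation probability as a firing rate and drive a spiking unit with a corresponding input current. As a minimal, assumption-laden sketch (the function name and parameter values are illustrative, not from any particular library), a leaky integrate-and-fire unit that could stand in for a stochastic binary unit looks like this:

```python
import numpy as np

def lif_spikes(input_current, dt=1e-3, tau=0.02, v_th=1.0, v_reset=0.0):
    """Minimal leaky integrate-and-fire neuron.

    Integrates dv/dt = (-v + I) / tau with Euler steps; emits a
    spike (1.0) and resets whenever the membrane potential crosses
    v_th. The binary spike train can replace the stochastic binary
    unit's samples in a spiking-RBM setup.
    """
    v = 0.0
    spikes = np.zeros_like(input_current, dtype=float)
    for i, I in enumerate(input_current):
        v += dt / tau * (-v + I)
        if v >= v_th:
            spikes[i] = 1.0
            v = v_reset
    return spikes
```

For anything beyond a toy experiment, simulators such as Brian2 or NEST provide validated LIF and Hodgkin-Huxley models, and the remaining design work is mapping the RBM's weights and sampling schedule onto synaptic currents and spike counts.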
There are loads of mnemonic techniques out there, ranging from the method of loci to musical mnemonics. It is my understanding and experience that mnemonics can increase the amount of information learned and prolong the period in which it can be recalled. I think musical mnemonics is a great example, since one can easily realize the vast amount of lyrics from regular songs that one can recall. Imagine if these songs were encoded with meaningful information...
Hello -
I plan to screen neural circuits for a certain behavior using the GAL4/UAS system. I am unsure which toolkits I should use to activate or inactivate neurons. From what I can find, people usually use TNT/TrpA/Kir2.1/etc. for such procedures. For my purpose, it seems more reasonable to block neuronal activity rather than activate it. So I guess I need to choose between TNT and Kir2.1. Does anyone know the differences between these two?
Thanks,
Yang
- Creativity is the intellectual ability to make creations, inventions, and discoveries that bring novel relations, entities, and/or unexpected solutions into existence [Wang, 2009, 2013]. Creativity is a gifted human ability in thinking, inference, problem solving, and product development.
- The cognitive foundation of creativity is a new and unusual relation, neurophysiologically represented by a synaptic connection, between two or more objects that generates a novel and meaningful concept, solution, method, explanation, or product.
- As a cognitive process, the first-phase of creativity is search-based for discovering a novel relation; while the second-phase of creativity known as justification is inductive and logical.
Statistics show that the ratio of right- to left-handers in the population is about 90% to 10%.
The latest finding in neuroinformatics indicates that human long-term memory (LTM) is retained as a neurophysiological network with unconsciously built physiological synaptic connections at each node, like a gated diode in an electronic circuit [Wang & Fariello, 2012; ICIC, http://www.ucalgary.ca/icic/]. Therefore, there seems to be no mechanism to intentionally (consciously) remove or forget such a connection once it is established in LTM, except perhaps a precise surgery in the future. However, one may intentionally bypass a piece of (unwanted) memory or lose the paths to access a certain memory, for instance, because of aging.
With regards to short-term memory (STM), however, we still don’t fully understand the mechanisms of how we unconsciously and selectively forget certain information but memorize the rest by permanently retaining them into LTM. This is the key mechanism of learning. Guess what one can do if it’s consciously controllable one day.
Miller's principle seems to fit only people's ordinary ability to remember random, trivial numbers, rather than the capacity of the entire short-term memory (STM). Modern brain imaging technologies have revealed that the active areas of human STM during thinking cover a vast scope of the frontal lobe of the cortex, whose magnitude is several orders higher than that needed for a few digits.
The latest findings suggest that the capacity of STM is much greater than Miller's estimate, because a thinking thread in the mind (STM) may involve hundreds or even thousands of concepts and data items necessary for complex problem-solving [Wang, 2013; ICIC, http://www.ucalgary.ca/icic/]. It is also observed that STM is the only significantly growing area in the adult brain, and its capacity can reach billions of bits or more, depending on education, training, and demand of usage.
Thinking and reasoning are usually inference- and causality-based where people explore a tremendous knowledge space using their own brains. This practice has become a driving force for brain development by exposing it to very complex and difficult problems.
However, nowadays, children's thinking and reasoning tend to be search-engine-based in cyberspace, mainly outside the brain. Knowledge in the brains of the young generation is dominated by web links rather than the entities and causal relations themselves. Although this may be efficient and convenient, can search provide answers for problems whose solutions are unknown in cyberspace? Further, when no search engine is available or no suitable result is found, can reasoning still be carried out smoothly, or will it falter, much as a GPS-dependent driver struggles in new territory without the GPS?
There is growing evidence that some kids may not live comfortably without the web and search engines. Will this habit hinder the power of human brains to deal with really complex, hard, and creative (answer-unknown) problems in the future? Will there be a serious impact on the intellectual power and ways of thinking of the young generation?
It has been found in neuroscience and neuroinformatics that there are only three basic forms of neurons, known as association, sensory, and motor neurons, in the human nervous system [Marieb, 1992; Wang and Fariello, 2012]. Among them, over 90% of the neurons in the central nervous system of the human brain are association neurons, which form the neurophysiological foundation of natural intelligence.
However, the ANN model [Hopfield, 1982] is much closer to the sensory neurons, acting as a transducer of external stimuli, and plays no role in reasoning and inference. The mathematical model of ANNs is a dynamic weighted sum, or more formally a general, possibly convergent, function, which is far different from any of the basic neural models or their compositions. It is perhaps a good instance indicating that artificial intelligence (AI) may not be rigorously studied before natural intelligence (NI), along with its neurophysiological and cognitive foundations, is well understood.
Therefore, in order to rigorously explain the mechanisms by which the human nervous system gives rise to natural intelligence, the following question becomes fundamental: what are the meta-forms of neural circuits from which complex neural networks are built?
Here are the specifics:
The current data summary is here: http://nif-services.neuinfo.org/servicesv1/v1/summary?q=*
The other services are based on the ID of each database and can be accessed in a similar way.
The documentation is here: http://nif-services.neuinfo.org/servicesv1/
*Note: we have aligned some data sources into single views, like nervous system connectivity (nif-0000-07732-1), annotations (nlx_149407-1), or research animals (nif-0000-08137-1). These sources envelop other sources, and those source IDs, like BAMS (nif-0000-00018), will respond to the identifiers in the query parameter (q=), but not in the source parameter (/nlx-144509-1).
*For the less XML-literate, the same data can be explored in a GUI: https://neuinfo.org/mynif/search.php?q=*
It would be helpful to know if anyone in neuroinformatics would find this useful, and if you don't I would love to know why.
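As a small illustration of how the services above can be driven programmatically (the helper name is mine; only the base URL and the q= parameter come from the post), the query URLs can be built like this:

```python
from urllib.parse import urlencode

BASE = "http://nif-services.neuinfo.org/servicesv1/v1"

def summary_url(query="*"):
    """Build a URL for the NIF summary endpoint.

    Only the base URL and the q= parameter are taken from the post;
    this helper is illustrative, not part of the service's own tooling.
    """
    return f"{BASE}/summary?{urlencode({'q': query})}"

# The resulting URL can then be fetched with urllib.request.urlopen
# or any HTTP client, returning XML that can be parsed with
# xml.etree.ElementTree.
```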
Which cognitive states are responsible for the formation of imagination?
Does imagination form in the workspace, is there another cognitive state that is responsible, or does imagination itself contain several states?
How is imagination formed? How is it helpful in the learning mechanism? How can we estimate the outcome of an action before taking it? How can we implement imagination in machines?