Science topic
Neural Networks - Science topic
Everything about neural networks
Questions related to Neural Networks
The decision-making process in Neural Networks poses a significant challenge known as the 'Black Box' problem. NNs are compelling in various applications, but issues could arise when accountability becomes crucial. How can one address the challenge of ensuring that a model is free from decision-making biases, and to what extent does this challenge affect the entire industry? Are there any papers or books that delve into the 'Black Box' problem and provide insights into ensuring that NNs make unbiased decisions?
I am planning to do some literature work on rational neural networks and the functionality of activation functions such as the sigmoid. Please recommend some effective articles on these topics.
I am preparing my Bachelor final thesis in computer engineering. I am currently planning out the work. My idea is to compare traditional approaches to building recommender systems to Graph Neural Network based approaches. The plan so far is to use the Movie Lens 100k dataset, which contains data on users, movies, and user-movie ratings. The task of the recommender system would be to predict the missing ratings for user A and recommend movies based on that (say top 5 highest predictions). I would present three approaches to this task:
- Traditional content-based filtering approach
- Traditional collaborative filtering based approach
- Graph Neural Network
Given this very general outline, would you say that this seems like a good project idea? The MovieLens dataset seems to be quite popular for experimenting with GNNs, but feel free to suggest a better dataset for this setup.
For details on the current OpenAI leadership situation see e.g.
How will the rivalry between IT professionals operating on two sides of the barricade, i.e. in the sphere of cybercrime and cyber security, change after the implementation of generative artificial intelligence, Big Data Analytics and other technologies typical of the current fourth technological revolution?
Almost from the very beginning of ICT development, there has been a rivalry between IT professionals operating on two sides of the barricade, i.e. in the spheres of cybercrime and cyber security. Whenever technological progress produces a new technology that facilitates remote communication and the digital transfer and processing of data, that technology is also put to use in hacking and/or cybercriminal activities. Similarly, when the Internet appeared, it created a new sphere of remote communication and digital data transfer on the one hand, while on the other it became an almost perfect environment for the development of new hacking and cybercriminal techniques. Now the next stage of technological progress may be taking place: the transition from the fourth to the fifth technological revolution and the development of Industry 5.0 technology, supported by the implementation of deep-learning artificial neural networks and constantly improved generative artificial intelligence. The development of generative artificial intelligence and its applications will significantly increase the efficiency of business processes and raise labour productivity in companies and enterprises operating in many different sectors of the economy. Accordingly, after the implementation of generative artificial intelligence, Big Data Analytics and other technologies typical of the current fourth technological revolution, the competition between IT professionals operating on the two sides of the barricade, i.e. in the spheres of cybercrime and cyber security, will probably change. However, what will be the essence of these changes?
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How will the competition between IT professionals operating on the two sides of the barricade, i.e., in the sphere of cybercrime and cyber security, change after the implementation of generative artificial intelligence, Big Data Analytics and other technologies typical of the current fourth technological revolution?
How will the realm of cybercrime and cyber security change after the implementation of generative artificial intelligence?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz

Can you explain the concept of the vanishing gradient problem in deep learning? How does it affect the training of deep neural networks, and what techniques or architectures have been developed to mitigate this issue?
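As a concrete illustration, here is a minimal NumPy sketch (not tied to any particular framework or real architecture) showing why stacked sigmoids cause the problem: backpropagation multiplies one sigmoid derivative per layer, and each factor is at most 0.25, so the gradient shrinks exponentially with depth. ReLU activations, residual connections, batch normalization, careful initialization, and LSTM gating all attack exactly this multiplicative decay.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Forward pass through a chain of sigmoid "layers" (unit weights for clarity),
# then backpropagate: each layer multiplies the gradient by sigmoid'(z) <= 0.25.
depth = 20
activations = [0.5]
for _ in range(depth):
    activations.append(sigmoid(activations[-1]))

grad = 1.0
for a in reversed(activations[1:]):
    grad *= a * (1.0 - a)   # derivative of the sigmoid, expressed via its output

print(f"gradient after {depth} sigmoid layers: {grad:.3e}")
# The product is bounded by 0.25**depth, i.e. it vanishes exponentially.
```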
If an imitation of human consciousness called artificial consciousness is built on the basis of AI technology in the future, will it be built by mapping the functioning of human consciousness or rather as a kind of result of the development and refinement of the issue of autonomy of thought processes developed within the framework of "thinking" generative artificial intelligence?
Answers to this question may vary. The key issue, however, is the moral dilemmas raised by the applications of constantly developing and improving artificial intelligence technology, and the preservation of ethics in the process of developing those applications. Beyond that, a key prerequisite is the need to explore and clarify more fully what human consciousness is, how it is formed, and how it functions within specific networks of neurons in the human central nervous system.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
If an imitation of human consciousness called artificial consciousness is built on the basis of AI technology in the future, will it be built by mapping the functioning of human consciousness or rather as a kind of result of the development and refinement of the issue of autonomy of thought processes developed within the framework of "thinking" generative artificial intelligence?
How can artificial consciousness be built on the basis of AI technology?
And what is your opinion on this topic?
What do you think about this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz

I decided to learn about the use of neural networks in econometrics, regardless of my subsequent employment. One PhD holder explained to me that:
"In econometric research, the explainability of models is important; neural networks do not provide this. For time series, neural networks can be used, but only with a special architecture, for example, LSTM. For macroeconomic forecasting tasks, as a rule, neural networks are not used. ARIMA/SARIMA, VAR, ECM are used."
But on one forum they explained to me that
"A typical task in the field of time series analysis is to predict, from a sequence of previous values of a time series, the most likely next/future value. The Large Language Model (LLM), which underlies the same ChatGPT, predicts which word or phrase will be next in a sentence or phrase, i.e. in a sequence of words in natural language. The current ChatGPT is implemented using so-called transformers - neural networks, which after 2017 began to actively replace the older, but also neural network and also sequence-oriented LSTM (long short-term memory networks) architecture, and not only in text processing tasks, but also in other areas."
So the use of transformers in time series forecasting seems promising? It appears to be a relatively young and still little-studied area?
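Whatever sequence model is used (LSTM, transformer, or a classical baseline), the forecasting task is usually framed the same way: slide a window over the series and learn to map each window to the next value. Below is a minimal NumPy sketch of that framing, with a linear least-squares fit standing in for the neural sequence model; the window construction is the part that carries over unchanged.

```python
import numpy as np

# Frame next-value forecasting as supervised learning: each window of `lag`
# past values is an input, the following value is the target.  An LSTM or a
# transformer would consume the same (window -> next value) pairs.
rng = np.random.default_rng(0)
t = np.arange(200)
series = np.sin(0.1 * t) + 0.05 * rng.standard_normal(200)

lag = 8
X = np.stack([series[i:i + lag] for i in range(len(series) - lag)])
y = series[lag:]

coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # AR(8) baseline model
pred = X @ coef
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print(f"one-step-ahead RMSE: {rmse:.4f}")
```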
Dear Expert,
I used a neural network in MATLAB with inputs [10*3], target data [10*1], and one hidden layer [25 neurons]. How can I create an equation that correctly estimates the predicted target?
(Based on the ANN created, weights, biases, and related inputs)
Is there a method, tool, or idea to solve this issue and create one final equation that predicts the output?
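If the network is MATLAB's default fitting network (tansig hidden layer, purelin output), the closed-form prediction is y = b2 + LW * tanh(IW * x + b1). One caveat: by default MATLAB also applies mapminmax normalization to inputs and targets, so the raw weights describe the normalized network and the normalization must be applied and inverted around this equation. A NumPy sketch with hypothetical random weights in place of the trained ones:

```python
import numpy as np

# Hypothetical weights for a 3-input, 25-neuron, 1-output network
# (MATLAB fitnet default: tansig hidden layer, purelin output layer).
rng = np.random.default_rng(1)
IW = rng.standard_normal((25, 3))   # input-to-hidden weights
b1 = rng.standard_normal(25)        # hidden-layer biases
LW = rng.standard_normal((1, 25))   # hidden-to-output weights
b2 = rng.standard_normal(1)         # output bias

def predict(x):
    """Closed-form equation: y = b2 + LW * tanh(IW*x + b1)."""
    return b2 + LW @ np.tanh(IW @ x + b1)

y = predict(np.array([0.1, 0.2, 0.3]))
print(y)
```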
I am new to machine learning. I am working on a regression neural network to predict the outcomes of my experiments. I created a neural network with one hidden layer to predict the outcomes; now I have to tune the hyperparameters to optimize the NN.
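As one possible starting point, a cross-validated grid search is a common way to tune such hyperparameters. The sketch below uses scikit-learn's MLPRegressor on toy data standing in for the experimental outcomes; the grid values are illustrative, not recommendations.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor

# Toy regression data standing in for the experimental outcomes.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))
y = X[:, 0] ** 2 + np.sin(X[:, 1]) + 0.05 * rng.standard_normal(200)

# Cross-validated search over hidden-layer size and L2 regularisation.
grid = {
    "hidden_layer_sizes": [(10,), (25,), (50,)],
    "alpha": [1e-4, 1e-2],
}
search = GridSearchCV(MLPRegressor(max_iter=1000, random_state=0), grid, cv=3)
search.fit(X, y)
print("best hyperparameters:", search.best_params_)
print("best CV R^2:", round(search.best_score_, 3))
```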
Could you elaborate on the difficulties and obstacles that arise when training deep neural networks, and how researchers and practitioners have attempted to address these challenges?
How can I calculate the RMSE value (for both the testing and training sets) of an artificial neural network using SPSS? In the output there is a "parameter estimates" heading; does the output value act as the testing value, and the predicted value under the input layer as the training value? I am attaching my parameter estimates table output for a clearer understanding.
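Independently of how SPSS labels its output tables, RMSE is simply the square root of the mean squared difference between observed and predicted values, computed separately on the training and testing partitions. A minimal sketch (the numbers are made up) that you could apply to per-case predicted values exported from SPSS:

```python
import numpy as np

def rmse(actual, predicted):
    """Root-mean-square error between observed and predicted values."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

# SPSS can export predicted values per case; compute RMSE separately for the
# training and testing partitions (illustrative numbers only).
train_rmse = rmse([3.0, 4.0, 5.0], [2.5, 4.2, 5.1])
test_rmse = rmse([6.0, 7.0], [5.5, 7.4])
print(train_rmse, test_rmse)
```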
I constantly use genetic algorithms and neural networks; if you know of a better method for high-dimensional data, please share it.
Isn't this how humans learn? First remember some things, then make some guesses about new things based on existing memories, just like a neural network?
So, do you feel that the current path of deep learning can lead to AGI (Artificial General Intelligence)?
The intersection of neuroscience, electronics, and AI has sparked a profound debate questioning whether humanity can be considered a form of technology itself. This discourse revolves around the comparison of the human chemical-electric nodes—neurons, with the nodes of a computer, and the potential implications of transplanting human consciousness into machines.
Neurons, as the elemental building blocks of the human brain, operate through the transmission of electrochemical signals, forming a complex network that underpins cognitive functions, emotions, and consciousness. In contrast, computer nodes are physical components designed to process and transmit data through electrical signals, governed by programmed algorithms.
The notion of transferring the human mind into a machine delves into the essence of human identity and the philosophical nuances of consciousness. While it may be feasible to replicate certain cognitive functions within a machine by mimicking neural networks, there are profound ethical and philosophical implications at stake.
Critics argue that even if a machine were to replicate the intricacies of the human brain, it would lack essential human qualities such as emotions, subjective experiences, and moral reasoning, thus failing to encapsulate the essence of human consciousness. Furthermore, the concept of integrating the human mind with machines raises complex questions about the nature of identity and self-awareness. If the entirety of a human mind were to be transplanted into a machine, the resulting entity may no longer fit the traditional definition of human, but rather a hybrid of human cognition and artificial intelligence.
On the other hand, proponents of merging human minds with machines foresee the potential for significant advancements in AI and neuroscience, suggesting that through advanced brain-computer interfaces, it might be possible to enhance human cognition and expand the capabilities of the human mind, blurring the boundaries between organic and artificial intelligence.
As the realms of electronics and AI continue to evolve, the question of whether humanity itself can be perceived as a form of technology remains a deeply contemplative issue. It is imperative that as these technological frontiers advance, ethical considerations and respect for human values are prioritized, ensuring that any progression in this field aligns with the preservation of human dignity and integrity.
The advancement of technology and the intricacies involved in simulating human cognitive processes suggest that it might be plausible for machines to exhibit emotions akin to humans. As the complexity of AI systems increases, managing a vast number of nodes and intricate algorithms could potentially lead to unexpected and seemingly irrational behaviors, which might even resemble emotional responses.
Similarly to how a basic machine operates in a predictable and precise manner devoid of human characteristics, the proliferation of complexity in a machine's structure could lead to the emergence of seemingly irrational or emotional behaviors. Managing the intricate interplay between a multitude of nodes might result in the manifestation of behaviors that mimic emotions, despite the absence of genuine human experience.
These behaviors could be centered around learned and preprogrammed principles, allowing the machine to respond in a manner that mirrors human emotions.
Moreover, the ability to simulate emotions in machines has gained traction due to the growing understanding of the role of neural networks and the intricate interplay of various computational elements within AI systems. As AI models become more sophisticated, they could feasibly process information in a way that mirrors the human emotional experience, albeit based on programmed responses rather than genuine feelings.
While the debate about whether machines can truly experience emotions similar to humans remains unsettled, the increasingly complex and interconnected nature of AI systems hints at the potential for machines to display a form of emotive behavior as they grapple with the challenges of managing a multitude of nodes and algorithms.
This perspective challenges the conventional notion that emotions are exclusively tied to human consciousness and suggests that with the advancement of technology, machines might exhibit behaviors that closely resemble human emotions, albeit within the confines of programmed and learned parameters.
In the foreseeable future, it is conceivable that machines will surpass the human mind in terms of node count, compactness, and complexity, operating with heightened efficiency. As this technological advancement unfolds, it is plausible that profound questions may arise regarding whether the frequencies generated by the human brain are inferior to those generated by machines.
I know that a lot of artificial neural networks have appeared now. Maybe soon we will not read articles or do our scientific work ourselves, and AI will help us. Maybe it is happening already? What is your experience working with AI and neural networks in science?
Hello everyone! I am studying Graph Neural Networks to apply to my field.
My problem: I have a dataset with multiple graphs. Each node in a graph has a label Y. I want to predict the Y labels of the nodes in a new graph.
I want to ask: can I make such predictions with a Graph Neural Network? If so, could you give me some hints?
Below is an illustration of my question.
Thank you!
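Yes, this is the standard inductive node-classification setting: train a GNN (e.g. a GCN or GraphSAGE) on the labelled graphs, then run it on unseen graphs. As a hint of what one message-passing layer computes, here is a minimal NumPy sketch of a single graph-convolution forward pass, using random untrained weights on a hypothetical 4-node graph:

```python
import numpy as np

# One graph-convolution layer (Kipf & Welling style):
#   H' = relu( D^-1/2 (A + I) D^-1/2  X  W )
# Weights here are random, i.e. untrained; a real model learns W from the
# labelled graphs and then applies the same layer to any new graph.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # adjacency of a 4-node graph
X = rng.standard_normal((4, 3))             # 3 features per node
W = rng.standard_normal((3, 2))             # 2 output classes

A_hat = A + np.eye(4)                        # add self-loops
d = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(d, d))     # symmetric normalisation
logits = np.maximum(A_norm @ X @ W, 0.0)     # relu
pred = logits.argmax(axis=1)                 # one class label per node
print(pred)
```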

The experiment conducted by Bose at the Royal Society of London in 1901 demonstrated, he argued, that plants have feelings like humans. Placing a plant in a vessel containing a poisonous solution, he showed the rapid movement of the plant, which finally died down. His finding was praised, and the concept of plant life was established. If we scold a plant it does not respond, but an AI bot does. How, then, can we disprove the life of a chatbot?
What are the possibilities for the applications of Big Data Analytics backed by artificial intelligence technology in terms of improving research techniques, in terms of increasing the efficiency of the research and analytical processes used so far, in terms of improving the scientific research conducted?
The progressive digitization of data and archived documents, the digitization of data transfer processes, and the Internetization of communications, economic processes, and research and analytical processes are becoming typical features of today's developed economies. Currently another technological revolution is taking place, described as the fourth and, in some respects, already the fifth. Technologies categorized as Industry 4.0/5.0 are developing particularly rapidly and finding more and more applications. These technologies, which support research and analytical processes carried out in various institutions and business entities, include Big Data Analytics and artificial intelligence. The computational capabilities of microprocessors are successively increasing, and ever-larger sets of data and information are being processed. Databases of data and information extracted from the Internet and processed in the course of specific research and analysis processes are being created. Accordingly, the possibilities for applying Big Data Analytics supported by artificial intelligence technology to improving research techniques, increasing the efficiency of existing research and analytical processes, and improving the scientific research being conducted are also growing rapidly.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What are the possibilities of applications of Big Data Analytics supported by artificial intelligence technology in terms of improving research techniques, in terms of increasing the efficiency of the research and analytical processes used so far, in terms of improving the scientific research conducted?
What are the possibilities of applications of Big Data Analytics backed by artificial intelligence technology in terms of improving research techniques?
What do you think on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
The above text is entirely my own work written by me on the basis of my research.
Copyright by Dariusz Prokopowicz
On my profile of the Research Gate portal you can find several publications on Big Data issues. I invite you to scientific cooperation in this problematic area.
Dariusz Prokopowicz

I have a deep neural network in which I want to include a layer that has one input and two outputs. For example, I want to construct an intermediate layer where Layer-1 is connected to the input of this intermediate layer, one output of the intermediate layer is connected to Layer-2, and the other output is connected to Layer-3. Moreover, the intermediate layer should just pass the data through unchanged, without performing any mathematical operation on it. I have seen additionLayer in MATLAB, but it has only one output, and its number of outputs is read-only.
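For comparison, in functional-style frameworks no explicit pass-through layer is needed at all: the same intermediate tensor can simply be fed to two downstream layers (in MATLAB, connecting one layer's output to two destinations in a layerGraph with connectLayers should achieve the same fan-out, though I have not verified this for every release). A minimal NumPy sketch of the idea, with random untrained weights:

```python
import numpy as np

# Functional-style fan-out: one intermediate value feeds two branches.
rng = np.random.default_rng(0)
W1, W2, W3 = (rng.standard_normal((4, 4)) for _ in range(3))

def relu(z):
    return np.maximum(z, 0.0)

x = rng.standard_normal(4)
h = relu(W1 @ x)        # Layer-1 output == the "intermediate" value
out2 = relu(W2 @ h)     # Layer-2 consumes it ...
out3 = relu(W3 @ h)     # ... and Layer-3 consumes the very same tensor
print(out2.shape, out3.shape)
```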
Which new ICT information technologies are most helpful in protecting the biodiversity of the planet's natural ecosystems?
What are examples of new technologies typical of the current fourth technological revolution that help protect the biodiversity of the planet's natural ecosystems?
Which new technologies, including ICT information technologies, technologies categorized as Industry 4.0 or Industry 5.0 are helping to protect the biodiversity of the planet's natural ecosystems?
How do new Big Data Analytics and Artificial Intelligence technologies, including deep learning based on artificial neural networks, help protect the biodiversity of the planet's natural ecosystems?
New technologies, including ICT and technologies categorized as Industry 4.0 or Industry 5.0, are finding new applications. They are currently developing rapidly and are an important factor in the current fourth technological revolution. On the other hand, due to the still-high greenhouse gas emissions driving global warming, progressive climate change, increasingly frequent weather anomalies and climate disasters, growing environmental pollution, the rapid shrinking of forest areas, and predatory forest management, the biodiversity of the planet's natural ecosystems is declining rapidly. It is therefore necessary to engage new technologies, including ICT and Industry 4.0/5.0 technologies such as Big Data Analytics and artificial intelligence, in order to improve and scale up the protection of that biodiversity.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How do the new technologies of Big Data Analytics and artificial intelligence, including deep learning based on artificial neural networks, help to protect the biodiversity of the planet's natural ecosystems?
Which new technologies, including ICT information technologies, technologies categorized as Industry 4.0 or Industry 5.0 are helping to protect the biodiversity of the planet's natural ecosystems?
What are examples of new technologies that help protect the biodiversity of the planet's natural ecosystems?
How do new technologies help protect the biodiversity of the planet's natural ecosystems?
And what is your opinion on this topic?
What do you think about this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz

I have built a feed-forward fully connected neural network. Trying to specify its fitness function, I read a review paper by Ojha et al. (2017). The authors suggest including the accuracy of both training and test data sets in the fitness function (an evaluation metric), by which you could evaluate the performance of the neural network.
Considering that we build the neural network based on the training data set, I was wondering why we should include its accuracy (i.e., training accuracy) in the fitness function/evaluation metric. Why should the evaluation metric of parameter tuning not rely only on the test/validation accuracy?
How can artificial intelligence break through the existing deep learning/neural network framework, and what are the directions?
Activation functions play a crucial role in the success of deep neural networks, particularly in natural language processing (NLP) tasks. In recent years, the Swish-Gated Linear Unit (SwiGLU) activation function has gained popularity among researchers due to its ability to effectively capture complex relationships between input features and output variables. In this blog post, we'll delve into the technical aspects of SwiGLU, discuss its advantages over traditional activation functions, and demonstrate its application in large language models.
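A minimal NumPy implementation of the activation, assuming the common formulation SwiGLU(x) = Swish(xW + b) * (xV + c) with elementwise gating, where Swish(z) = z * sigmoid(beta * z) and beta = 1 (i.e. SiLU); the dimensions below are illustrative:

```python
import numpy as np

def swish(z, beta=1.0):
    """Swish / SiLU activation: z * sigmoid(beta * z)."""
    return z / (1.0 + np.exp(-beta * z))

def swiglu(x, W, V, b, c):
    """SwiGLU(x) = Swish(xW + b) * (xV + c), with elementwise gating."""
    return swish(x @ W + b) * (x @ V + c)

rng = np.random.default_rng(0)
d_in, d_hidden = 8, 16
W = rng.standard_normal((d_in, d_hidden))
V = rng.standard_normal((d_in, d_hidden))
b, c = np.zeros(d_hidden), np.zeros(d_hidden)

x = rng.standard_normal((4, d_in))      # batch of 4 token vectors
out = swiglu(x, W, V, b, c)
print(out.shape)
```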
I want to develop a neural-network-based system that can accurately and quickly recognize human actions in real time, both from live webcam feeds and from pre-recorded videos. My goal is to employ state-of-the-art techniques that can handle diverse actions and varying environmental conditions.
I would greatly appreciate any insights, recommendations, or research directions that experts could provide me with.
Thank you so much in advance.
Currently, I am exploring federated learning (FL). FL seems likely to become a major trend soon because of its promising functionality. Please share your valuable opinions regarding the following concerns.
- What are the current trends in FL?
- What are the open challenges in FL?
- What are the open security challenges in FL?
- Which emerging technology can be a suitable candidate to merge with FL?
Thanks for your time.
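To make the basic mechanics concrete, here is a minimal NumPy sketch of one round of federated averaging (FedAvg), the canonical FL algorithm: each client runs local gradient descent on its private data, and the server averages the resulting parameters weighted by client dataset size. The linear model and data are toy stand-ins.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=50):
    """A few steps of local gradient descent on a linear least-squares model."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Two clients with differently-sized private datasets drawn from the same
# underlying model; the data never leaves the clients.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for n in (80, 20):
    X = rng.standard_normal((n, 2))
    y = X @ w_true + 0.01 * rng.standard_normal(n)
    clients.append((X, y))

# One FedAvg round: broadcast the global model, train locally, average back.
w_global = np.zeros(2)
local_ws = [local_update(w_global, X, y) for X, y in clients]
sizes = np.array([len(y) for _, y in clients], dtype=float)
w_global = sum(s * w for s, w in zip(sizes / sizes.sum(), local_ws))
print(w_global)
```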
Discussion of issues related to the use of Neural Network Entropy (NNetEn) for entropy-based signal and chaotic time series classification. Discussion about the Python package for NNetEn calculation.
Main Links:
Python package
I plan to use the dataset to train my convolutional neural network based project.
I am seeking to extract a mathematical equation for each output of my neural network. After conducting research, I discovered that in Python this can potentially be achieved using libraries like gplearn. I have already trained an Artificial Neural Network (ANN), and I am eager to apply this approach to my model. Can anyone offer assistance or guidance on how to accomplish this?
Given the results of my mathematical calculations, it is imperative that I obtain the corresponding equations from my neural network to proceed with further computations and achieve accurate outcomes.
Hello,
A data augmentation technique called "elastic deformation" is used in the U-Net paper to help the network learn invariance to deformations, without the need to see these transformations in the annotated image corpus.
On the 6th page of the U-Net paper, the following sentences are written:
"We generate smooth deformations using random displacement vectors on a coarse 3 by 3 grid. The displacements are sampled from a Gaussian distribution with 10 pixels standard deviation. Per-pixel displacements are then computed using bicubic interpolation."
What is the basis for assuming sampling displacements of pixels from Gaussian distribution? Is it based on the physics of images? or can better distribution be simulated?
U-Net paper:
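Below is a rough reconstruction of the augmentation described in the quoted passage, assuming SciPy is available: random displacements are drawn on a coarse 3x3 grid from a Gaussian with a 10-pixel standard deviation, upsampled to a smooth per-pixel field with cubic splines (standing in for the paper's bicubic interpolation), and applied by spline resampling. The grid size, sigma, and interpolation details are my assumptions about a faithful reimplementation, not the authors' code.

```python
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def elastic_deform(image, grid=3, sigma=10.0, seed=0):
    """U-Net-style elastic deformation: coarse random displacement field,
    smoothly upsampled, then applied per pixel."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    # Random displacements on a coarse grid, Gaussian with std `sigma` pixels.
    coarse = rng.normal(0.0, sigma, size=(2, grid, grid))
    # Smooth per-pixel displacement field via cubic spline upsampling.
    dy = zoom(coarse[0], (h / grid, w / grid), order=3)
    dx = zoom(coarse[1], (h / grid, w / grid), order=3)
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.array([ys + dy, xs + dx])
    return map_coordinates(image, coords, order=3, mode="reflect")

img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0          # a square to deform
warped = elastic_deform(img)
print(warped.shape)
```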
Dear Researchers.
These days, machine learning applications in cancer detection have increased with the development of new image processing and deep learning methods. In this regard, what are your ideas about new image processing and deep learning methods for cancer detection?
Thank you in advance for participating in this discussion.
Hello everyone,
How to create a neural network with numerical values as input and an image as output?
Can anyone give a hint/code for this scenario?
Thank you in advance,
Aleksandar Milicevic
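As a minimal hint, the simplest such network is a "decoder": a dense layer maps the numeric vector to H*W values that are reshaped into an image. The NumPy sketch below uses random untrained weights purely to show the shapes; a real model would train these weights against target images (and would typically stack transposed convolutions on top of this step).

```python
import numpy as np

# A dense layer maps an n-dimensional numeric input to H*W pixel values,
# which are then reshaped into an image.  Weights are random (untrained).
rng = np.random.default_rng(0)
n_inputs, H, W = 5, 16, 16
Wd = rng.standard_normal((n_inputs, H * W)) * 0.1
bd = np.zeros(H * W)

def decode(x):
    flat = 1.0 / (1.0 + np.exp(-(x @ Wd + bd)))   # sigmoid -> pixels in (0, 1)
    return flat.reshape(H, W)

img = decode(rng.standard_normal(n_inputs))
print(img.shape)
```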
1. Convolutional Neural Networks (CNNs)
2. Random Forests (RF)
3. Support Vector Machines (SVM)
4. Deep Neural Networks (DNN)
5. Recurrent Neural Networks (RNN)
Which one is more accurate for LULC (land use / land cover) classification?
How can I implement a neural network for a 2D planar robotic manipulator to estimate the joint angles for a commanded position along a circular path? And how can I estimate the error of the defined mathematical model and of the neural network model along that circular path?
I am trying to update the parameters of a Bayesian neural network using the HMC algorithm. However, I am getting the error shown below:
ValueError: Encountered `None` gradient.
fn_arg_list: [<tf.Tensor 'mcmc_sample_chain/trace_scan/while/smart_for_loop/while/simple_step_size_adaptation___init__/_one_step/mh_one_step/hmc_kernel_one_step/leapfrog_integrate/while/leapfrog_integrate_one_step/add:0' shape=(46, 1) dtype=float32>]
grads: [None]
I want to build a model that simulates the state performance of a given grid point in a cell body under different states, using a neural network. I do not know how to divide the finite elements reasonably; that is, taking a grid point as the center, how to select its adjacent points.
I wanted to download AlexNet and VGG-16 CNN models that have been pre-trained on medical images. They could be pre-trained for any particular medical image task, such as segmentation or recognition, and should preferably handle medical images of various modalities. Are there any such models publicly available?
Dear all,
I have a series y (40 values from sales) and need to use neural networks in MATLAB Simulink to forecast the future values of y at times 41, 42, 43, ..., 50.
What is the nature of consciousness and how it arises from the physical processes of the brain?
Consciousness refers to our subjective experience of awareness, sensations, thoughts, and perceptions. It involves the integration of information from various sensory inputs and internal mental processes. Despite significant advancements in neuroscience and cognitive science, the exact nature of consciousness and how it arises from the physical processes of the brain are still subjects of ongoing investigation and debate.
Some of the key questions related to the nature of consciousness include:
- What is the relationship between the brain and consciousness?
- How does subjective experience emerge from neural activity?
- Can consciousness be explained solely by material processes, or does it involve non-physical aspects?
- Are there different levels or types of consciousness?
- What is the nature of self-awareness and the sense of personal identity?
Understanding consciousness has implications not only for neuroscience and cognitive science but also for philosophy, psychology, and even artificial intelligence. Exploring the nature of consciousness can potentially shed light on the fundamental nature of reality, the nature of the mind-body relationship, and our place in the universe.
What problems are specific to these neural network architectures when it comes to working with big data?
I would like to ask you about assistance in understanding the application of ANN for controlling PV systems and also if there is a lab suitable to implement my ideas.
I have seen a scale of at least ~1000 samples suggested for CNNs. I know it depends on many factors, such as the images and their details, but is there any rough estimate of the number of samples required to apply a CNN reliably?
Have you seen CNNs applied with only ~100 images?
If neural networks adopt the principle of deep learning, why haven't they been able to create their own language for communication today?
In the IRIS dataset (attached), I tested every method, such as LSVM, QSVM, a narrow neural network, and a wide neural network. For samples 71 and 84, the answer is wrong. Could these data points be wrong?
What type of deep learning architecture should we prefer while working on CNN models: standard models such as AlexNet and VGGNet, or customised models (with user-defined layers in the neural network architecture)?
Finding the best features (sometimes called metrics or parameters) to input into a neural network by trial and error can be a very lengthy process. Classic feature selection methods in machine learning are 'extra tree classifiers', 'univariate feature selection', 'recursive feature elimination', and 'linear discrimination analysis' (a supervised learning version of PCA). Are there other more modern methods that have evolved recently which are more powerful than these?
Inputting too many redundant or worthless features into a neural network reduces the accuracy, as does omitting the most useful features. Restricting the neural network's input to the most relevant features is key to getting the highest accuracy from a neural network.
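One widely used, model-agnostic method that goes beyond the classics listed above is permutation importance: shuffle one feature at a time on held-out data and measure how much the validation score drops. A scikit-learn sketch on synthetic data (the dataset and model here are illustrative, not recommendations):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data with only a few genuinely informative features.
X, y = make_classification(n_samples=400, n_features=10, n_informative=3,
                           n_redundant=2, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# Fit any model, then shuffle each feature on the validation split and
# record how much the score degrades -- larger drop = more important.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
result = permutation_importance(model, X_val, y_val, n_repeats=10,
                                random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
print("features ranked by importance:", ranking)
```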
OpenAI Chief Ilya Sutskever noted that neural networks may be already conscious. Would you agree?
If any researcher wants to collaborate on a manuscript related to neural networks, I can help. I have experience with neural networks.
What are the applications of neural networks in telecommunications?
Mainly in the machine learning and neural network sectors under AI.
How do you choose the measurement method (metric) used in neural networks?
What are the different types of deep neural networks?
Why is it important to include nonlinearity in neural networks?
What are neural networks, and how do they relate to artificial intelligence?
What is the impact of varying the number of hidden layers in a deep neural network on its performance for a specific classification task, and how does this impact change when different activation functions are used?
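One way to study this empirically is to cross-validate the same architecture family over a grid of depths and activation functions on a fixed classification task. A scikit-learn sketch on a toy dataset (all choices illustrative; real conclusions would need a larger study):

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Fixed binary classification task; vary depth and activation, hold the
# rest of the configuration constant, and compare cross-validated accuracy.
X, y = make_moons(n_samples=400, noise=0.2, random_state=0)

scores = {}
for depth in (1, 2, 4):
    for act in ("relu", "tanh", "logistic"):
        clf = MLPClassifier(hidden_layer_sizes=(16,) * depth, activation=act,
                            max_iter=500, random_state=0)
        scores[(depth, act)] = cross_val_score(clf, X, y, cv=3).mean()

for key, val in sorted(scores.items()):
    print(key, round(float(val), 3))
```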