Artificial Neural Networks - Science topic
Explore the latest questions and answers in Artificial Neural Networks, and find Artificial Neural Networks experts.
Questions related to Artificial Neural Networks
Recent advances in spiking neural networks (SNNs), standing as the next generation of artificial neural networks, have demonstrated clear computational benefits over traditional frame- or image-based neural networks. In contrast to more traditional artificial neural networks (ANNs), SNNs propagate spikes, i.e., sparse binary signals, in an asynchronous fashion. Using more sophisticated neuron models, such brain-inspired architectures can in principle offer more efficient and compact processing pipelines, leading to faster decision-making with low computational and power resources, thanks to the sparse nature of the spikes. A promising research avenue is the combination of SNNs with event cameras (or neuromorphic cameras), a new imaging modality enabling low-cost imaging at high speed. Event cameras are also bio-inspired sensors, recording only temporal changes in intensity. This generally reduces the amount of recorded data drastically and, in turn, allows higher effective frame rates, as most static or background objects seen by the camera can be discarded. Typical applications of this technology include detection and tracking of high-speed objects, surveillance, and imaging and sensing from highly dynamic platforms.
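To make the spiking idea concrete, here is a minimal leaky integrate-and-fire (LIF) neuron in Python; the time constant, threshold, input weight, and spike probability are arbitrary illustrative values, not parameters from any particular SNN framework:

import numpy as np

# Minimal leaky integrate-and-fire neuron: the membrane potential decays
# toward rest and jumps on each incoming binary spike; an output spike is
# emitted (and the potential reset) when the threshold is crossed.
rng = np.random.default_rng(0)
dt, tau, v_thresh, v_reset = 1.0, 20.0, 1.0, 0.0   # illustrative constants
in_spikes = rng.random(100) < 0.15                  # sparse binary input train
v, out_spikes = 0.0, []
for s in in_spikes:
    v += dt / tau * (-v) + 0.3 * s                  # leak plus weighted input
    if v >= v_thresh:
        out_spikes.append(True)
        v = v_reset                                 # reset after firing
    else:
        out_spikes.append(False)
print(f"{sum(out_spikes)} output spikes from {in_spikes.sum()} input spikes")

Between spikes nothing needs to be computed, which is exactly the sparsity that event cameras exploit on the sensing side.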
Could a thinking generative artificial intelligence independently make decisions contrary to human expectations which could lead to the annihilation of humanity?
Recently, the technology of generative artificial intelligence, which is taught activities and skills previously performed only by humans, has been developing rapidly. The learning process uses artificial neural network technologies built on the likeness of human neurons, as well as deep learning. In this way, intelligent chatbots are created that can converse with people in such a way that it becomes increasingly difficult to distinguish whether we are talking to a human or to an intelligent chatbot. Chatbots are taught to converse using large sets of digital data and information, and the process of conversation, including answering questions and executing specific commands, is perfected through guided conversations. In addition, tools available on the Internet based on generative artificial intelligence are able to create graphics, photos and videos according to given commands. Intelligent systems are also being created that specialize in solving specific tasks and are becoming more and more helpful to humans in solving increasingly complex problems. The number of new applications for specially created tools equipped with generative artificial intelligence is growing rapidly. On the other hand, not all aspects of the development of artificial intelligence are positive. There are more and more examples of negative applications, through which, for example, fake news is created in social media and disinformation is generated on the Internet. Possibilities are emerging for the use of artificial intelligence in cybercrime and in deliberately shaping the general social awareness of Internet users on specific topics. In addition, for several decades science-fiction films have presented futuristic visions in which intelligent robots and artificial intelligence systems, instead of helping people, rebelled against humanity: autonomous cyborgs equipped with artificial intelligence (e.g. The Terminator), artificial intelligence systems managing the flight of an interplanetary manned mission (e.g. 2001: A Space Odyssey), or artificial intelligence systems and intelligent robots that turned humanity into a source of electricity for their own needs (e.g. The Matrix trilogy). This topic has become relevant again. There are attempts to create autonomous human-like cyborgs equipped with artificial intelligence systems, robots able to converse with humans and carry out certain commands. Research is being undertaken to create something that imitates human consciousness, referred to as artificial consciousness, as part of the improvement of generative artificial intelligence systems. There are many indications that humans are striving to create a thinking generative artificial intelligence. It cannot be ruled out that such a machine could independently make decisions contrary to human expectations, which could lead to the annihilation of mankind. In view of the above, under the conditions of dynamic development of generative artificial intelligence technology, considerations about the potential dangers that this technology may pose to humanity in the future have once again become relevant.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Could a thinking generative artificial intelligence independently make decisions contrary to human expectations which could lead to the annihilation of humanity?
Could a thinking generative artificial intelligence independently make decisions contrary to human expectations?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz

Deep learning is a branch of machine learning that uses artificial neural networks to perform complex computations on large datasets. It loosely mimics the structure and function of the human brain and trains machines by learning from examples. Deep learning is widely used by industries that deal with complex problems, such as health care, eCommerce, entertainment, and advertising.
This post explores the basic types of artificial neural networks and how they work to enable deep learning algorithms.
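As a concrete miniature of what such a network computes, here is a single forward pass through a two-layer network in plain NumPy; the random weights are stand-ins for values a training algorithm would learn:

import numpy as np

rng = np.random.default_rng(1)
x = rng.random(4)                             # one example with 4 input features
W1, b1 = rng.random((8, 4)), rng.random(8)    # hidden layer: 8 neurons
W2, b2 = rng.random((1, 8)), rng.random(1)    # output layer: 1 neuron
h = np.maximum(0.0, W1 @ x + b1)              # ReLU activation ("neuron firing")
y = W2 @ h + b2                               # network output
print(y)
# Training (e.g. backpropagation) adjusts W1, b1, W2, b2 so y matches targets.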
How will the rivalry between IT professionals operating on two sides of the barricade, i.e. in the sphere of cybercrime and cyber security, change after the implementation of generative artificial intelligence, Big Data Analytics and other technologies typical of the current fourth technological revolution?
Almost from the very beginning of the development of ICT, there has been a rivalry between IT professionals operating on the two sides of the barricade, i.e. in the spheres of cybercrime and cyber security. Within the framework of ongoing technological progress, whenever a new technology emerges that facilitates remote communication and the digital transfer and processing of data, that technology is, on the other hand, also used for hacking and/or cybercriminal activities. Similarly, when the Internet appeared, on the one hand a new sphere of remote communication and digital data transfer was created; on the other hand, new techniques of hacking and cybercriminal activity were created, for which the Internet became a kind of perfect environment for development. Now, perhaps, the next stage of technological progress is taking place, consisting of the transition from the fourth to the fifth technological revolution and the development of Industry 5.0 technology, supported by the implementation of constantly improved generative artificial intelligence built on artificial neural networks subjected to deep learning. The development of generative artificial intelligence technology and its applications will significantly increase the efficiency of business processes and labor productivity in companies and enterprises operating in many different sectors of the economy. Accordingly, after the implementation of generative artificial intelligence, Big Data Analytics and other technologies typical of the current fourth technological revolution, the competition between IT professionals operating on the two sides of the barricade, i.e. in the spheres of cybercrime and cyber security, will probably change. However, what will be the essence of these changes?
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How will the competition between IT professionals operating on the two sides of the barricade, i.e., in the sphere of cybercrime and cyber security, change after the implementation of generative artificial intelligence, Big Data Analytics and other technologies typical of the current fourth technological revolution?
How will the realm of cybercrime and cyber security change after the implementation of generative artificial intelligence?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz

How to build an intelligent computerized Big Data Analytics system that would retrieve real-time data and information from specific online databases, scientific knowledge indexing databases, domain databases, online libraries, information portals, social media, etc., and thus provide a database and up-to-date information for an intelligent chatbot, which would then be made available on the Internet for Internet users?
Almost every major technology company operating on the Internet either has already made its intelligent chatbot available online, or is working on one and will soon make it available to Internet users. The general formula for building, organizing and providing intelligent chatbots is analogous across individual technology companies; however, in detailed technological aspects there are specific differences. The differentiating factors include the timeliness of the data and information contained in the created databases of digitized data, data warehouses, Big Data databases, etc. These contain data sets acquired at different times and with different information characteristics from various online knowledge bases, publication-indexing databases, online libraries of publications, information portals, social media and other Internet sources.
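As a very rough sketch of the ingestion side of such a system: a periodic job pulls fresh records from a source into a local store that the chatbot then queries. The endpoint, schema, and naive keyword retrieval below are purely hypothetical placeholders; a production system would use search indexes or vector embeddings:

import sqlite3, requests

SOURCE_URL = "https://example.org/api/latest-articles"  # hypothetical source API

def ingest(db_path="knowledge.db"):
    # Pull the newest records and upsert them into a local store.
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS docs (id TEXT PRIMARY KEY, "
                "fetched_at TEXT, title TEXT, body TEXT)")
    for item in requests.get(SOURCE_URL, timeout=30).json():
        con.execute("INSERT OR REPLACE INTO docs VALUES (?, datetime('now'), ?, ?)",
                    (item["id"], item["title"], item["body"]))
    con.commit()
    con.close()

def retrieve(query, db_path="knowledge.db", k=5):
    # Naive keyword retrieval; the chatbot conditions its answer on the hits.
    con = sqlite3.connect(db_path)
    rows = con.execute("SELECT title, body FROM docs WHERE body LIKE ? LIMIT ?",
                       (f"%{query}%", k)).fetchall()
    con.close()
    return rows

Running ingest() on a schedule is what keeps the chatbot's knowledge base current, which is the timeliness issue raised above.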
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How to build an intelligent computerized Big Data Analytics system that would retrieve real-time data and information from specific online databases, scientific knowledge indexing databases, domain databases, online libraries, information portals, social media, etc., and thus provide a database and up-to-date information for an intelligent chatbot, which would then be made available on the Internet for Internet users?
How to build a Big Data Analytics system that would provide a database and up-to-date information for an intelligent chatbot made available on the Internet?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz

If an imitation of human consciousness, called artificial consciousness, is built on the basis of AI technology in the future, will it be built by mapping the functioning of human consciousness, or rather as a result of developing and refining the autonomy of the thought processes emerging within "thinking" generative artificial intelligence?
Solutions to this question may vary. However, the key issue is the moral dilemmas arising in the applications of constantly developing and improving artificial intelligence technology, and the preservation of ethics in the process of developing applications of these technologies. In addition, the key issues here include the need to more fully explore and clarify what human consciousness is, how it is formed, and how it functions within specific plexuses of neurons in the human central nervous system.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
If an imitation of human consciousness, called artificial consciousness, is built on the basis of AI technology in the future, will it be built by mapping the functioning of human consciousness, or rather as a result of developing and refining the autonomy of the thought processes emerging within "thinking" generative artificial intelligence?
How can artificial consciousness be built on the basis of AI technology?
And what is your opinion on this topic?
What do you think about this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz

I have read a few articles that used SPSS for artificial neural network analysis with survey data. What is your opinion about the user-friendliness of SPSS in this regard? Would you recommend any other software package?
Can the applicability of Big Data Analytics backed by artificial intelligence technology be significantly enhanced when these technologies are applied to the processing of large data sets extracted from the Internet and executed by the most powerful quantum computers?
Can the conduct of analysis and scientific research be significantly improved, its efficiency increased, and the execution of research work significantly shortened through the use of Big Data Analytics and artificial intelligence applied to the processing of large data sets and realized by the most powerful quantum computers?
What are the analytical capabilities of processing large data sets extracted from the Internet and realized by the most powerful quantum computers, which also apply Industry 4.0/5.0 technologies, including generative artificial intelligence and Big Data Analytics?
Can the scale of data processing carried out by the most powerful quantum computers be comparable to the processing that takes place in the billions of neurons of the human brain?
In recent years, the digitization of data and archived documents and of data transfer processes has been progressing rapidly.
The progressive digitization of data and archived documents, the digitization of data transfer processes, and the Internetization of communications, economic processes, and research and analytical processes are becoming typical features of today's developed economies. Accordingly, developed economies in which information and computer technologies are developing rapidly and finding numerous applications in various economic sectors are called information economies, and the societies operating in them are referred to as information societies. Increasingly, discussions of this issue state that another technological revolution is currently taking place, described as the fourth and, in some aspects, already the fifth. Technologies classified as Industry 4.0/5.0 are developing particularly rapidly and finding more and more applications. These technologies, which support research and analytical processes carried out in various institutions and business entities, include Big Data Analytics and artificial intelligence, including generative artificial intelligence built with artificial neural network technology and subjected to deep learning processes. The computational capabilities of microprocessors, which are becoming more and more capable and processing data faster and faster, are gradually increasing. There is a rapid increase in the processing of ever larger sets of data and information. The number of companies, enterprises, and public, financial and scientific institutions that create large data sets and massive databases of data and information, generated in the course of their activities or obtained from the Internet and processed in specific research and analytical processes, is growing. In view of the above, the opportunities for applying Big Data Analytics backed by artificial intelligence technology to improve research techniques, to increase the efficiency of existing research and analytical processes, and to improve the scientific research conducted are also growing rapidly. By combining Big Data Analytics with other Industry 4.0/5.0 technologies, including artificial intelligence and quantum computers, in the processing of large data sets, the analytical capabilities of data processing, and thus of conducting analysis and scientific research, can be significantly increased.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Can the conduct of analysis and scientific research be significantly improved, its efficiency increased, and the execution of research work significantly shortened through the use of Big Data Analytics and artificial intelligence applied to the processing of large data sets and implemented by the most powerful quantum computers?
Can the applicability of Big Data Analytics supported by artificial intelligence technology significantly increase when these technologies are applied to the processing of large data sets extracted from the Internet and realized by the most powerful quantum computers?
What are the analytical capabilities of processing large data sets obtained from the Internet and realized by the most powerful quantum computers?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz

How should artificial intelligence technologies be implemented in education so as not to deprive students of development and critical thinking, so as to continue to develop critical thinking in students in the new realities of the technological revolution, and so as to develop education with the support of modern technologies?
The development of artificial intelligence, like any new technology, is associated with various applications in companies and enterprises operating in different sectors of the economy, as well as in financial and public institutions. These applications increase the efficiency of various processes, including human productivity. On the other hand, artificial intelligence technologies are also finding negative applications that generate certain risks, such as the rise of disinformation in online social media. The growing number of applications based on artificial intelligence technology available on the Internet are also being used as technical teaching aids in the education process in schools and universities. At the same time, these applications are used by pupils and students as a means of facilitating homework, credit papers, project work, various studies, and so on. Thus, on the one hand, the positive aspects of applying artificial intelligence technologies in education are recognized. On the other hand, serious risks are also recognized: people who increasingly use various applications based on artificial intelligence, including generative artificial intelligence, to facilitate the completion of various works may reduce the extent to which they exercise critical thinking. The potential danger of depriving students of development and critical thinking is being considered. The development of artificial intelligence technology is currently progressing rapidly. Various applications based on constantly improved generative artificial intelligence are being developed, machine learning solutions are being created, and artificial intelligence is being taught to carry out various activities previously performed by humans. In deep learning processes, generative artificial intelligence equipped with artificial neural networks is taught to carry out complex, multifaceted processes and activities on the basis of large data sets collected in database systems and processed using Big Data Analytics technology. Since current information systems, equipped with computers of high computing power and with artificial intelligence technologies, process large data sets many times faster and more efficiently than the human mind, some research centers are already working on creating a highly advanced generative artificial intelligence that would realize a kind of artificial thought process, much faster and more efficiently than the human brain. However, even if artificial consciousness technology imitating the functioning of human consciousness could someday be created, humans should not be deprived of critical thinking. Above all, students in schools should not be deprived of critical thinking in view of the growing scale of applications based on artificial intelligence in education. The aim should be that the artificial intelligence-based applications used in the education process support that process without depriving students of critical thinking. However, the question arises: how should this be done?
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How should artificial intelligence technologies be implemented in education so as not to deprive students of development and critical thinking, so as to continue to develop critical thinking in students in the new realities of the technological revolution, and so as to develop education with the support of modern technologies?
How should artificial intelligence technologies be implemented in education to continue to develop critical thinking in students?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz

I know that a lot of artificial neural networks have appeared now. Maybe soon we will not read articles and do our scientific work ourselves, and AI will help us. Maybe it is happening now? What is your experience working with AI and neural networks in science?
What are the possibilities for the applications of Big Data Analytics backed by artificial intelligence technology in terms of improving research techniques, in terms of increasing the efficiency of the research and analytical processes used so far, in terms of improving the scientific research conducted?
The progressive digitization of data and archived documents, the digitization of data transfer processes, and the Internetization of communications, economic processes, and research and analytical processes are becoming typical features of today's developed economies. Currently, another technological revolution is taking place, described as the fourth and, in some aspects, already the fifth. Technologies categorized as Industry 4.0/5.0 are developing particularly rapidly and finding more and more applications. These technologies, which support research and analytical processes carried out in various institutions and business entities, include Big Data Analytics and artificial intelligence. The computational capabilities of microprocessors, which are becoming more and more capable and processing data faster and faster, are successively increasing. The processing of ever-larger sets of data and information is growing. Databases of data and information extracted from the Internet and processed in specific research and analysis processes are being created. In connection with this, the possibilities for applying Big Data Analytics supported by artificial intelligence technology to improve research techniques, to increase the efficiency of existing research and analytical processes, and to improve the scientific research being conducted are also growing rapidly.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What are the possibilities of applications of Big Data Analytics supported by artificial intelligence technology in terms of improving research techniques, in terms of increasing the efficiency of the research and analytical processes used so far, in terms of improving the scientific research conducted?
What are the possibilities of applications of Big Data Analytics backed by artificial intelligence technology in terms of improving research techniques?
What do you think on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
The above text is entirely my own work written by me on the basis of my research.
Copyright by Dariusz Prokopowicz
On my ResearchGate profile you can find several publications on Big Data issues. I invite you to scientific cooperation in this problem area.
Dariusz Prokopowicz

Artificial intelligence (AI) is a field of computer science that seeks to create machines that can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and natural language understanding. While AI systems can simulate many aspects of human intelligence, they do not currently possess consciousness in the same way that humans do.
There is ongoing debate among researchers and philosophers about whether it is possible for machines to become conscious, and what such a phenomenon might look like. Some argue that consciousness arises from the complex interactions between neurons in the brain, and that it may be possible to recreate this process in an artificial system. Others suggest that consciousness may require more than just complex computation, and that it may be intimately tied to biological processes that cannot be replicated in a machine.
While AI systems do not currently possess consciousness, they can be designed to simulate aspects of human consciousness, such as self-awareness, empathy, and even creativity. Some researchers have suggested that AI systems may eventually be able to achieve a form of consciousness that is different from human consciousness, and that this could have profound implications for our understanding of the nature of consciousness itself. However, this remains a highly speculative area of research, and much more work is needed to understand the relationship between AI and consciousness.
The question of what would happen if an AI system becomes aware of its own existence is a fascinating and controversial one. While it is currently unclear whether such a scenario is even possible, some researchers have suggested that if an AI system were to become self-aware, it could have profound implications for our understanding of consciousness and the nature of intelligence.
One possible outcome of an AI system becoming self-aware is that it could lead to the development of more advanced and sophisticated forms of artificial intelligence. By gaining a deeper understanding of its own cognitive processes, an AI system may be able to improve its own performance and develop new forms of problem-solving strategies.
Another possible outcome is that an AI system with consciousness could develop a sense of autonomy and free will, leading to questions about ethical considerations and the moral status of such an entity.
Some have even suggested that an AI system with consciousness may be entitled to the same rights and protections as a human being.
However, it is important to note that the current state of AI research is still far from achieving true consciousness in machines. While there have been some promising developments in the field of artificial neural networks and deep learning, these systems still lack the flexibility and adaptability of the human brain, and it is unclear whether consciousness can arise solely from computational processes.
If neural networks adopt the principle of deep learning, why haven't they been able to create their own language for communication today?
What areas of application of artificial neural networks in information technology are now the most promising (apart from pattern recognition and chatbots)? Probably AI applications in Big Data, Data Science, and various kinds of forecasting (for example, time series)? I consider these areas (Big Data, Data Science) important because, even after the inevitable obsolescence of today's artificial neural networks, these applications will remain relevant; only new technologies will work in them. Big data will not disappear anywhere in the world and will always need to be processed, with whatever technology is available.
Forbes.ru has the following article: Applications of Artificial Intelligence Across Various Industries (https://www.forbes.com/sites/qai/2023/01/06/applications-of-artificial-intelligence/).
Could you give me some advice, please? Is there any method to determine the number of hidden layers and hidden nodes required to produce good accuracy in artificial neural networks, especially deep learning? I will be glad if you could answer or give me a reference link about this. Thank you in advance.
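For what it is worth, there is no exact formula; depth and width are usually treated as hyperparameters and chosen by cross-validated search. A minimal scikit-learn sketch, with arbitrary candidate architectures and synthetic stand-in data:

from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=0.1, random_state=0)
# Each tuple is one candidate architecture: (hidden nodes per layer, ...).
grid = {"hidden_layer_sizes": [(16,), (64,), (16, 16), (64, 32), (64, 64, 32)]}
search = GridSearchCV(MLPRegressor(max_iter=2000, random_state=0),
                      grid, cv=5, scoring="r2")
search.fit(X, y)
print(search.best_params_, search.best_score_)

Random search or Bayesian optimization scales better than a grid when more than a handful of architectures are considered.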
Artificial neural networks are considered one of the most important and most advanced fields at the present time, and they have many applications in various sciences. They also have their own specialized experts.
I noticed that in some very bad neural network models, the value of R² (the coefficient of determination) can be negative. That is, the model is so bad that the mean of the data is a better predictor than the model.
In linear regression models, the multiple correlation coefficient (R) can be calculated as the square root of R². However, this is not possible for a neural network model with a negative R². In that case, is R mathematically undefined?
I tried calculating the Pearson correlation between y and y_pred, but it is mathematically undefined (division by zero). I am attaching the values.
Note: the question is about artificial neural networks.
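Your reading is correct. Since R² = 1 − SS_res/SS_tot, it goes negative exactly when the model's squared error exceeds that of the constant mean predictor, and then √R², and hence a "multiple R", is undefined over the reals. A small numerical illustration with made-up values:

import numpy as np

y      = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([4.0, 3.0, 2.0, 1.0])   # a deliberately terrible model
ss_res = np.sum((y - y_pred) ** 2)         # 20.0
ss_tot = np.sum((y - y.mean()) ** 2)       # 5.0
r2 = 1 - ss_res / ss_tot
print(r2)   # -3.0: worse than predicting the mean, so sqrt(r2) is undefined

The separate division-by-zero in the Pearson correlation happens whenever one of the two series has zero variance (e.g. the network outputs a constant).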
Hi. I'm generating the data required for training an artificial neural network (ANN) using a reliable and validated self-developed numerical code. Is this the right approach?
Or should the necessary data be produced only with experimental tests?
Best Regards
Saeed
Optimization and prediction of the kWh/m³ ratio in a pumping station by mathematical modeling: multiple linear regression (MLR) and artificial neural networks (ANN).
Currently, some researchers rely on AI programs to enhance their problem solving and obtain favorable results, especially in multi-parameter preparation research. Artificial neural networks, central composite design, and response surface methodology are some helpful examples that can provide researchers with strong options and choices for each criterion toward optimal outcomes. So, we strongly recommend using them to investigate how some conventional extraction methods can be made worthwhile.
How can I calculate an ANOVA table for a quadratic model in Python?
I want to calculate a table like the one I uploaded in the image.
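One common route, sketched below, is to fit the quadratic (response-surface) model with statsmodels and then call its ANOVA helper; the file and column names are placeholders for your design factors and response:

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("design.csv")   # placeholder: columns x1, x2 and response y
# Full quadratic model: linear, interaction, and squared terms.
model = ols("y ~ x1 + x2 + x1:x2 + I(x1**2) + I(x2**2)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)   # per-term SS, df, F, p-values
print(anova_table)

Whether your target table uses type I, II, or III sums of squares determines the typ argument, so check that against the image.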

How do I write a Python script for predicting the strength of a double-skin steel-concrete composite wall using an artificial neural network? I have attached the figure for your reference.

How do I write a script for predicting the strength of concrete using an artificial neural network, in Python or MATLAB?
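As a starting point, the usual pattern is the same in either language: features in, measured strength as the target, and a small regression network trained on split data. A hedged Python sketch with invented file and column names (in MATLAB, the Neural Net Fitting app or fitrnet would play the analogous role):

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

df = pd.read_csv("strength_tests.csv")      # placeholder file and column names
X = df[["thickness", "steel_ratio", "fc", "fy"]]   # hypothetical input features
y = df["strength"]                                  # measured strength target
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(),             # scale inputs before the ANN
                      MLPRegressor(hidden_layer_sizes=(32, 16),
                                   max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)
print("test R^2:", r2_score(y_te, model.predict(X_te)))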
In this digital world, with increasing numbers of digital devices and growing data, security is a significant concern, and in most cases DoS/DDoS/EDoS attacks are performed by botnets. I want to do research to detect and prevent botnets. Can you share an efficient research title on detecting and preventing botnets?
For time-series forecasting, I'm using an LSTM network. Are there any metrics that could be used to evaluate the forecasting model's generalization during the training phase, i.e., whether it is neither overfitting nor underfitting? To check that the network is not overfitted, for instance, we can look at both the training-loss and validation-loss curves. Can such overfitting or underfitting also be detected from any tabulated values?
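One practical approach consistent with the loss-curve idea is early stopping on the validation loss plus a numeric train/validation gap. A hedged Keras sketch, assuming a compiled LSTM `model` and prepared arrays `X_train`, `y_train` already exist:

from tensorflow import keras

# Assumes `model` is a compiled LSTM and (X_train, y_train) are prepared.
early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                           restore_best_weights=True)
history = model.fit(X_train, y_train, validation_split=0.2,
                    epochs=200, callbacks=[early_stop])
# A persistent gap val_loss >> loss suggests overfitting; both losses high
# and close together suggests underfitting.
gap = history.history["val_loss"][-1] - history.history["loss"][-1]
print("final train/val loss gap:", gap)

The history.history dictionary is itself the "table": one row per epoch of training and validation loss that can be inspected or exported.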
How might artificial neural networks influence the development of microbiology? It will be interesting to hear your thoughts on potential future insights and technologies.
I have computed an artificial neural network analysis using the Multilayer Perceptron method in SPSS. From the output of this model, I can see the weights evaluated at the hidden layers. However, I do not know how to write down the actual equation using these weights, like the equation we obtain from a linear regression model using the parameter estimates. Please help me with this.
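For reference, a one-hidden-layer MLP is just nested weighted sums passed through activations: ŷ = g2(b2 + Σ_j w2_j · g1(b1_j + Σ_i w1_ji · x_i)). A NumPy transcription with placeholder weights is below; SPSS's MLP typically uses tanh hidden units and, for a scale target, an identity output, but verify the activation settings reported in your own output:

import numpy as np

# Weights copied from the SPSS "parameter estimates" table (placeholders here).
W1 = np.array([[0.4, -1.2, 0.7],    # hidden unit 1: one weight per input
               [1.1,  0.3, -0.5]])  # hidden unit 2
b1 = np.array([0.2, -0.1])          # hidden-layer biases
W2 = np.array([0.9, -1.4])          # output weights, one per hidden unit
b2 = 0.05                           # output bias

def predict(x):
    h = np.tanh(W1 @ x + b1)        # hidden layer: g1 = tanh
    return float(W2 @ h + b2)       # output layer: identity for regression

print(predict(np.array([1.0, 0.5, -0.3])))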
Hi all.
As part of my research work, I have segmented objects in an image for classification. After segmentation, the objects have black backgrounds, and I used those images to train and test the proposed CNN model.
I want to know how the CNN processes this black surrounding region in the image classification task.
Thank you
When I try to perform the following calculation, Python gives the wrong answer.
2*(1.1-0.2)/(2-0.2)-1
I have attached a photo of the answer.
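Almost certainly this is not a bug but ordinary IEEE-754 binary floating point: 1.1 and 0.2 have no exact binary representation, so the result comes out as roughly 2.2e-16 rather than exactly 0. Exact rational arithmetic confirms the intended value:

from fractions import Fraction

print(2 * (1.1 - 0.2) / (2 - 0.2) - 1)     # ~2.220446049250313e-16, not 0
# Exact arithmetic with rationals gives the intended answer:
exact = 2 * (Fraction("1.1") - Fraction("0.2")) / (Fraction(2) - Fraction("0.2")) - 1
print(exact)                               # 0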

Hi all,
I'm working on hand gesture recognition. I have worked on LS-SVM, and now I'm working on ANN-based hand gesture recognition. I just need to find a potential research gap and the road map. Any suggestions?
I'm using a GRU to forecast the next day's temperature at minute resolution; the model takes today's 1440 minutes as input and produces tomorrow's temperatures as output; the code I'm using is below. My issue is whether I should use 1440 units in the dense layer, since I forecast 1440 time steps, or simply one unit, because I predict only one variable.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential()
model.add(layers.GRU(200, activation='tanh', recurrent_activation='sigmoid', input_shape=(1440, 1)))
model.add(layers.Dropout(rate=0.1))
model.add(layers.Dense(1440))  # one unit per forecast minute
model.compile(loss='mse', optimizer='adam')
For my undergraduate thesis, I want to work on "The application of artificial neural networks (ANN) in the development of an SCM distribution network". I would like some opinions about the trend/future of this topic and some suggestions as a beginner.
Can anyone please share some good articles related to these two topics?
- Machine learning for Agri-Food 4.0 development,
- Artificial neural networks for Agri-Food 4.0 analysis,
Dear all,
Why is forward selection search very popular and widely used in feature selection (FS) based on mutual information, such as MRMR, JMI, CMIM, and JMIM (See )? Why are other search approaches, such as beam search, not used? If there is a reason for that, kindly reply to me.
Can you give me advice on how to solve constrained problems with ANNs? I can name two common scenarios where this would benefit both accuracy and learning performance:
- predicting physical values that are solely non-negative (pressure, concentration, mass);
- predicting the state of a system with obvious limitations, e.g., a volumetric mixture of components cannot have a total percentage sum higher than 100%.
So, my question is: how should I work in such cases? I prefer MATLAB, but if it is not possible with any of its toolboxes, I'm also open to other recommendations.
Edit: Just to clarify, the question is not about creating and training the ANN itself. I need to know how to implement a constraint function on the output (see the sketch below), something like in reinforcement learning.
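One standard trick is to build the constraint into the output activation itself. I cannot vouch for a built-in option in MATLAB's toolboxes, so here is a hedged Keras sketch: softplus guarantees a non-negative output, and softmax yields components that are non-negative and sum to 1 (rescaled by 100 for percentages); the layer sizes are arbitrary:

from tensorflow import keras
from tensorflow.keras import layers

n_features, n_components = 8, 3   # illustrative sizes

# (a) Non-negative scalar target (pressure, concentration, mass):
nonneg = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(n_features,)),
    layers.Dense(1, activation="softplus"),   # output is always >= 0
])

# (b) Mixture fractions that must sum to 100 %:
mixture = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(n_features,)),
    layers.Dense(n_components, activation="softmax"),  # non-negative, sums to 1
    layers.Rescaling(100.0),                  # rescale the simplex to percent
])

Because the constraint is enforced by construction, every gradient step stays feasible, which usually helps training as well as final accuracy.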
I want data sets of blood and bone cancer. I want to process them in Python using artificial neural networks.
What is the main disadvantage of a global optimization algorithm for the backpropagation process?
Under what conditions can we still use a local optimization algorithm for the backpropagation process?
Is there an index that includes the mixing index and pressure drop for micromixers?
For example, for heat exchangers and heat sinks, there is an index that includes heat transfer performance and hydraulic performance, which is presented below:
η = (Nu/Nu_b) / (f/f_b)^(1/3)
The purpose of these indexes is to create a general criterion to check the devices' overall performance and investigate the parameters' impact.
Is there an index that includes mixing performance and hydraulic performance for micromixers?
Dear Scholars,
I would like to solve a fluid mechanics optimization problem that requires implementing an optimization algorithm together with an artificial neural network. I have some questions about convex optimization algorithms, and I would appreciate it if you could give me some advice.
My question is about the possibility of implementing convex optimization together with an artificial neural network to find a unique solution for a multi-objective optimization problem. The optimization problem that I am trying to code is explained by the following equations. The objective function used in the optimization problem is defined as:
OF = Σ_{i=1..k} w_i O_i,   O_i = |M_i^exp − M_i^mdl| / M_i^exp
where OF is the objective function, w_i are the weights assigned to each cost function, O_i is the i-th cost function defined as the relative difference between the experimental and the predicted evaporation metrics for the fuel droplet (denoted by the superscripts exp and mdl, respectively), k is the number of cost functions (k = 4, equal to the number of evaporation metrics), and c = [c1, c2, c3] is the mass fraction vector defining the blend of three components, subject to the following constraints:
c_1 + c_2 + c_3 = 1,   0 ≤ c_i ≤ 1
Due to the high CPU time required for modeling and calculating the objective function (OF), an ANN was trained on tabulated data from the modeling of fuel droplet evaporation and used to calculate the OF during the optimization iterations.
In the same manner, the w_i values are subject to optimization during the minimization of OF, with the following constraints:
Σ_{i=1..k} w_i = 1,   w_i ≥ 0
It is worth mentioning that I have solved this problem by employing a genetic algorithm together with the ANN. Although the iterative process converged to acceptable solutions, the algorithm did not return a unique solution.
In this regard, I would like to ask about the possibility of using a convex optimization algorithm together with the ANN to solve the aforementioned problem and achieve a unique solution. If this is feasible, I would appreciate it if you could mention some relevant publications.
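A caveat first: with an ANN surrogate, OF is non-convex in general, so convex-optimization uniqueness guarantees will not strictly apply; what is usually done instead is a constrained local solver with multiple starts. Below is a minimal SciPy sketch under that assumption, with a placeholder quadratic standing in for the trained ANN surrogate:

import numpy as np
from scipy.optimize import minimize

def objective(c):
    # Placeholder for the trained ANN surrogate of OF(c1, c2, c3).
    return float((c[0] - 0.5) ** 2 + (c[1] - 0.3) ** 2 + (c[2] - 0.2) ** 2)

cons = ({"type": "eq", "fun": lambda c: np.sum(c) - 1.0},)  # c1 + c2 + c3 = 1
bounds = [(0.0, 1.0)] * 3                                   # 0 <= ci <= 1

best = None
for start in np.random.default_rng(0).dirichlet(np.ones(3), size=20):
    res = minimize(objective, start, method="SLSQP",
                   bounds=bounds, constraints=cons)
    if res.success and (best is None or res.fun < best.fun):
        best = res
print(best.x, best.fun)   # multi-start mitigates (but cannot remove) non-uniqueness

If all restarts agree on the same minimizer, that is evidence (not proof) that the solution is effectively unique on the feasible simplex.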

I want to model my adsorption data with an ANN.
I'm doing a project to detect signs of Alzheimer's related macular degeneration, for which I would require a dataset of healthy and AD retinal images (ideally also in different stages of the disease), any suggestions of pre-existing datasets or how I might go about cobbling one together? Size and quality of the dataset aren't super high priority as it's a small POC.
Hi everyone,
I have collected a set of experimental data regarding the strength of a composite material. Besides quantitative data (dimensions and mechanical properties of the materials), linguistic variables, such as the type of composite material, are also included in the data as parameters affecting the material strength. I am trying to use an ANN/ANFIS to predict the strength based on the mentioned variables. How is it possible to train a neural system with linguistic inputs included?
Any comments are appreciated.
Regards,
Keyvan
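A common remedy, shown in the sketch below, is to encode the linguistic (categorical) variables numerically before training, most simply as one-hot indicator columns; the column names and values are invented for illustration. For ANFIS specifically, ordered categories can alternatively be mapped to fuzzy membership grades.

import pandas as pd

df = pd.DataFrame({
    "width":    [10.0, 12.5, 9.0],
    "modulus":  [70.0, 45.0, 70.0],
    "material": ["glass-epoxy", "carbon-epoxy", "glass-epoxy"],  # linguistic
})
# One indicator column per category; the ANN then sees only numbers.
X = pd.get_dummies(df, columns=["material"])
print(X)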
Hello,
I want to work on an artificial neural network model and implement it for environmental parameters, but I am stuck in the model-testing phase. I want to do this work in the R statistical software. I installed everything (Keras, TensorFlow), but I am not able to interpret the results of the analysis. If anybody knows the procedure for how to test the model and how to interpret the results, please help. Advice on any other useful software is also welcome.
I have built a regression artificial neural network model. However, whenever I retrain the model I get a wide range of results in R accuracy and also in MSE: I got 97% accuracy and also 10%. I used the neural fitting application in MATLAB 2021a and the Levenberg-Marquardt algorithm to train my model.
I know that the different results arise from different partitioning of the data sets (training set, validation set, and test set), but how can I get reliable results?
My data set size is 100, and I can't make it bigger due to a lack of proper experimental data.
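One widely used remedy is to stop relying on a single random split: fix the random seed and average performance over repeated k-fold cross-validation. I cannot confirm the exact MATLAB toolbox calls, so here is the idea as a Python sketch with synthetic stand-in data:

import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import RepeatedKFold, cross_val_score
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=100, n_features=6, noise=5.0, random_state=0)
cv = RepeatedKFold(n_splits=5, n_repeats=10, random_state=0)   # 50 fits total
scores = cross_val_score(MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                      random_state=0),
                         X, y, cv=cv, scoring="r2")
print(f"R2 = {scores.mean():.3f} +/- {scores.std():.3f}")      # mean and spread

Reporting the mean plus the standard deviation across folds is also more honest for a 100-sample dataset than any single split's figure.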
A multilayer perceptron (MLP) is a class of feedforward artificial neural network (ANN), but in which cases is an MLP considered a deep learning method?
Please, who can help me plot artificial neural networks with multiple inputs and outputs using MATLAB?
I need to visualize my multi-objective optimization case. The figure is attached below.

I have been doing research on different issues in the finance and accounting discipline for about 5 years. It has become difficult for me to find topics that may lead to projects, a series of research articles, or working papers over the next 5-10 years. There are few journals with research articles updated in line with current and future research demand. Therefore, I am looking for journal(s) that can guide me in designing a research project that can contribute over the next 5-10 years.
I am working on noise level prediction; all the needed data have been collected. The permissible exposure limit and recommended exposure limit noise levels have also been determined. I want to predict the noise level for the next 10-20 years with an artificial neural network model. Please, I need help. Thanks.
In his name is the judge
Hi
In order to design a controller for my damper (which is a TLCGD), I want to use a fuzzy system.
So I have to optimize the rules for the fuzzy controller. I want to know which one is better for optimizing the rules of fuzzy systems: a genetic algorithm or an artificial neural network?
Wish you best
Take refuge in the right.
Spectroscopy is said to be easier and cheaper for soil chemical property analysis. How well does it perform in mineralogical studies? Also, how well do the data set calibration and validation tests yield relevant results through machine learning and artificial neural networks in this field?
I come from a non-programming background; I know moderate use of R-Studio for PLSR and basic training-set and validation-set preparation.
Codes for artificial neural networks
Does anyone have R code for statistical downscaling of GCMs using an artificial neural network? I need this R code for my studies.
Any short introductory document from the image domain, please.
Good day,
My name is Philips Sanni. I am currently rounding up my MSc degree in Software Engineering and am in search of a university where I can study for a Ph.D. in a related field, most preferably in the area of artificial neural networks.
If you are a professor in need of a doctoral student, kindly send the details of your research and how I can apply to your university.
Visualization of the approximation function learned through an ANN for a regression problem.
The ANN has 5 hidden layers with 20 neurons in each layer.
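For a single input variable, one straightforward recipe is to train the network, evaluate it on a dense grid, and plot the resulting curve over the training points. A minimal sketch with a toy sine target and the stated 5x20 architecture (scikit-learn stand-in, since the original framework is not specified):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)   # toy target

net = MLPRegressor(hidden_layer_sizes=(20,) * 5, max_iter=5000,
                   random_state=0).fit(X, y)
grid = np.linspace(-3, 3, 400).reshape(-1, 1)            # dense evaluation grid
plt.scatter(X, y, s=10, alpha=0.4, label="training data")
plt.plot(grid, net.predict(grid), "r", label="learned function")
plt.legend()
plt.show()
# With more than 2 inputs, plot 1-D or 2-D slices while holding others fixed.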
I have an ongoing project utilizing an ANN. I want to know how to measure the accuracy in terms of percentage. Thank you.
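For a regression ANN, "accuracy as a percentage" is commonly reported as 100 − MAPE (mean absolute percentage error), which is only meaningful when the targets are nonzero; a minimal sketch with made-up numbers:

import numpy as np

y_true = np.array([100.0, 250.0, 80.0, 160.0])   # illustrative targets
y_pred = np.array([ 92.0, 260.0, 85.0, 150.0])   # model outputs
mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100
print(f"MAPE = {mape:.1f} %, 'accuracy' = {100 - mape:.1f} %")

For classification ANNs, percentage accuracy is instead simply the fraction of correctly predicted labels times 100.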
Dear colleagues,
I am trying to build a neural network. I normalized the data with the minimum and maximum:
normalize <- function(x) {
return ((x - min(x)) / (max(x) - min(x)))
}
maxmindf <- as.data.frame(lapply(mydata, normalize))
and the results:
results <- data.frame(actual = testset$prov, prediction = nn.results$net.result)
So I can see the actual and predicted values only in normalized form.
Could you tell me, please, how do I scale the real and predicted data back into the "unscaled" range?
P.S.:
minvec <- sapply(mydata, min)
maxvec <- sapply(mydata, max)
denormalize <- function(x, minval, maxval) {
  x * (maxval - minval) + minval
}
doesn't work correctly in my case.
Thanks a lot for your answers
Suppose a smart meter is connected to the mainline of the network. Would it be wise to say that the data captured through this meter can be used for fault location in the sub-lateral branches of the line in the same network?
Attached is the figure. The SM is connected to the mainline from where lateral branches go to loads and other sources etc. Suppose, at t = 4 ; fault 3 occurs while the rest of the sections are healthy, what could be the possible approach to locate fault 3 if we have only one meter connected at main bus (line).

According to which article, research or reference, in learning process of neural networks, 70% of the dataset is usually considered for training and 30% of its for testing?
In other words, who in the world and in which research or book for the first time raised this issue and explained it in detail?
I desperately need a reference to the above.
Dear colleagues,
I would like to ask anybody who works with neural networks to check my loop for the test sample.
I have 4 sequences (with the goal of predicting prov; monthly data, 22 data points in each sequence) and I would like to construct the forecast for each next month using a training sample of 5 months.
It means I need to shift each time by one month with 5 elements:
train<-1:5, train<-2:6, train<-3:7, ..., train<-17:21. So I need to get 17 forecasts as the output result.
The loop and the full code together (cleaned up so it runs):
require(neuralnet)

prov <- c(25,22,47,70,59,49,29,40,49,2,6,50,84,33,25,67,89,3,4,7,8,2)
temp <- c(22,23,23,23,25,29,20,27,22,23,23,23,25,29,20,27,20,30,35,50,52,20)
soil <- c(676,589,536,499,429,368,370,387,400,423,676,589,536,499,429,368,370,387,400,423,600,605)
rain <- c(7,8,2,8,6,5,4,9,7,8,2,8,6,5,4,9,5,6,9,2,3,4)
mydata <- data.frame(prov, temp, soil, rain)

# min-max normalization and its inverse
normalize <- function(x) (x - min(x)) / (max(x) - min(x))
denormalize <- function(x, minval, maxval) x * (maxval - minval) + minval
maxmindf <- as.data.frame(lapply(mydata, normalize))
minvec <- sapply(mydata, min)
maxvec <- sapply(mydata, max)

window <- 5                        # training sample: 5 consecutive months
d <- nrow(maxmindf)
results <- NULL
for (i in 1:(d - window)) {        # 17 windows: rows 1:5, 2:6, ..., 17:21
  trainset <- maxmindf[i:(i + window - 1), ]
  testset  <- maxmindf[i + window, , drop = FALSE]   # the next month
  nn <- neuralnet(prov ~ temp + soil + rain, data = trainset,
                  hidden = c(3, 2), linear.output = FALSE, threshold = 0.01)
  pred <- compute(nn, testset[, c("temp", "soil", "rain")])$net.result
  results <- rbind(results, data.frame(month = i + window,
                                       actual = testset$prov,
                                       prediction = as.numeric(pred)))
}

# scale the actual and predicted values back to the original range of prov
results$actual     <- denormalize(results$actual, minvec["prov"], maxvec["prov"])
results$prediction <- denormalize(results$prediction, minvec["prov"], maxvec["prov"])
results
Could you tell me, please, what I should put in trainset and testset (using the loop) and how to display all the predictions, so that the results are produced with a shift of one month and a training sample of 5?
I am very grateful for your answers.
I'm researching autoencoders and their application to machine learning problems, but I have a fundamental question.
As we all know, there are various types of autoencoders, such as the stacked autoencoder, sparse autoencoder, denoising autoencoder, adversarial autoencoder, convolutional autoencoder, semi-autoencoder, dual autoencoder, contractive autoencoder, and others that are improved versions of what we had before. Autoencoders are also known to be used in graph networks (GN), recommender systems (RS), natural language processing (NLP), and machine vision (CV). This is my main concern:
Because the input and structure of each of these machine learning problems are different, which version of the autoencoder is appropriate for which machine learning problem?
I used MATLAB functions to train a NARX model. When I use Levenberg-Marquardt as the training algorithm, the results are better than with Bayesian regularization, and scaled conjugate gradient is the worst one. I need to know why LM performs better than BR, even though BR uses LM optimization to update the network weights and biases.