Science topic
Artificial Neural Networks - Science topic
Explore the latest questions and answers in Artificial Neural Networks, and find Artificial Neural Networks experts.
Questions related to Artificial Neural Networks
Any recommendations for user-friendly platforms suitable for research purposes would be greatly appreciated!
How extensive should the regulation of the development and application of artificial intelligence technology be to ensure that it is safe, sustainable and ethical, while not limiting the scale of innovation and entrepreneurship? Does the development of technologies such as artificial intelligence, biotechnology and other Industry 4.0/5.0 technologies bring new opportunities, but also ethical challenges that require reflection and regulation? How should new technologies, including artificial intelligence, be developed to ensure that they are safe, sustainable and ethical, and that they generate far more benefits and new development opportunities than negative effects and potential risks?
The research I am conducting shows that the development of technologies such as artificial intelligence (AI) and biotechnology is a double-edged sword: on the one hand, it offers enormous possibilities; on the other, it brings serious ethical dilemmas. AI, which is becoming increasingly advanced, raises questions about responsibility for its decisions and about potential algorithmic discrimination. Biotechnology, in turn, raises concerns about safety and social inequality due to the possibility of genetic modification. It is crucial to engage in a broad dialogue on the ethical aspects of technological development, involving scientists, ethicists, lawyers, politicians and society at large. This dialogue should lead to a legal and regulatory framework that takes into account the ethical implications of new technologies and protects human rights. Research plays an important role here by analysing the impact of technology on society and developing recommendations for regulation.
The results of many studies confirm the thesis that the development of artificial intelligence carries enormous potential, but also challenges. It is crucial to find the right level of regulation to ensure the safe and ethical development of this technology without hindering innovation and entrepreneurship. Regulations should be based on scientific evidence, take into account the diversity of AI applications and be flexible to keep up with technological progress. It is necessary to create a legal and ethical framework that will regulate the development and application of AI, taking into account responsibility, transparency, security, ethics and privacy. The process of creating regulations should involve scientists, engineers, ethicists, lawyers, politicians and civil society. Scientific research plays an important role in identifying problems and developing effective regulatory strategies.
I have described the key issues of the opportunities and threats to the development of artificial intelligence technologies in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
And what is your opinion on this topic?
What is your opinion on this issue?
Please reply,
I invite everyone to the discussion,
Thank you very much,
Best regards,
I invite you to scientific cooperation,
Dariusz Prokopowicz

I have been working with artificial neural networks since 1986. It seemed to me that we were striving to reach the level of insects, and I do not quite understand some modern statements on this topic. Let us recall the known facts: 1. If we consider the human brain as an analogue of an artificial neural network, then it cannot be encoded in the genome: at least one and a half million genomes would be needed. 2. The brain is such a large neural network that it cannot be trained to its learning limit in one lifetime, or even in many. Hence the question: is there a developer of so-called artificial intelligence who seriously, and not just for business, believes that his creation will soon surpass man, and is ready to swear to this on the Bible, the Koran and the Torah?
As is well known, this year's Nobel Prize in Physics went to two AI researchers for their discoveries that enable machine learning with artificial neural networks. This raises the question to what extent this topic has anything to do with physics. Rather, the impression arises that the Nobel Prize in Physics was misused for a topic that would just as well fit into mathematics or biology. I therefore propose the creation of a new Nobel Prize for informatics. This could be endowed with Bitcoin.
How to build a sustainable data center based on Big Data Analytics, AI, BI and other Industry 4.0/5.0 technologies and powered by renewable and carbon-free energy sources?
If a Big Data Analytics data center is equipped with advanced generative artificial intelligence technology and is powered by renewable and carbon-free energy sources, can it be referred to as sustainable, pro-climate, pro-environment, green, etc.?
Advanced analytical systems, including complex forecasting models that enable multi-criteria, highly sophisticated forecasts of multi-faceted climatic, natural, social, economic and other processes based on the processing of big data, are increasingly built on new Industry 4.0/5.0 technologies, including Big Data Analytics, machine learning, deep learning and generative artificial intelligence. Generative artificial intelligence enables the application of complex data processing algorithms according to precisely defined assumptions and human-defined factors. Computerized, integrated business intelligence information systems allow real-time analysis of continuously updated data and the generation of reports and expert opinions in accordance with defined formulas for such studies. Digital twin technology allows computers to build simulations of complex, multi-faceted forecast processes according to defined scenarios of how these processes may unfold in the future. In this regard, it is also important to determine the probability that each of several defined and characterized scenarios of developments, processes or phenomena will occur. Business Intelligence analytics should therefore make it possible to precisely determine the probability of the occurrence of a given phenomenon, process or effect, including those classified as opportunities and threats to the future development of the situation. In addition, Business Intelligence analytics should enable precise quantitative estimation of the scale of the positive and negative effects of certain processes, as well as of the factors acting on these processes and the determinants conditioning the realization of certain scenarios of how the situation develops.
Cloud computing makes it possible, on the one hand, to update the database with new data and information from various institutions, think tanks, research institutes, companies and enterprises operating within a selected sector or industry of the economy and, on the other hand, to allow simultaneous use of the database so updated by many beneficiaries and business entities and/or, for example, by many Internet users if the database were made available on the Internet. If Internet of Things technology is applied, the database could be accessed from various types of devices equipped with Internet access. Blockchain technology can increase the cybersecurity of the transfer of data to the Big Data database, both when updating the collected data and when the analytical system so built is used by external entities. Machine learning and/or deep learning technologies combined with artificial neural networks make it possible to train an AI-based system to perform multi-criteria analysis and build multi-criteria simulation models in the way a human would. For such complex analytical systems processing large amounts of data and information to work efficiently, it is a good solution to use state-of-the-art quantum computers characterized by the high computing power needed to process huge amounts of data in a short time. A multi-criteria big data analysis center built in this way can occupy quite a large floor space housing many servers. Due to the necessary cooling and ventilation systems and for security reasons, this kind of server room can be built underground, while, given the large amounts of electricity such a big data analytics center absorbs, it is a good solution to build a power plant nearby to supply it.
If this kind of data analytics center is to be described as sustainable and in line with the trends of sustainable development and the green transformation of the economy, the power plant supplying it should generate electricity from renewable, emission-free sources, e.g. photovoltaic panels, wind turbines and/or other renewable and emission-free energy sources. When a data analytics center that processes multi-criteria Big Data is powered by renewable and emission-free energy sources, it can then be described as sustainable, pro-climate, pro-environment, green, etc. Likewise, when such a Big Data Analytics center is equipped with advanced generative artificial intelligence technology and powered by renewable and emission-free energy sources, the AI technology used can also be described as sustainable, pro-climate, pro-environment, green, etc. Moreover, the Big Data Analytics center can be used to conduct multi-criteria analysis and build multi-faceted simulations of complex climatic, natural, economic and social processes, with the aim of, for example:
developing scenarios of the future development of processes observed so far; creating simulations of the future continuation of diagnosed historical trends; developing variant scenarios of how the situation may develop depending on the occurrence of certain determinants; determining the probability of occurrence of said determinants; and estimating the scale of influence of external factors, the potential materialization of certain categories of risk, the possibility of certain opportunities and threats occurring, and the probability of the various scenario variants in which the potential continuation of the diagnosed trends was characterized for the processes under study, including processes of sustainable development, the green transformation of the economy and the implementation of the sustainable development goals. Accordingly, a data analytics center built in this way can, on the one hand, be described as sustainable, since it is powered by renewable and emission-free energy sources. On the other hand, it can also help to build simulations of complex multi-criteria processes, including the continuation of trends in the determinants and co-creating factors that influence potential sustainable development, e.g. sustainable economic development.
Therefore, a data analytics center built in this way can be helpful, for example, in developing a complex, multi-factor simulation of progressive global warming in subsequent years, of the future negative effects of the deepening scale of climate change and of the negative impact of these processes on the economy, and also in forecasting and simulating the future process of the pro-environmental and pro-climate transformation of the classic, growth-oriented, brown, linear economy of excess into a sustainable, green, zero-carbon, closed-loop economy. A data analytics center built in this way will thus be able to be described as sustainable because it is supplied from renewable, zero-carbon energy sources, but it will also be helpful in developing simulations of future processes of the green transformation of the economy carried out according to certain assumptions, defined determinants and the estimated probability of occurrence of certain impact factors and conditions, as well as in estimating costs, gains and losses, opportunities and threats, identifying risk factors and particular categories of risk, and estimating the feasibility of the defined scenarios of the green transformation of the economy planned for implementation. In this way, a sustainable data analytics center can also be of great help in the smooth and rapid implementation of the green transformation of the economy.
I have described the key issues concerning the green transformation of the economy in the article below:
IMPLEMENTATION OF THE PRINCIPLES OF SUSTAINABLE ECONOMY DEVELOPMENT AS A KEY ELEMENT OF THE PRO-ECOLOGICAL TRANSFORMATION OF THE ECONOMY TOWARDS GREEN ECONOMY AND CIRCULAR ECONOMY
I have described the applications of Big Data technologies in sentiment analysis, business analytics and risk management in my co-authored article:
APPLICATION OF DATA BASE SYSTEMS BIG DATA AND BUSINESS INTELLIGENCE SOFTWARE IN INTEGRATED RISK MANAGEMENT IN ORGANIZATION
I have described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
If a Big Data Analytics data center is equipped with advanced generative artificial intelligence technology and is powered by renewable and carbon-free energy sources, can it be described as sustainable, pro-climate, pro-environment, green, etc.?
How to build a sustainable data center based on Big Data Analytics, AI, BI and other Industry 4.0/5.0 technologies and powered by renewable and carbon-free energy sources?
How to build a sustainable data center based on Big Data Analytics, AI, BI and other Industry 4.0/5.0 and RES technologies?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz

Is anyone working on Artificial Neural Networks (ANN) in research? I am willing to learn about it; is there any free platform, workshop or course on the subject?
I am also approaching this from a chemistry point of view.
Thanks and Regards
How do we evaluate the importance of individual features for a specific property using ML algorithms (say, gradient boosting regression, GBR), and construct an optimal feature set for our problem?
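A common approach is to train a gradient-boosted model and rank features by impurity-based or permutation importance, then keep the top-ranked subset. The sketch below uses scikit-learn on synthetic data; the dataset, the feature count and the choice of keeping five features are illustrative assumptions, not a prescription:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real feature matrix (10 features, 5 informative)
X, y = make_regression(n_samples=500, n_features=10, n_informative=5,
                       noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

gbr = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Impurity-based importances: fast, but can be biased toward features
# with many split opportunities
print(gbr.feature_importances_)

# Permutation importances on held-out data are usually more reliable
perm = permutation_importance(gbr, X_test, y_test, n_repeats=10,
                              random_state=0)
ranked = np.argsort(perm.importances_mean)[::-1]
top_k = ranked[:5]  # keep the k most important features as the reduced set
print("Top features:", top_k)
```

In practice one would re-fit the model on `X[:, top_k]` and check that held-out performance does not degrade before adopting the reduced feature set.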
image taken from: 10.1038/s41467-018-05761-w
Forecasting a single variable, say inflation, using an ANN.
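As a minimal sketch of the idea, assuming a small feedforward network and a synthetic stand-in for the inflation series (real data would replace it), one can frame the univariate series as a supervised problem on lagged values and forecast recursively:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical monthly inflation series; replace with real data
rng = np.random.default_rng(0)
series = 2.0 + 0.5 * np.sin(np.arange(120) / 6.0) + rng.normal(0, 0.1, 120)

# Predict y_t from the previous `lags` observations
lags = 12
X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]

# Train on all but the last 12 months; forecast the held-out year
X_train, y_train = X[:-12], y[:-12]
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                     random_state=0).fit(X_train, y_train)

window = list(series[-24:-12])   # last known lags before the test year
forecasts = []
for _ in range(12):              # recursive one-step-ahead forecasts
    pred = model.predict(np.array(window[-lags:]).reshape(1, -1))[0]
    forecasts.append(pred)
    window.append(pred)          # feed the forecast back in as input
print(forecasts)
```

Recursive forecasting compounds errors, so for longer horizons a direct multi-step model, or comparison against a simple benchmark such as an AR model, is worth considering.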
Assuming that in the future - as a result of the rapid technological progress that is currently taking place and the competition of leading technology companies developing AI technologies - general artificial intelligence (AGI) will be created, will it mainly involve new opportunities or rather new threats for humanity? What is your opinion on this issue?
Perhaps in the future, as a result of the rapid technological advances currently taking place and the rivalry among leading technology companies developing AI technologies, a general artificial intelligence (AGI) will emerge. There is ongoing, unresolved debate about the new opportunities and threats that may arise from the construction and development of general artificial intelligence. The rapid progress currently taking place in generative artificial intelligence, combined with the already intense competition among the technology companies developing these technologies, may lead to the emergence of a super intelligence: a strong, general artificial intelligence capable of self-development, self-improvement and perhaps also autonomy and independence from humans. Such a scenario could result in this kind of strong, super AI or general artificial intelligence slipping out of human control. Perhaps such a system would, through self-improvement, be able to reach a state that could be called artificial consciousness. On the one hand, new possibilities can be associated with the emergence of this kind of strong, super, general artificial intelligence, including perhaps new ways to solve the key problems of the development of human civilization. On the other hand, one should not forget the potential dangers if such an AI, in its autonomous development and self-improvement independent of man, were to escape human control entirely. Whether this will mainly involve new opportunities or rather new dangers for mankind will probably be determined by how humans direct the development of AI technology while they still control it.
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Assuming that in the future - as a result of the rapid technological progress that is currently taking place and the competition of leading technology companies developing AI technologies - general artificial intelligence (AGI) will be created, will it mainly involve new opportunities or rather new threats for humanity? What is your opinion on this issue?
If general artificial intelligence (AGI) is created, will it involve mainly new opportunities or rather new threats for humanity?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz

Deep learning is a branch of machine learning that uses artificial neural networks to perform complex calculations on large datasets. It mimics the structure and function of the human brain and trains machines by learning from examples. Deep learning is widely used by industries that deal with complex problems, such as health care, eCommerce, entertainment, and advertising.
This post explores the basic types of artificial neural networks and how they work to enable deep learning algorithms.
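To make the "learning from examples" idea concrete, here is a minimal, purely didactic feedforward network, the simplest ANN type, trained with plain gradient descent and backpropagation on the XOR problem; the architecture and hyperparameters are arbitrary illustrative choices:

```python
import numpy as np

# XOR: a classic task a single-layer network cannot solve,
# but one hidden layer can
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # backprop: output layer
    d_h = (d_out @ W2.T) * h * (1 - h)       # backprop: hidden layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print((out > 0.5).astype(int).ravel())       # predictions after training
```

The same forward-pass/backpropagation loop, scaled up in depth and width and run on large datasets, is what "deep learning" refers to; recurrent and convolutional variants change the wiring, not the principle.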
Will generative artificial intelligence, taught various activities so far performed only by humans, solving complex tasks, self-improving in performing specific tasks, and trained through deep learning with the use of artificial neural network technology, be able to learn from its activities and, in the process of self-improvement, learn from its own mistakes?
Could a possible future combination of generative artificial intelligence technology and general artificial intelligence result in the creation of a highly technologically advanced super general artificial intelligence that improves itself, so that its self-improvement escapes human control and it becomes independent of its creator, man?
An important issue concerning the prospects for the development of artificial intelligence technology and its applications is the question of how much independence should be granted to intelligent systems, built on generative artificial intelligence and taught to perform highly complex tasks, in self-improvement and in repairing randomly occurring faults, errors and system failures. For many years there have been deliberations and discussions about granting systems built on generative artificial intelligence greater autonomy in making decisions about self-improvement and about repairing system faults and errors caused by random external events. On the one hand, if security systems based on generative artificial intelligence are built and developed in public institutions, or in commercially operating businesses providing a certain category of safety for people, it is important to give these intelligent systems a certain degree of decision-making autonomy: in a serious crisis, natural disaster, geological disaster, earthquake, flood, fire, etc., a human might decide too late relative to the much faster response possible for an automated, intelligent security, emergency response, early warning, risk management or crisis management system. On the other hand, the greater the degree of self-determination given to an automated, intelligent information system, including a security system, the greater the probability that a failure will change the operation of the system in such a way that the automated, intelligent, generative AI-based system slips completely out of human control.
For an automated system to return quickly and on its own to correct operation after a negative external crisis factor causes a system failure, the automated, intelligent system must be given some scope of autonomy and self-decision-making. However, to determine what this scope of autonomy should be, one must first carry out a multifaceted analysis and diagnosis of the impact factors that can act as risk factors and cause malfunctions or failures of an intelligent information system. Moreover, if generative artificial intelligence technology is in the future enriched with highly advanced general artificial intelligence, then the scope of autonomy given to an intelligent information system built to automate a risk management system and provide a high level of safety for people may be large. However, if at that stage of development an incident of system failure were to occur due to certain external, or perhaps also internal, factors, the negative consequences of such a system slipping out of human control could be very large and are currently difficult to assess. In this way, a paradox of building and developing systems based on highly advanced general artificial intelligence may be realized: the more perfect the automated, intelligent system a human builds, an information system far beyond the capacity of the human mind to process and analyze large sets of data and information, the more autonomy it will be given, precisely because of its perfection, to make crisis management decisions, to repair its own failures, and to decide much faster than a human could.
On the other hand, if, despite the low probability of an abnormal event, a new type of external factor occurs and a new category of risk materializes, nevertheless causing the effective failure of a highly intelligent system, this may lead to such a system slipping completely out of human control. The consequences for humans, above all the negative ones, of a highly autonomous intelligent information system based on super general artificial intelligence escaping control in this way would be difficult to estimate in advance.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Can the possible future combination of generative artificial intelligence and general artificial intelligence technologies result in the creation of a highly technologically advanced super general artificial intelligence that improves itself, so that its self-improvement escapes human control and it becomes independent of its creator, man?
Will generative artificial intelligence, taught various activities so far performed only by humans, solving complex tasks, self-improving in performing specific tasks, and trained through deep learning with the use of artificial neural network technology, be able to draw conclusions from its activities and, in the process of self-improvement, learn from its own mistakes?
Will generative artificial intelligence in the future in the process of self-improvement learn from its own mistakes?
The key issues of opportunities and threats to the development of artificial intelligence technologies are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz

In your opinion, will the development of artificial intelligence applications be associated mainly with opportunities, positive aspects, or rather threats, negative aspects?
Technological progress has recently accelerated, including the development of generative artificial intelligence technology. This progress in improving and implementing ICT information technologies, including applications of tools based on generative artificial intelligence, is becoming a symptom of civilization's transition to the next technological revolution: from the technologies typical of Industry 4.0 to Industry 5.0. Generative artificial intelligence technologies are finding more and more new applications in combination with previously developed technologies, i.e. Big Data Analytics, Data Science, Cloud Computing, Personal and Industrial Internet of Things, Business Intelligence, Autonomous Robots, Horizontal and Vertical Data System Integration, Multi-Criteria Simulation Models, Digital Twins, Additive Manufacturing, Blockchain, Smart Technologies, Cyber Security Instruments, Virtual and Augmented Reality and other Advanced Data Mining technologies. In addition, the rapid development of generative AI-based tools available on the Internet is due to the fact that more and more companies, enterprises and institutions are creating chatbots that have been taught specific skills previously performed only by humans. Through deep learning, which uses artificial neural network technologies modeled on human neurons, the chatbots and other generative AI tools so created are increasingly taking over specific tasks from humans, or improving how those tasks are performed. The main driver of the growing scale of generative AI applications in the business activities of companies and enterprises is the great opportunity AI technologies offer to automate complex, multi-criteria, organizationally advanced processes and to reduce the operating costs of carrying them out.
On the other hand, certain risks may be associated with applying generative AI technology in business entities and in financial and public institutions. The potential risks include the replacement of people in various jobs by autonomous robots equipped with generative AI, an increase in the scale of cybercrime carried out with the use of AI, and an increase in the scale of disinformation and fake news on online social media: crafted photos, texts, videos and graphics presenting fictional content and non-existent events, based on statements and theses not supported by facts, created with generative AI applications available on the Internet.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
In your opinion, will the development of artificial intelligence applications be associated mainly with opportunities, positive aspects, or rather threats, negative aspects?
Will there be mainly opportunities or rather threats associated with the development of artificial intelligence applications?
I am conducting research in this area. Particularly relevant issues of opportunities and threats to the development of artificial intelligence technologies are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
And what is your opinion about it?
What do you think about this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz

Recent advances in spiking neural networks (SNNs), standing as the next generation of artificial neural networks, have demonstrated clear computational benefits over traditional frame- or image-based neural networks. In contrast to more traditional artificial neural networks (ANNs), SNNs propagate spikes, i.e., sparse binary signals, in an asynchronous fashion. Using more sophisticated neuron models, such brain-inspired architectures can in principle offer more efficient and compact processing pipelines, leading to faster decision-making with low computational and power resources, thanks to the sparse nature of the spikes. A promising research avenue is the combination of SNNs with event cameras (or neuromorphic cameras), a new imaging modality enabling low-cost imaging at high speed. Event cameras are also bio-inspired sensors, recording only temporal changes in intensity. This drastically reduces the amount of data recorded and, in turn, can provide higher frame rates, as most static or background objects (as seen by the camera) can be discarded. Typical applications of this technology include detection and tracking of high-speed objects, surveillance, and imaging and sensing from highly dynamic platforms.
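To illustrate the spiking mechanism described above, here is a minimal discrete-time leaky integrate-and-fire (LIF) neuron, the basic unit of most SNNs; the parameter values are illustrative assumptions, not taken from any particular SNN library:

```python
import numpy as np

# LIF neuron: membrane potential leaks toward rest, integrates input
# current, and emits a binary spike (then resets) on crossing threshold.
def lif_neuron(current, dt=1.0, tau=10.0, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    v = v_rest
    spikes = []
    for i_t in current:
        v += dt / tau * (v_rest - v) + i_t   # leak + integrate
        if v >= v_thresh:
            spikes.append(1)                 # sparse binary event
            v = v_reset                      # reset after firing
        else:
            spikes.append(0)
    return np.array(spikes)

# Constant drive strong enough to reach threshold gives a regular
# spike train; zero drive gives none (this sparsity is what saves
# computation and power in SNN hardware).
spikes = lif_neuron(np.full(100, 0.15))
silent = lif_neuron(np.zeros(100))
print("spike count:", spikes.sum(), "| silent count:", silent.sum())
```

An event-camera pixel behaves analogously: it emits an event only when the log-intensity change at that pixel crosses a threshold, which is why the two technologies pair naturally.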
Could a thinking generative artificial intelligence independently make decisions contrary to human expectations which could lead to the annihilation of humanity?
Recently, generative artificial intelligence technology, which is taught skills previously performed only by humans, has been developing rapidly. The learning process uses artificial neural networks built in the likeness of human neurons, together with deep learning technology. In this way intelligent chatbots are created that can converse with people so convincingly that it is increasingly difficult to distinguish whether we are talking to a human or to a chatbot. Chatbots are taught to converse using large sets of digital data and information, and the process of conversation, including answering questions and executing specific commands, is perfected through guided dialogue. In addition, tools available on the Internet based on generative artificial intelligence can create graphics, photos and videos according to given commands. Intelligent systems are also being created that specialize in solving specific tasks and are becoming ever more helpful to humans in solving increasingly complex problems. The number of new applications of tools equipped with generative artificial intelligence is growing rapidly. On the other hand, not all aspects of the development of artificial intelligence are positive. There are more and more examples of negative applications, through which, for example, fake news is created in social media and disinformation is spread on the Internet. Possibilities are emerging for the use of artificial intelligence in cybercrime and in deliberately shaping the social awareness of Internet users on specific topics. In addition, for several decades science-fiction films have presented futuristic visions in which intelligent robots, autonomous cyborgs equipped with artificial intelligence (e.g. 
Terminator), artificial intelligence systems managing the flight of a ship on an interplanetary manned mission (e.g. 2001: A Space Odyssey), or artificial intelligence systems and intelligent robots that turned humanity into a source of electricity for their own needs (e.g. the Matrix trilogy), instead of helping people, rebelled against humanity. This topic has become topical again. Attempts are being made to create autonomous humanoid cyborgs equipped with artificial intelligence systems, robots able to converse with humans and carry out certain commands. Research work is under way to create something that imitates human consciousness, referred to as artificial consciousness, as part of the improvement of generative artificial intelligence systems. There are many indications that humans are striving to create a thinking generative artificial intelligence. It cannot be ruled out that such a machine could independently make decisions contrary to human expectations, which could lead to the annihilation of mankind. In view of the above, under the conditions of dynamic development of generative artificial intelligence technology, considerations about the potential dangers that this development may pose to humanity in the future have once again become relevant.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Could a thinking generative artificial intelligence independently make decisions contrary to human expectations which could lead to the annihilation of humanity?
Could a thinking generative artificial intelligence independently make decisions contrary to human expectations?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz

How will the rivalry between IT professionals operating on two sides of the barricade, i.e. in the sphere of cybercrime and cyber security, change after the implementation of generative artificial intelligence, Big Data Analytics and other technologies typical of the current fourth technological revolution?
Almost from the very beginning of the development of ICT, a rivalry has been playing out between IT professionals operating on two sides of the barricade, i.e. in the spheres of cybercrime and cyber security. Whenever technological progress produces a new technology that facilitates remote communication and the digital transfer and processing of data, that technology is also put to use in hacking and/or cybercriminal activities. Similarly, when the Internet appeared, it created a new sphere of remote communication and digital data transfer on the one hand, while on the other hand new techniques of hacking and cybercrime emerged, for which the Internet became a kind of perfect environment for development. Now, perhaps, the next stage of technological progress is taking place: the transition from the fourth to the fifth technological revolution and the development of Industry 5.0 technology, supported by the implementation of constantly improved generative artificial intelligence based on artificial neural networks subjected to deep learning. The development of generative artificial intelligence technology and its applications will significantly increase the efficiency of business processes and raise labor productivity in companies and enterprises operating in many different sectors of the economy. Accordingly, after the implementation of generative artificial intelligence, Big Data Analytics and other technologies typical of the current fourth technological revolution, the competition between IT professionals operating on the two sides of the barricade will probably change. However, what will be the essence of these changes?
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How will the competition between IT professionals operating on the two sides of the barricade, i.e., in the sphere of cybercrime and cyber security, change after the implementation of generative artificial intelligence, Big Data Analytics and other technologies typical of the current fourth technological revolution?
How will the realm of cybercrime and cyber security change after the implementation of generative artificial intelligence?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz

How to build an intelligent computerized Big Data Analytics system that would retrieve real-time data and information from specific online databases, scientific knowledge indexing databases, domain databases, online libraries, information portals, social media, etc., and thus provide a database and up-to-date information for an intelligent chatbot, which would then be made available on the Internet for Internet users?
Almost every major technology company operating on the Internet either has already made its intelligent chatbot available online or is working on one and will soon offer it to Internet users. The general formula for building, organizing and providing intelligent chatbots is analogous across technology companies, although the detailed technological solutions differ. Among the differentiating factors is the timeliness of the data and information contained in the underlying databases of digitized data, data warehouses, Big Data databases, etc. These contain data sets acquired at different times and with different information characteristics from various online knowledge bases, publication indexing databases, online libraries of publications, information portals, social media, and so on.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How to build an intelligent computerized Big Data Analytics system that would retrieve real-time data and information from specific online databases, scientific knowledge indexing databases, domain databases, online libraries, information portals, social media, etc., and thus provide a database and up-to-date information for an intelligent chatbot, which would then be made available on the Internet for Internet users?
How to build a Big Data Analytics system that would provide a database and up-to-date information for an intelligent chatbot made available on the Internet?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz

If an imitation of human consciousness called artificial consciousness is built on the basis of AI technology in the future, will it be built by mapping the functioning of human consciousness or rather as a kind of result of the development and refinement of the issue of autonomy of thought processes developed within the framework of "thinking" generative artificial intelligence?
Answers to this question may vary. The key issues, however, are the moral dilemmas raised by the applications of constantly developing and improving artificial intelligence technology and the preservation of ethics in developing those applications. In addition, a key part of this problem is the need to explore and clarify more fully what human consciousness is, how it is formed, and how it functions within specific networks of neurons in the human central nervous system.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
If an imitation of human consciousness called artificial consciousness is built on the basis of AI technology in the future, will it be built by mapping the functioning of human consciousness or rather as a kind of result of the development and refinement of the issue of autonomy of thought processes developed within the framework of "thinking" generative artificial intelligence?
How can artificial consciousness be built on the basis of AI technology?
And what is your opinion on this topic?
What do you think about this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz

I have read a few articles that used SPSS for Artificial Neural Network analysis with survey data. What is your opinion about the user-friendliness of SPSS in this regard? Do you prefer any other software package?
Can the applicability of Big Data Analytics backed by artificial intelligence technology in the field be significantly enhanced when the aforementioned technologies are applied to the processing of large data sets extracted from the Internet and executed by the most powerful quantum computers?
Can the use of Big Data Analytics and artificial intelligence, applied to the processing of large data sets and run on the most powerful quantum computers, significantly improve the conduct of analyses and scientific research, increase its efficiency, and shorten the execution of research work?
What are the analytical capabilities of processing large data sets extracted from the Internet and realized by the most powerful quantum computers, which also apply Industry 4.0/5.0 technologies, including generative artificial intelligence and Big Data Analytics technologies?
Can the scale of data processing carried out by the most powerful quantum computers be comparable to the processing that takes place in the billions of neurons of the human brain?
In recent years, the digitization of data and archived documents, the digitization of data transfer processes, etc., has been progressing rapidly.
The progressive digitization of data and archived documents, the digitization of data transfer processes, and the Internetization of communications, economic processes, and research and analytical processes are becoming typical features of today's developed economies. Accordingly, developed economies in which information and computer technologies are developing rapidly and finding numerous applications in various economic sectors are called information economies, and the societies operating in them are referred to as information societies. Increasingly, in discussions of this issue, it is stated that another technological revolution is currently taking place, described as the fourth, and in some aspects already the fifth. Technologies classified as Industry 4.0/5.0 are developing particularly rapidly and finding more and more applications. These technologies, which support research and analytical processes carried out in various institutions and business entities, include Big Data Analytics and artificial intelligence, including generative artificial intelligence with artificial neural networks subjected to deep learning processes. The computational capabilities of microprocessors, which are becoming ever more capable and process data ever faster, are gradually increasing. There is a rapid increase in the processing of ever larger sets of data and information. The number of companies, enterprises, and public, financial and scientific institutions that create large data sets and massive databases of data and information, generated in the course of their activities and obtained from the Internet, and processed in specific research and analytical processes, is growing.
In view of the above, the opportunities for the application of Big Data Analytics backed by artificial intelligence technology in terms of improving research techniques, in terms of increasing the efficiency of the research and analytical processes used so far, in terms of improving the scientific research conducted, are also growing rapidly. By using the combined technologies of Big Data Analytics, other technologies of Industry 4.0/5.0, including artificial intelligence and quantum computers in the processing of large data sets, the analytical capabilities of data processing and thus also conducting analysis and scientific research can be significantly increased.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Can the use of Big Data Analytics and artificial intelligence, applied to the processing of large data sets and run on the most powerful quantum computers, significantly improve the conduct of analyses and scientific research, increase its efficiency, and shorten the execution of research work?
Can the applicability of Big Data Analytics supported by artificial intelligence technology in the field significantly increase when the aforementioned technologies are applied to the processing of large data sets extracted from the Internet and realized by the most powerful quantum computers?
What are the analytical capabilities of processing large data sets obtained from the Internet and realized by the most powerful quantum computers?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Warm regards,
Dariusz Prokopowicz

How should artificial intelligence technologies be implemented in education so as not to deprive students of development and critical thinking, so that critical thinking continues to be developed in students in the new realities of the technological revolution, and so that education develops with the support of modern technology?
The development of artificial intelligence, like any new technology, is associated with various applications of this technology in companies and enterprises operating in various sectors of the economy, as well as in financial and public institutions. These applications generate an increase in the efficiency of various processes, including an increase in human productivity. On the other hand, artificial intelligence technologies are also finding negative applications that generate certain risks, such as the rise of disinformation in online social media. An increasing number of applications based on artificial intelligence technology available on the Internet are also being used as technical teaching aids in the education process in schools and universities. At the same time, these applications are used by pupils and students as a means of facilitating homework, the preparation of term papers, the completion of project work, various studies, and so on. Thus, on the one hand, the positive aspects of applying artificial intelligence technologies in education are recognized. On the other hand, serious risks are also recognized: students who increasingly use various applications based on artificial intelligence, including generative artificial intelligence, to facilitate the completion of various assignments may reduce the extent to which they exercise critical thinking. The potential danger of depriving students of development and critical thinking is therefore being considered. The development of artificial intelligence technology is currently progressing rapidly.
Various applications based on constantly improved generative artificial intelligence subjected to learning processes are being developed, machine learning solutions are being created, and artificial intelligence is being taught to carry out various activities previously performed by humans. In deep learning processes, generative artificial intelligence equipped with artificial neural networks is taught to carry out complex, multifaceted processes and activities on the basis of large data sets collected in database systems and processed using Big Data Analytics technology. Since current information systems, equipped with computers of high computing power and with artificial intelligence technologies, process large data sets many times faster and more efficiently than the human mind, some research centers are already working on creating a highly advanced generative artificial intelligence that would realize a kind of artificial thought process, much faster and more efficiently than the human brain. However, even if artificial consciousness technology imitating the functioning of human consciousness could someday be created, humans should not be deprived of critical thinking. Above all, students in schools should not be deprived of critical thinking in view of the growing scale of applications based on artificial intelligence in education. The aim should be for the artificial intelligence-based applications used in the education process to support education without depriving students of critical thinking. However, the question arises: how should this be done?
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How should artificial intelligence technologies be implemented in education so as not to deprive students of development and critical thinking, so that critical thinking continues to be developed in students in the new realities of the technological revolution, and so that education develops with the support of modern technology?
How should artificial intelligence technologies be implemented in education to continue to develop critical thinking in students?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Warm regards,
Dariusz Prokopowicz

I know that a lot of artificial neural networks have appeared now. Maybe soon we will not read articles and do our scientific work ourselves, and AI will help us. Maybe it is happening now? What is your experience working with AI and neural networks in science?
What are the possibilities for the applications of Big Data Analytics backed by artificial intelligence technology in terms of improving research techniques, in terms of increasing the efficiency of the research and analytical processes used so far, in terms of improving the scientific research conducted?
The progressive digitization of data and archived documents, the digitization of data transfer processes, and the Internetization of communications, economic processes, and research and analytical processes are becoming typical features of today's developed economies. Currently, another technological revolution is taking place, described as the fourth, and in some aspects already the fifth. Technologies categorized as Industry 4.0/5.0 are developing particularly rapidly and finding more and more applications. These technologies, which support research and analytical processes carried out in various institutions and business entities, include Big Data Analytics and artificial intelligence. The computational capabilities of microprocessors, which are becoming ever more capable and process data ever faster, are successively increasing. The processing of ever-larger sets of data and information is growing. Databases of data and information extracted from the Internet and processed in the course of specific research and analysis processes are being created. In connection with this, the possibilities for applying Big Data Analytics supported by artificial intelligence technology to improve research techniques, to increase the efficiency of the research and analytical processes used so far, and to improve the scientific research being conducted are also growing rapidly.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What are the possibilities of applications of Big Data Analytics supported by artificial intelligence technology in terms of improving research techniques, in terms of increasing the efficiency of the research and analytical processes used so far, in terms of improving the scientific research conducted?
What are the possibilities of applications of Big Data Analytics backed by artificial intelligence technology in terms of improving research techniques?
What do you think on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
On my ResearchGate profile you can find several publications on Big Data issues. I invite you to scientific cooperation in this problem area.
Dariusz Prokopowicz

If neural networks adopt the principle of deep learning, why haven't they been able to create their own language for communication today?
What areas of application of artificial neural networks in information technology are now the most promising (except for pattern recognition, chatbots)? Probably, AI applications in Big Data, Data Science, various kinds of forecasting (for example, time series)? I consider these areas (Big Data, Data Science) important, because after the inevitable obsolescence of modern artificial neural networks, these applications will still remain relevant, only new technologies will work in them. Big data in the world will not disappear anywhere and will always need to be processed - with any available technology.
Forbes.ru has the following article: Applications of Artificial Intelligence Across Various Industries (https://www.forbes.com/sites/qai/2023/01/06/applications-of-artificial-intelligence/).
Could you give me some advice, please? Is there any method to determine the number of hidden layers and hidden nodes required to produce good accuracy in Artificial Neural Networks, especially Deep Learning? I would be glad if you could answer or give me a reference link about this. Thank you in advance.
Artificial neural networks are considered one of the most important and most advanced sciences at the present time, and they have many applications in various sciences. They also have their own specialized experts.
I noticed that in some very bad models of neural networks, the value of R² (coefficient of determination) can be negative. That is, the model is so bad that the mean of the data is better than the model.
In linear regression models, the multiple correlation coefficient (R) can be calculated using the root of R². However, this is not possible for a model of neural networks that presents a negative R². In that case, is R mathematically undefined?
I tried calculating the correlation between y and y_pred (Pearson), but it is mathematically undefined (division by zero). I am attaching the values.
Obs.: The question is about artificial neural networks.
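A tiny worked example of the situation described above, with numbers invented purely to show the arithmetic. R² goes negative whenever SS_res exceeds SS_tot (the model is worse than predicting the mean), and the sample Pearson correlation is undefined because a constant prediction has zero variance, putting a zero in the denominator:

```python
# R^2 = 1 - SS_res/SS_tot can go negative when the model is worse than
# simply predicting the mean; Pearson's r is undefined when either series
# has zero variance (division by zero in the denominator).
def r_squared(y, y_pred):
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_pred))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

y      = [1.0, 2.0, 3.0, 4.0]
y_pred = [4.0, 4.0, 4.0, 4.0]     # constant (very bad) predictions

r2 = r_squared(y, y_pred)         # negative: worse than the mean

# Pearson's denominator contains sqrt(var(y) * var(y_pred));
# var(y_pred) is exactly zero here, so r is undefined (0/0).
var_pred = sum((p - 4.0) ** 2 for p in y_pred)
```

So yes: for such a model, R = sqrt(R²) has no real value, and the Pearson correlation between y and a constant y_pred is mathematically undefined rather than merely zero.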
Hi. I'm generating the amount of data required for training an artificial neural network (ANN) using a reliable and validated self-developed numerical code. Is it right approach?
Or the necessary data should be produced only with experimental tests ?
Best Regards
Saeed
Optimization and prediction of the kWh/m3 ratio in a pumping station using mathematical modelling: multiple linear regression (MLR) and artificial neural networks (ANN).
Currently, some researchers count on AI programs to enhance their problem solving and obtain favorable results, especially in multi-parameter preparation research. Artificial neural networks, central composite design, and response surface methodology are helpful examples that can provide researchers with strong options and choices for each criterion on the way to optimal outcomes. So we strongly recommend using them to investigate how some conventional extraction methods can be made worthwhile.
How can I calculate the ANOVA table for a quadratic model in Python?
I want to calculate a table like the one I uploaded in the image.
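A plain-NumPy sketch of one way to build such a table for a quadratic fit. The x/y data below are made up for illustration, and the column layout simply mimics a standard regression ANOVA table (df, SS, MS, F):

```python
import numpy as np

# ANOVA decomposition for a quadratic fit y = b0 + b1*x + b2*x^2:
# SS_total = SS_regression + SS_residual, with df = (n-1) = p + (n-p-1).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([1.1, 3.9, 9.2, 15.8, 25.1, 36.2, 48.9, 64.3])

coeffs = np.polyfit(x, y, deg=2)        # least-squares quadratic fit
y_hat  = np.polyval(coeffs, x)

n, p = len(x), 2                        # p = number of model terms (x, x^2)
ss_tot = np.sum((y - y.mean()) ** 2)
ss_reg = np.sum((y_hat - y.mean()) ** 2)
ss_res = np.sum((y - y_hat) ** 2)

ms_reg, ms_res = ss_reg / p, ss_res / (n - p - 1)
f_stat = ms_reg / ms_res                # F statistic for the overall model

print(f"{'Source':<12}{'df':>4}{'SS':>12}{'MS':>12}{'F':>10}")
print(f"{'Regression':<12}{p:>4}{ss_reg:>12.3f}{ms_reg:>12.3f}{f_stat:>10.1f}")
print(f"{'Residual':<12}{n-p-1:>4}{ss_res:>12.3f}{ms_res:>12.3f}")
print(f"{'Total':<12}{n-1:>4}{ss_tot:>12.3f}")
```

If statsmodels is available, `anova_lm(ols('y ~ x + I(x**2)', data=df).fit())` (from `statsmodels.stats.anova` and `statsmodels.formula.api`) should produce a comparable table without the manual bookkeeping.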

How do I write a Python script for predicting the strength of a double-skin steel-concrete composite wall using an Artificial Neural Network? I have attached the figure for your reference.

Hi everyone
I used the Neural Network toolbox in MATLAB with input and target data. How can I create an equation that correctly estimates the target?
(Based on the ANN created, weights, biases, and related inputs)
Is there a method, tool, or idea to solve this issue?
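One way to look at it: a trained single-hidden-layer network with tanh ('tansig') hidden units and a linear output already is a closed-form equation, y = W2·tanh(W1·x + b1) + b2. The sketch below uses made-up placeholder weights; for a network trained in MATLAB you would substitute net.IW{1,1}, net.b{1}, net.LW{2,1} and net.b{2}, and remember to include the mapminmax input/output normalization that the toolbox applies by default:

```python
import numpy as np

# For a single-hidden-layer feedforward network with tanh hidden units
# and a linear output, the fitted network IS the closed-form equation
#   y = W2 @ tanh(W1 @ x + b1) + b2.
# The weights below are placeholders, not values from a real trained net.
W1 = np.array([[0.5, -0.3], [0.2, 0.8]])   # hidden-layer weights (2 neurons, 2 inputs)
b1 = np.array([0.1, -0.2])                 # hidden-layer biases
W2 = np.array([[1.0, -1.5]])               # output-layer weights
b2 = np.array([0.05])                      # output-layer bias

def ann_equation(x):
    """Evaluate the network's closed-form equation at input vector x."""
    return W2 @ np.tanh(W1 @ x + b1) + b2

y = ann_equation(np.array([0.4, 0.6]))
```

Writing the same expression out symbolically (one tanh term per hidden neuron, each multiplied by its output weight) gives the explicit equation that replaces the trained network.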
How do I write a script for predicting the strength of concrete using an Artificial Neural Network in Python or MATLAB?
In this digital world, with increasing numbers of digital devices and growing volumes of data, security is a significant concern. In most cases DoS/DDoS/EDoS attacks are performed by botnets. I want to do research on detecting and preventing botnets. Can you suggest an effective research title for botnet detection and prevention?
For time-series forecasting, I'm using an LSTM network. Are there any metrics that could be used to evaluate the forecasting model's generalization throughout the training phase, i.e., whether it is neither overfitting nor underfitting? To check that the network is not overfitted, for instance, we can look at both the training loss and validation loss curves. Can such overfitting or underfitting be detected using any tables?
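One simple tabular diagnostic, sketched below with invented loss values (not a real training run): record the training and validation loss per epoch and tabulate the generalization gap between them. A gap that keeps widening while the training loss still falls is the classic overfitting signature; both losses staying high and flat suggests underfitting. Keras-style per-epoch history lists are assumed:

```python
# Per-epoch losses, e.g. history.history['loss'] / ['val_loss'] in Keras.
# These numbers are illustrative only, not from an actual run.
train_loss = [1.00, 0.60, 0.40, 0.30, 0.25, 0.22, 0.20, 0.19]
val_loss   = [1.05, 0.65, 0.45, 0.36, 0.33, 0.34, 0.38, 0.45]

# Generalization gap per epoch: widening gap + falling train loss = overfit.
gap = [v - t for t, v in zip(train_loss, val_loss)]

# Epoch with the best validation loss: a natural early-stopping point.
best_epoch = min(range(len(val_loss)), key=val_loss.__getitem__)

# Simple flag: validation got worse after its minimum while training kept improving.
overfitting = val_loss[-1] > min(val_loss) and train_loss[-1] <= min(train_loss)

for e, (t, v, g) in enumerate(zip(train_loss, val_loss, gap)):
    print(f"epoch {e}: train={t:.2f}  val={v:.2f}  gap={g:+.2f}")
```

Printed as a table, this is exactly the kind of epoch-by-epoch summary that lets you detect over/underfitting without plotting the curves.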
How might artificial neural networks influence the development of microbiology? It will be interesting to gain your thoughts on potential future insights and technologies.
I have computed an artificial neural network analysis using the Multilayer Perceptron method in SPSS. From the output of this model, I can see the weights evaluated at the hidden layers. However, I do not know how to write down the actual equation using these weights, like the equation obtained from a linear regression model using the parameter estimates. Please help me with this.
Hi all.
As part of my research work, I have segmented objects in an image for classification. After segmentation, the objects have black backgrounds. I used those images to train and test the proposed CNN model.
I want to know how the CNN processes this black background in the image classification task.
Thank you
When I try to perform the following calculation, Python gives an unexpected answer.
2*(1.1-0.2)/(2-0.2)-1
I have attached a photo of the answer.
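This is not a Python bug but ordinary binary floating-point rounding (IEEE 754): 1.1 and 0.2 have no exact binary representation, so the expression lands a tiny rounding error away from the exact value 0. A minimal demonstration, with the standard-library `decimal` module used for exact arithmetic:

```python
from decimal import Decimal

# Binary floats cannot represent 1.1 or 0.2 exactly, so the result is
# approximately 2.2e-16 instead of exactly 0.
float_result = 2 * (1.1 - 0.2) / (2 - 0.2) - 1

# Exact decimal arithmetic gives the algebraically expected answer, 0.
exact_result = 2 * (Decimal("1.1") - Decimal("0.2")) / (2 - Decimal("0.2")) - 1
```

For comparisons of floating-point results, prefer `math.isclose(a, b)` over `a == b`.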

Hi all,
I'm working on hand gesture recognition, I have worked on LS-SVM, and now I'm working on ANN-based hand gesture recognition, just need to find a potential research gap, and the road map. Any suggestions?
I'm using GRU to forecast the next day's temperature with a minute resolution; the model will take today's minutes as input and produce tomorrow's temperature outputs; the code I'm using is below. My issue is whether I should use 1440 units with the dense layer since I forecast 1440 time steps or simply one unit because I predict only one variable.
from tensorflow.keras import layers, models

model = models.Sequential()
model.add(layers.GRU(200, activation='tanh', recurrent_activation='sigmoid', input_shape=(1440, 1)))
model.add(layers.Dropout(rate=0.1))
model.add(layers.Dense(1440))  # one unit per forecast time step (1440 minutes)
model.compile(loss='mse', optimizer='adam')
For my undergraduate thesis, I want to work on, "The application of Artificial Neural Network -(ANN) in the development of a SCM distribution network". Want some opinion about the trend/future of this topic & some suggestions as a beginner.
Can anyone please share some good articles related to these two topics?
- Machine learning for Agri-Food 4.0 development,
- Artificial neural networks for Agri-Food 4.0 analysis,
Dear all,
Why is forward selection search so popular and widely used in feature selection based on mutual information, such as MRMR, JMI, CMIM, and JMIM (See )? Why are other search approaches, such as beam search, not used? If there is a reason for that, kindly reply to me.
Can you give me advice on how to solve constrained problems with ANNs? I can name two common scenarios where this would benefit both accuracy and learning performance:
- Predict physical values that are solely non-negative (pressure, concentration, mass)
- Predict state of system with oblivious limitations - e.g., volumetric mixture of components cannot have total % sum higher than 100 %
So, my question is how I should handle such cases. I prefer MATLAB, but if it is not possible with any of its toolboxes, I'm also open to other recommendations.
Edit: Just to clarify, the question is not about creating and training the ANN itself. I need to know how to apply a linear constraint function to the output, somewhat like in reinforcement learning.
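A common way to enforce such constraints is to build them into the output layer rather than into the training loop: a softplus (or exponential) output guarantees non-negativity, and a softmax output guarantees positive fractions that sum to 1 (MATLAB's Deep Learning Toolbox supports softmax and custom output layers for this). A minimal NumPy sketch of the two transforms applied to hypothetical raw network outputs ("logits"):

```python
import numpy as np

def softplus(z):
    # Numerically stable log(1 + e^z): smooth and strictly positive
    return np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0.0)

def softmax(z):
    # Positive components that sum to 1 (a valid composition)
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([-2.0, 0.5, 3.0])   # hypothetical raw network outputs
pressure = softplus(logits)           # always >= 0 (pressure, mass, ...)
fractions = softmax(logits)           # each in (0, 1), total exactly 100 %
```

Because the constraint is satisfied by construction, gradient-based training needs no penalty terms or projection steps.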
I want datasets of blood and bone cancer. I want to sequence them in Python using Artificial Neural Networks.
What is the main disadvantage of a global optimization algorithm for the Backpropagation Process?
Under what conditions can we still use a local optimization algorithm for the Backpropagation Process?
Is there an index that includes the mixing index and pressure drop for micromixers?
For example, for heat exchangers and heat sinks, there is an index that includes heat transfer performance and hydraulic performance, which is presented below:
η = (Nu/Nub) / (f/fb)^(1/3)
The purpose of these indexes is to create a general criterion to check the devices' overall performance and investigate the parameters' impact.
Is there an index that includes mixing performance and hydraulic performance for micromixers?
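For reference, the heat-exchanger index quoted above is straightforward to evaluate; one could imagine an analogous micromixer index replacing the Nusselt-number ratio with a mixing-index ratio, though I am not aware of a single standard name for it. A small sketch with purely illustrative numbers:

```python
# Thermal-hydraulic performance factor: eta = (Nu/Nu_b) / (f/f_b)^(1/3)
# Nu, f: Nusselt number and friction factor of the enhanced design;
# Nu_b, f_b: baseline values. The numbers below are illustrative only.
def performance_factor(Nu, Nu_b, f, f_b):
    return (Nu / Nu_b) / (f / f_b) ** (1.0 / 3.0)

eta = performance_factor(Nu=120.0, Nu_b=100.0, f=1.5, f_b=1.0)
# eta > 1 means the heat-transfer gain outweighs the pressure-drop penalty
```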
Dear Scholars,
I would like to solve a fluid-mechanics optimization problem that requires implementing an optimization algorithm together with an artificial neural network. I have some questions about convex optimization algorithms and would appreciate your advice.
My question concerns the possibility of implementing convex optimization together with an artificial neural network to find a unique solution to a multi-objective optimization problem. The optimization problem I am trying to code is described by the following equations. The objective function used in the optimization problem is defined as:
📷
Where OF is the objective function, wi are the weights assigned to each cost function, Oi is the ith cost function, defined as the relative difference between the experimental and predicted evaporation metrics for the fuel droplet (denoted by the superscripts exp and mdl, respectively), k is the number of cost functions (k = 4, equal to the number of evaporation metrics), and c = [c1, c2, c3] is the mass fraction vector defining the blend of three components, subject to the following constraints:
📷
Due to the high CPU time required for modeling and calculating the objective function (OF), an ANN was trained on tabulated data from the fuel droplet evaporation model and used to calculate the OF during the optimization iterations.
In the same manner, the wi values are subject to optimization during the minimization of OF, with the following constraints:
📷
It is worth mentioning that I have already solved this problem by employing a genetic algorithm together with the ANN. Although the iterative process converged to acceptable solutions, the algorithm did not return a unique solution.
With that in mind, I would like to ask about the possibility of using a convex optimization algorithm together with the ANN to solve the aforementioned problem and achieve a unique solution. If this is feasible, I would appreciate it if you could mention some relevant publications.
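For what it's worth, if the surrogate were convex in c, the weighted-sum problem would have a unique minimizer; a generic ANN surrogate is non-convex, so a smooth NLP solver only guarantees a local solution. A hedged SciPy sketch, with a toy quadratic standing in for the ANN surrogate, under the constraints c1 + c2 + c3 = 1 and 0 ≤ ci ≤ 1 (all names and values hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the ANN surrogate of OF(c); in the real problem this
# would be the trained network's prediction of the weighted cost.
def objective(c):
    target = np.array([0.5, 0.3, 0.2])  # hypothetical optimum blend
    return float(np.sum((c - target) ** 2))

constraints = [{"type": "eq", "fun": lambda c: np.sum(c) - 1.0}]  # c1+c2+c3 = 1
bounds = [(0.0, 1.0)] * 3                                          # 0 <= ci <= 1

res = minimize(objective, x0=np.array([1/3, 1/3, 1/3]),
               method="SLSQP", bounds=bounds, constraints=constraints)
print(res.x)   # mass fractions satisfying the constraints
```

Running the solver from several starting points and checking that they agree is the usual practical test of whether the solution behaves as if it were unique.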

I want to model my adsorption data with an ANN.
I'm doing a project to detect signs of Alzheimer's related macular degeneration, for which I would require a dataset of healthy and AD retinal images (ideally also in different stages of the disease), any suggestions of pre-existing datasets or how I might go about cobbling one together? Size and quality of the dataset aren't super high priority as it's a small POC.
Hi everyone,
I have collected a set of experimental data regarding the strength of a composite material. Besides quantitative data (dimensions and mechanical properties of the materials), linguistic variables, such as the type of composite material, are also included in data as the parameters affecting the material strength. I am trying to use ANN/ANFIS to predict the strength based on the mentioned variables. How is it possible to train a neural system with linguistic inputs included?
Any comments are appreciated.
Regards,
Keyvan
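A standard approach is to encode the linguistic variables numerically before training, e.g. with one-hot (dummy) encoding, and concatenate them with the quantitative inputs; the same encoded matrix can then feed an ANN or ANFIS. A small pandas sketch with hypothetical composite data:

```python
import pandas as pd

# Hypothetical data: the composite type is the linguistic variable
df = pd.DataFrame({
    "material": ["carbon", "glass", "aramid", "glass"],
    "thickness_mm": [6.0, 4.5, 5.0, 4.0],
    "strength_MPa": [900, 650, 700, 640],
})

# One-hot encode the linguistic column; numeric columns pass through
X = pd.get_dummies(df.drop(columns="strength_MPa"), columns=["material"])
y = df["strength_MPa"]
print(X.shape)  # (4, 4): thickness plus 3 indicator columns
```

If the categories are ordered (e.g. "low"/"medium"/"high" quality), an ordinal integer encoding or fuzzy membership values may suit ANFIS better than one-hot columns.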
Hello,
I want to work on an artificial neural network model and implement it for environmental parameters, but I am stuck at the model-testing phase. I want to do this work in the R statistical software. I installed everything (Keras, TensorFlow), but I am not able to interpret the results of the analysis. If anybody knows the procedure for testing the model and interpreting the results, please help. Advice on any other useful software is also welcome.
I have built a regression artificial neural network model. However, whenever I retrain the model I get a wide range of results in R accuracy and in MSE: I have obtained 97% accuracy and also 10%. I used the Neural Fitting app in MATLAB R2021a and the Levenberg-Marquardt algorithm to train my model.
I know that the different results come from different partitioning of the data (training, validation, and test sets), but how can I get reliable results?
My dataset size is 100; I can't make it bigger due to a lack of proper experimental data.
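With only 100 samples, a single random split produces exactly the spread described above; (repeated) k-fold cross-validation averages the score over many partitions and is the usual remedy (MATLAB's cvpartition/crossval do the same). A scikit-learn sketch with synthetic data standing in for the experiments:

```python
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))                 # synthetic stand-in for the data
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + rng.normal(scale=0.1, size=100)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
print(scores.mean(), scores.std())  # report mean +/- spread, not one split
```

Reporting the mean and standard deviation across folds is far more honest with 100 samples than any single train/validation/test partition.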
A multilayer perceptron (MLP) is a class of feedforward artificial neural networks (ANN), but in which cases are MLP considered a deep learning method?
Can anyone please tell me how to plot artificial neural networks with multiple inputs and outputs using MATLAB?
I need to visualize my multi-objective optimization case; the figure is attached below.

I have been doing research on different issues in the Finance and Accounting discipline for about 5 years. It becomes difficult for me to find some topics which may lead me to do projects, a series of research articles, working papers in the next 5-10 years. There are few journals which have updated research articles in line with the current and future research demand. Therefore, I am looking for such journal(s) that can help me as a guide to design research project that can contribute in the next 5-10 years.
I am working on noise-level prediction; all the needed data have been collected, and the permissible exposure limit and recommended exposure limit have been determined. I want to predict the noise level for the next 10-20 years with an artificial neural network model. Please, I need help. Thanks.
In his name is the judge
Hi
In order to design a controller for my damper (which is a TLCGD), I want to use a fuzzy system.
So I have to optimize the rules of the fuzzy controller. I want to know which is better for optimizing the rules of fuzzy systems: a genetic algorithm or an artificial neural network?
Wish you best
Take refuge in the right.
I am searching for feature-extraction algorithms for images that I want to classify using machine learning. I have heard only about SIFT; I have images of buildings and flowers to classify. Other than SIFT, what are some good algorithms?
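Besides SIFT, common classical choices are HOG, LBP, ORB, and color histograms, and nowadays features taken from a pretrained CNN. As a minimal illustration, a color-histogram extractor in NumPy (the image here is random dummy data):

```python
import numpy as np

def color_histogram(image, bins=8):
    """Concatenated per-channel histogram, normalized to sum to 1.
    image: H x W x 3 uint8 array."""
    feats = []
    for ch in range(3):
        hist, _ = np.histogram(image[..., ch], bins=bins, range=(0, 256))
        feats.append(hist)
    v = np.concatenate(feats).astype(float)
    return v / v.sum()

img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)  # dummy image
features = color_histogram(img)
print(features.shape)  # (24,): ready for an SVM or random forest
```

Color histograms discard spatial layout, so for buildings-vs-flowers they are usually combined with a texture or gradient descriptor such as HOG or LBP.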
Spectroscopy is said to be easier and cheaper for soil chemical property analysis. How well does it perform in mineralogical studies? Also, how well do the dataset calibration and validation tests yield relevant results through machine learning and artificial neural networks in this field?
I basically come from a non-programming background; I know moderate application of RStudio for PLSR and basic training-set and validation-set preparation.
Code for artificial neural networks
Does anyone have R code for statistical downscaling of GCMs using an artificial neural network? I need this R code for my study.
Any short introductory document from the image domain, please.
Good day,
My name is Philips Sanni. I am currently completing my MSc degree in Software Engineering and am searching for a university where I can study for a Ph.D. in a related field, most preferably in the area of Artificial Neural Networks.
If you are a professor in need of a doctoral student, kindly send the details of your research and how I can apply to your university.
Visualization of approximation function learned through ANN for a regression problem.
The ANN has 5 hidden layers with 20 neurons in each layer.
I have an ongoing project utilizing an ANN, and I want to know how to measure the accuracy in terms of a percentage. Thank you.
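For a classifier, percentage accuracy is simply correct predictions over total predictions times 100; for a regression ANN, R² or a percentage-error measure such as MAPE is more common. A small sketch with made-up predictions:

```python
import numpy as np

# Classification: fraction of correct labels, as a percentage
y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1])
accuracy_pct = 100.0 * np.mean(y_true == y_pred)
print(accuracy_pct)  # 5 of 6 correct

# Regression analogue: mean absolute percentage error (MAPE)
t = np.array([10.0, 20.0, 30.0])
p = np.array([9.0, 22.0, 33.0])
mape = 100.0 * np.mean(np.abs((t - p) / t))
print(100.0 - mape)  # a rough "percent accuracy" for regression
```

Note that "100 − MAPE" is only an informal convenience; for serious reporting, MAPE, RMSE, or R² are quoted directly.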
Dear colleagues,
I am trying to train a neural network. I normalized the data with the minimum and maximum:
normalize <- function(x) {
return ((x - min(x)) / (max(x) - min(x)))
}
maxmindf <- as.data.frame(lapply(mydata, normalize))
and the results:
results <- data.frame(actual = testset$prov, prediction = nn.results$net.result).
So I can see the actual and predicted values only in normalized form.
Could you tell me, please, how do I scale the real and predicted data back into the "unscaled" range?
P.S. The following
minvec <- sapply(mydata, min)
maxvec <- sapply(mydata, max)
denormalize <- function(x, minval, maxval) {
  x * (maxval - minval) + minval
}
does not work correctly in my case.
Thanks a lot for your answers
Suppose a smart meter is connected to the mainline of the network. Would it be wise to say that the data captured through this meter can be used for fault location in the sub-lateral branches of the line in the same network?
Attached is the figure. The SM is connected to the mainline, from where lateral branches go to loads, other sources, etc. Suppose that at t = 4, fault 3 occurs while the rest of the sections are healthy; what could be a possible approach to locate fault 3 if we have only one meter connected at the main bus (line)?

According to which article, study, or reference is 70% of the dataset usually used for training and 30% for testing in the learning process of neural networks?
In other words, who first raised this issue, and in which paper or book was it explained in detail?
I desperately need a reference for the above.
Dear colleagues,
I would like to ask anybody who works with neural networks to check my loop for the test sample.
I have 4 sequences (with the goal of predicting prov; monthly data, 22 values in each sequence), and I would like to construct a forecast for each next month using a training sample of 5 months.
That means I need to shift each time by one month with 5 elements:
train<-1:5, train<-2:6, train<-3:7, ..., train<-17:21. So I should get 17 columns as the output.
The loop is:
library(neuralnet)

prov <- c(25,22,47,70,59,49,29,40,49,2,6,50,84,33,25,67,89,3,4,7,8,2)
temp <- c(22,23,23,23,25,29,20,27,22,23,23,23,25,29,20,27,20,30,35,50,52,20)
soil <- c(676,589,536,499,429,368,370,387,400,423,676,589,536,499,429,368,370,387,400,423,600,605)
rain <- c(7,8,2,8,6,5,4,9,7,8,2,8,6,5,4,9,5,6,9,2,3,4)
mydata <- data.frame(prov, temp, soil, rain)

# min-max normalization; keep the column minima/maxima for back-scaling
normalize <- function(x) (x - min(x)) / (max(x) - min(x))
denormalize <- function(x, minval, maxval) x * (maxval - minval) + minval
minvec <- sapply(mydata, min)
maxvec <- sapply(mydata, max)
maxmindf <- as.data.frame(lapply(mydata, normalize))

window <- 5                         # training sample size (5 months)
d <- nrow(maxmindf)
results <- NULL
for (i in 1:(d - window)) {         # i = 1..17: train on i:(i+4), predict i+5
  trainset <- maxmindf[i:(i + window - 1), ]
  testset  <- maxmindf[i + window, , drop = FALSE]
  nn <- neuralnet(prov ~ temp + soil + rain, data = trainset,
                  hidden = c(3, 2), linear.output = FALSE, threshold = 0.01)
  pred <- compute(nn, testset[, c("temp", "soil", "rain")])$net.result
  results <- rbind(results, data.frame(actual = testset$prov,
                                       prediction = as.numeric(pred)))
}

# back-transform actual and predicted values to the original scale of prov
results$actual_orig     <- denormalize(results$actual,     minvec["prov"], maxvec["prov"])
results$prediction_orig <- denormalize(results$prediction, minvec["prov"], maxvec["prov"])
Could you tell me, please, what I should put in trainset and testset (using the loop) and how to display all predictions so that the results come out shifted by one with a training sample of 5?
I am very grateful for your answers.
I'm researching autoencoders and their applications in machine learning, but I have a fundamental question.
As we all know, there are various types of autoencoders, such as the stacked autoencoder, sparse autoencoder, denoising autoencoder, adversarial autoencoder, convolutional autoencoder, semi-autoencoder, dual autoencoder, contractive autoencoder, and others that improve on what we had before. Autoencoders are also known to be used in graph networks (GN), recommender systems (RS), natural language processing (NLP), and computer vision (CV). This is my main concern:
Because the input and structure of each of these machine learning problems are different, which version of the autoencoder is appropriate for which machine learning problem?
I used MATLAB functions to train a NARX model. When I use Levenberg-Marquardt as the training algorithm, the results are better than with Bayesian regularization, and scaled conjugate gradient is the worst. I need to know why LM performs better than BR, although BR uses LM optimization to update the network weights and biases.
I'm interested in comparing multiple linear regression and artificial neural networks for predicting the production potential of animals using certain predictor variables. However, I have obtained negative R-squared values for certain model architectures. Please explain the reason for negative prediction accuracy, or R-squared, values.
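A negative R² is not a bug: since R² = 1 − SS_res/SS_tot, it goes negative whenever the model's squared error exceeds that of a constant predictor at the mean, which happens easily with small samples or an over-parameterized network. A quick illustration with made-up numbers:

```python
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_bad = np.array([9.0, 3.0, 11.0, 2.0])   # worse than predicting the mean (6.0)

print(r2_score(y_true, y_bad))  # negative, because SS_res > SS_tot
```

Here SS_tot = 20 while SS_res = 105, so R² = 1 − 105/20 = −4.25; the "model" is far worse than simply guessing the mean.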
What is the new development in the LSTM model that also combines layers with LSTM?
Hi,
I would like to ask where I can get a dataset for an Artificial Neural Network I would like to build for the prediction of biogas yield and methane content. Ideally, I will require at least 500 cases.
I have searched many articles on databases, but so far I have been able to download only 1 dataset from the article (Modeling of biogas production from food, fruits and vegetables wastes using artificial neural network (ANN), Goncalves, et al, 2021)
The input data for my ANN will be OLR (g VS/l.d), HRT, temperature, pH, and reactor volume, and the expected outputs are biogas yield (L/g VS) and methane content (%).
Can someone please share some information as to where I can get a dataset for my project or if anyone would like to share the data from their experiments, it will be much appreciated.
Regards
W
Does anyone have experience with the software NeuralDesigner?
Can anyone recommend this software, or recommend another one?
I have to perform classification and prediction tasks with different types of data (position data, time values, nominal data, ...).
Thanks in advance!
I intend to predict a stock exchange index but am confused between RBF and MLP networks, so I would like to know which type is more suitable for this purpose.
In neural networks, I understand that the activation function at the hidden layer maps the inputs into a specific range like (0, 1) or (-1, 1) and makes it possible to solve nonlinear problems. But what does it do in the output layer? Can I get a simple explanation? I am not a specialist.
I'm a Finance and Banking student doing research on predicting stock prices using neural networks. I tried to learn the MATLAB software but had difficulty using it because it contains many features, like NAR and NARX, which I didn't understand. So please guide me to the best software among those above, and explain the differences between them.
Dear all,
I have prepared a dataset and am seeking to apply it to a multi-input multi-output algorithm. I would appreciate it if you could let me know where I can download such algorithms; I have not come across any particular website that offers these types of ANN algorithms.
FYI, I am trying to map an array of inputs onto an array of targets. So please help me with my query as stated above.
Thanks
I have implemented a quantized neural network for classification purposes. I wonder whether there is any inverse-quantization approach, like an inverse transform, for the prediction data.
I appreciate your answers.
The conventional PID controller does not give me satisfactory closed-loop time-response characteristics for my plant. Hence, I am looking for a novel artificial neural network algorithm that can optimize the tuned PID gains. I think any variant of Rprop implemented in MATLAB/Simulink would not be a bad starting point. I also need help with such an algorithm in MATLAB/Simulink files to fast-track this, so that I can integrate it into a set-point feedback scheme for simulation.
Hi,
I am using a CNN for flood susceptibility mapping and would like to find the feature contributions using SHAP. I used the test dataset to calculate shap_values, but I am getting the following error:
"ValueError: shape mismatch: objects cannot be broadcast to a single shape"
and here is my code.
e = shap.DeepExplainer(model, X_test)
shap_values = e.shap_values(X_test)
shap.summary_plot(shap_values[0], X_test, plot_type="bar")
X_test is an array with dimensions (787, 23, 23, 11), and shap_values[0] is also an array with the same dimensions:
787 images, each 23 × 23, with 11 bands.
I would be grateful if someone can help.
Artificial Neural Network (ANN)
In psychology, attention is the cognitive process of selectively concentrating on one or a few things while ignoring others. A neural network is considered to be an effort to mimic human brain actions in a simplified manner. Attention Mechanism is also an attempt to implement the same action of selectively concentrating on a few relevant things while ignoring others in deep neural networks. So, is the attention mechanism able to make an artificial neural network focus on a specific target?
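In effect, yes: the attention mechanism computes a softmax weighting over input positions, so the network learns to concentrate its weight on the relevant ones. A minimal NumPy sketch of scaled dot-product attention for a single query (all numbers are illustrative):

```python
import numpy as np

def attention(q, K, V):
    # Scaled dot-product attention for one query vector q.
    # K, V: (n_positions, d) key and value matrices.
    scores = K @ q / np.sqrt(len(q))   # similarity of the query to each key
    w = np.exp(scores - scores.max())
    w = w / w.sum()                    # softmax: weights are positive, sum to 1
    return w, w @ V                    # weighted combination of the values

K = np.array([[0.1, 0.2, 0.0, 0.1],
              [0.0, -0.3, 0.2, 0.0],
              [2.0, 0.0, 0.0, 0.0],
              [0.2, 0.1, -0.1, 0.3]])
V = np.arange(16.0).reshape(4, 4)
q = np.array([2.0, 0.0, 0.0, 0.0])     # query most similar to key 2
weights, context = attention(q, K, V)
print(weights.argmax())                # 2: attention concentrates there
```

Because the weights are learned (the keys and queries come from trainable projections), the network itself decides what to "focus" on for a given target.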
I have published the 20 BREAKTHROUGH articles below as Elsevier SSRN preprints:
1) AI++ : Artificial Intelligence Plus Plus
2) Artificial Excellence - A New Branch of Artificial Intelligence
and 18 others.
You can find the 20 articles at the Elsevier SSRN preprint link below:
Is publishing 20 BREAKTHROUGH articles as Elsevier SSRN preprints equivalent to publishing 20 BREAKTHROUGH articles in Elsevier journals?
Please kindly explain your answer.
U-Net neural networks consist of a contracting path and an expansive path, an encoder-decoder structure that resembles the generators used in some GANs.
In this regard, can we consider U-Net part of the Generative Adversarial Network (GAN) architecture? Can GANs be used for segmenting pictures, similarly to U-Net?
Can anyone please recommend to me some high-quality articles on utilizing artificial neural networks (ANN) to model PV/T systems? Please also recommend me some MATLAB and Python code/equations for calculating the exit temperature.
I need to calculate the accuracy, precision, recall, specificity, and F1 score for my Mask R-CNN model. Hence, I hope to calculate the confusion matrix for the whole dataset first to get the TP, FP, TN, and FN values. But I noticed that almost all the available solutions for calculating the confusion matrix only output the TP, FP, and FN values. So how can I calculate metrics like accuracy and specificity, which include TN as a parameter? Should I consider TN to be 0 during the calculations?
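Once the TP, FP, FN (and, by some convention, TN) counts are fixed, all five metrics follow from their definitions. Note that for detection and instance segmentation, TN is often ill-defined at the object level, so accuracy and specificity only make sense under an explicit TN convention (e.g. counted per image) rather than simply setting TN = 0. A plain-Python sketch with illustrative counts:

```python
def detection_metrics(tp, fp, fn, tn=0):
    # tn must come from an explicit convention (e.g. per-image counting);
    # with tn = 0, specificity is trivially 0 and accuracy is biased low.
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return {"precision": precision, "recall": recall, "f1": f1,
            "accuracy": accuracy, "specificity": specificity}

m = detection_metrics(tp=80, fp=10, fn=20, tn=5)  # illustrative counts
```

In practice, detection papers report precision, recall, F1, and mAP, and skip accuracy/specificity precisely because TN is ambiguous for object-level evaluation.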
I think that Generative Adversarial Networks can be used as a data farming means. What do you know about such an approach? Can you give another example of a means for data farming?