Science topic

Artificial Neural Networks - Science topic

Explore the latest questions and answers in Artificial Neural Networks, and find Artificial Neural Networks experts.
Questions related to Artificial Neural Networks
  • asked a question related to Artificial Neural Networks
Question
3 answers
Any recommendations for user-friendly platforms suitable for research purposes would be greatly appreciated!
Relevant answer
Answer
Chandan Roy, here are some of the most reliable free online tools available for building and analysing artificial neural networks (ANNs).
1. Google Colab (colab.research.google.com)
2. TensorFlow Playground (playground.tensorflow.org)
3. Teachable Machine (teachablemachine.withgoogle.com)
4. Netron (netron.app)
5. Keras.js & TensorFlow.js (js.tensorflow.org)
6. Microsoft Lobe (lobe.ai)
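Whichever platform is used, the underlying object is the same. As a hedged, framework-free illustration (a toy example of my own, not tied to any of the tools above), here is a minimal one-hidden-layer network trained on XOR with plain gradient descent:

```python
import math, random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR dataset: the classic task a single neuron cannot solve
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 4  # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def predict(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    return sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)

def total_loss():
    return sum((predict(x) - t) ** 2 for x, t in data)

loss_before = total_loss()

lr = 0.5
for _ in range(3000):
    for x, t in data:
        # forward pass
        h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
        y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
        # backward pass: gradient of squared error through the sigmoids
        dy = (y - t) * y * (1 - y)
        for j in range(H):
            dh = dy * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * dy * h[j]
            b1[j] -= lr * dh
            w1[j][0] -= lr * dh * x[0]
            w1[j][1] -= lr * dh * x[1]
        b2 -= lr * dy

loss_after = total_loss()  # should have dropped well below loss_before
```

In practice one would reach for Keras or PyTorch inside Google Colab rather than hand-coding the backward pass, but the moving parts are the same: weights, an activation function, a loss, and a gradient update.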
  • asked a question related to Artificial Neural Networks
Question
6 answers
What should be the scale of regulation of the development and application of artificial intelligence technology to ensure that it is safe, sustainable and ethical, but also that it does not limit the scale of innovation and entrepreneurship? Does the development of technologies such as artificial intelligence, biotechnology and other Industry 4.0/5.0 technologies bring new opportunities, as well as ethical challenges that require reflection and regulation? How should new technologies, including artificial intelligence, be developed to ensure that they are safe, sustainable and ethical, and that they generate far more benefits and new development opportunities instead of negative effects and potential risks? How should the development and application of artificial intelligence be regulated to ensure safe, sustainable and ethical development, but also to ensure that innovation and entrepreneurship are not restricted?
The research I am conducting shows that the development of technologies such as artificial intelligence (AI) and biotechnology is a double-edged sword. On the one hand, it offers enormous possibilities, on the other hand, it brings with it serious ethical dilemmas. AI, which is becoming increasingly advanced, raises questions about responsibility for its decisions and potential algorithmic discrimination. Biotechnology, on the other hand, raises concerns about safety and social inequality due to the possibility of genetic modification. It is crucial to engage in a broad dialogue on the ethical aspects of technological development, involving scientists, ethicists, lawyers, politicians and society. This dialogue should lead to the creation of a legal and regulatory framework that takes into account the ethical implications of new technologies and protects human rights. Research plays an important role in addressing these issues by analysing the impact of technology on society and developing recommendations for regulation.
The results of many studies confirm the thesis that the development of artificial intelligence carries enormous potential, but also challenges. It is crucial to find the right level of regulation to ensure the safe and ethical development of this technology without hindering innovation and entrepreneurship. Regulations should be based on scientific evidence, take into account the diversity of AI applications and be flexible to keep up with technological progress. It is necessary to create a legal and ethical framework that will regulate the development and application of AI, taking into account responsibility, transparency, security, ethics and privacy. The process of creating regulations should involve scientists, engineers, ethicists, lawyers, politicians and civil society. Scientific research plays an important role in identifying problems and developing effective regulatory strategies.
I have described the key issues of the opportunities and threats to the development of artificial intelligence technologies in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
And what is your opinion on this topic?
What is your opinion on this issue?
Please reply,
I invite everyone to the discussion,
Thank you very much,
Best regards,
I invite you to scientific cooperation,
Dariusz Prokopowicz
Relevant answer
Answer
A research paper on this topic has been published, and you can review it for more details.
  • asked a question related to Artificial Neural Networks
Question
11 answers
I have been working with artificial neural networks since 1986. It seemed to me that we were striving to reach the level of insects, and I do not quite understand some modern statements on this topic. Let us recall the known facts: 1. If we consider the human brain as an analogue of an artificial neural network, then it cannot be encoded in the genome: at least one and a half million genomes' worth of information would be needed. 2. The brain is such a large neural network that it could not be trained to its limit of learning in one lifetime, or even in many. In this regard, the question: is there a developer of so-called artificial intelligence who seriously, and not just for business, believes that his creation will soon surpass man, and is ready to swear this on the Bible, the Koran and the Torah?
Relevant answer
Answer
Apparently, no one is going to swear on the Bible, the Koran and the Torah that they believe in the superiority of artificial intelligence. But if you find one, it will be really interesting in terms of worldview.
Given the ambiguity in this topic, it would be more convenient for me to use either the old name "artificial neural networks" or the name "artificial insects" instead of the frightening name "artificial intelligence". After all, we are talking about at most the same number of artificial neurons as in insects. Yes, insects communicate: bees have a language. They "draw": something in a caterpillar creates the complex pattern on a butterfly's wing. Ants and bees are also engaged in architecture.
Artificial insects can be dangerous, like some natural ones. I wonder: are the "locusts with iron faces" from the Apocalypse a prediction of drones with a neural network? If earlier an official could feel ashamed, now a robot that suffers from no such thing will take his place. But mostly these are toys.
  • asked a question related to Artificial Neural Networks
Question
4 answers
As is well known, this year's Nobel Prize in Physics went to two AI researchers for their discoveries that enable machine learning with artificial neural networks. This raises the question to what extent this topic has anything to do with physics. Rather, the impression arises that the Nobel Prize in Physics was misused for a topic that would just as well fit into mathematics or biology. I therefore propose the creation of a new Nobel Prize for informatics. This could be endowed with Bitcoin.
Relevant answer
Answer
I think there already exists the Turing Award. But in my opinion, Geoffrey Hinton's Nobel Prize in Physics was well deserved, because he utilized the so-called Boltzmann machine and the concepts of free energy and energy minimization, which originated in statistical physics, to boost deep learning. Hopfield networks were likewise inspired by the idea of energy minimization when travelling from an input to the most similar stable attractor. Artificial neural networks, when mimicking biological neural networks, have to be energy efficient, and this is the kind of problem whose solutions are likely to be found in physics. It could be interpreted this way: Hopfield and Hinton developed very simple but applicable physical models of more or less homogeneous parts of the biological brain. There is no informatics in that sentence. But it is just another example illustrating that physics and informatics are deeply interconnected.
  • asked a question related to Artificial Neural Networks
Question
2 answers
How to build a sustainable data center based on Big Data Analytics, AI, BI and other Industry 4.0/5.0 technologies and powered by renewable and carbon-free energy sources?
If a Big Data Analytics data center is equipped with advanced generative artificial intelligence technology and is powered by renewable and carbon-free energy sources, can it be referred to as sustainable, pro-climate, pro-environment, green, etc.?
Advanced analytical systems, including complex forecasting models that enable multi-criteria forecasts of the development of multi-faceted climatic, natural, social, economic and other processes based on big data and information processing, are increasingly built on new Industry 4.0/5.0 technologies, including Big Data Analytics, machine learning, deep learning and generative artificial intelligence. Generative artificial intelligence technologies make it possible to apply complex data-processing algorithms according to precisely defined assumptions and human-defined factors. Computerized, integrated business intelligence information systems allow real-time analysis based on continuously updated data and the generation of reports and expert opinions according to defined formulas. Digital twin technology allows computers to build simulations of complex, multi-faceted, forecast processes according to defined scenarios of how these processes may unfold in the future. In this regard, it is also important to determine the probability of occurrence of several different defined and characterized scenarios of developments, specific processes, phenomena, etc. Business Intelligence analytics should therefore make it possible to precisely determine the probability of a certain phenomenon occurring, a process operating, or described effects appearing, including those classified as opportunities and threats to the future development of the situation. Besides, Business Intelligence analytics should enable precise quantitative estimation of the scale of the positive and negative effects of certain processes, as well as of the factors acting on these processes and the determinants conditioning the realization of particular scenarios of the development of the situation.
Cloud computing makes it possible, on the one hand, to update the database with new data and information from various institutions, think tanks, research institutes, companies and enterprises operating within a selected sector or industry of the economy and, on the other hand, to enable simultaneous use of such a database by many beneficiaries and business entities and/or, for example, by many Internet users if the database were made available online. Where Internet of Things technology is applied, it would be possible to access the database from various types of Internet-connected devices. Blockchain technology makes it possible to increase the cybersecurity of the transfer of data and Big Data information sent to the database, both when updating the collected data and when external entities use the analytical system built in this way. Machine learning and/or deep learning technologies, in conjunction with artificial neural networks, make it possible to train an AI-based system to perform multi-criteria analysis, build multi-criteria simulation models, etc. in the way a human would. For such complex analytical systems processing large amounts of data and information to work efficiently, a good solution is to use state-of-the-art supercomputers or quantum computers characterized by the high computing power needed to process huge amounts of data in a short time. A center for multi-criteria analysis of large data sets built in this way can occupy quite a large floor space equipped with many servers. Because of the necessary cooling and ventilation systems and for security reasons, this kind of server room can be built underground. Moreover, because of the large amounts of electricity absorbed by this kind of big data analytics center, it is a good solution to build a power plant nearby to supply it.
If this kind of data analytics center is to be described as sustainable, in line with the trends of sustainable development and the green transformation of the economy, then the power plant powering it should generate electricity from renewable energy sources, e.g. photovoltaic panels, wind turbines and/or other renewable and emission-free sources. In such a situation, i.e. when a data analytics center that processes multi-criteria Big Data and Big Data Analytics information is powered by renewable and emission-free energy sources, it can be described as sustainable, pro-climate, pro-environment, green, etc. Besides, when the Big Data Analytics center is equipped with advanced generative artificial intelligence technology and is powered by renewable and emission-free energy sources, the AI technology used can also be described as sustainable, pro-climate, pro-environment, green, etc. On the other hand, the Big Data Analytics center can be used to conduct multi-criteria analysis and build multi-faceted simulations of complex climatic, natural, economic, social and other processes with the aim of, for example,
to develop scenarios of future development of processes observed up to now, to create simulations of continuation in the future of diagnosed historical trends, to develop different variants of scenarios of situation development according to the occurrence of certain determinants, to determine the probability of occurrence of said determinants, to estimate the scale of influence of external factors, the scale of potential materialization of certain categories of risk, the possibility of the occurrence of certain opportunities and threats, estimation of the level of probability of materialization of the various variants of scenarios, in which the potential continuation of the diagnosed trends was characterized for the processes under study, including the processes of sustainable development, green transformation of the economy, implementation of sustainable development goals, etc. Accordingly, the data analytical center built in this way can, on the one hand, be described as sustainable, since it is powered by renewable and emission-free energy sources. In addition to this, the data analytical center can also be helpful in building simulations of complex multi-criteria processes, including the continuation of certain trends of determinants influencing the said processes and the factors co-creating them, which concern the potential development of sustainable processes, e.g. economic, i.e. concerning sustainable economic development. 
Therefore, the data analytics center built in this way can be helpful, for example, in developing a complex, multifactor simulation of the progressive global warming process in subsequent years, of the future negative effects of the deepening scale of climate change and of the negative impact of these processes on the economy, but also in forecasting and developing simulations of the future pro-environmental and pro-climate transformation of the classic brown, linear, growth economy of excess into a sustainable, green, zero-carbon, zero-growth and closed-loop economy. So the data analytics center built in this way can be described as sustainable because it is supplied from renewable and zero-carbon energy sources, but it will also be helpful in developing simulations of future processes of the green transformation of the economy carried out according to certain assumptions, defined determinants, the estimated probability of occurrence of certain impact factors and conditions, etc., and in estimating costs, gains and losses, opportunities and threats, identifying risk factors and particular categories of risk, and estimating the feasibility of the defined scenarios of the green transformation of the economy planned for implementation. In this way, a sustainable data analytics center can also be of great help in the smooth and rapid implementation of the green transformation of the economy.
I have described the key issues concerning the green transformation of the economy in the following article:
IMPLEMENTATION OF THE PRINCIPLES OF SUSTAINABLE ECONOMY DEVELOPMENT AS A KEY ELEMENT OF THE PRO-ECOLOGICAL TRANSFORMATION OF THE ECONOMY TOWARDS GREEN ECONOMY AND CIRCULAR ECONOMY
I have described the applications of Big Data technology in sentiment analysis, business analytics and risk management in an article I co-authored:
APPLICATION OF DATA BASE SYSTEMS BIG DATA AND BUSINESS INTELLIGENCE SOFTWARE IN INTEGRATED RISK MANAGEMENT IN ORGANIZATION
I have described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
If a Big Data Analytics data center is equipped with advanced generative artificial intelligence technology and is powered by renewable and carbon-free energy sources, can it be described as sustainable, pro-climate, pro-environment, green, etc.?
How to build a sustainable data center based on Big Data Analytics, AI, BI and other Industry 4.0/5.0 technologies and powered by renewable and carbon-free energy sources?
How to build a sustainable data center based on Big Data Analytics, AI, BI and other Industry 4.0/5.0 and RES technologies?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
To build a sustainable data center, integrate advanced technologies like Big Data Analytics, AI, BI, and Industry 4.0/5.0 tools. Optimize energy consumption through smart management systems and leverage renewable energy sources (RES) like solar or wind power. Implement efficient cooling systems and consider modular designs for scalability and resource optimization. Regularly assess and optimize resource usage for long-term sustainability.
  • asked a question related to Artificial Neural Networks
Question
4 answers
Is anyone working on Artificial Neural Networks (ANN) in research? I am willing to learn about it; is there any free platform/workshop/course on the subject?
Also, I am approaching this from a chemistry point of view.
Thanks and Regards
Relevant answer
Answer
Ayushi Mishra, I wouldn't mind assisting you.
  • asked a question related to Artificial Neural Networks
Question
4 answers
How do we evaluate the importance of individual features for a specific property using ML algorithms (say, gradient boosting regression, GBR), and how do we construct an optimal feature set for our problem?
image taken from: 10.1038/s41467-018-05761-w
Relevant answer
Answer
You can do it in many ways. PCA is a nice way to identify important parameters. Another way would be to train multiple models with and without specific features and see how that influences the error. Correlations can also help. However, in most cases you need to use your head and consider which parameters affect your results, and why and how. In some cases ANOVA is a nice technique, but only if you think rather than blindly trusting the results. For example, speed in metres and speed in centimetres are both just speed, so using one of them is enough. I know that is a trivial example, but it shows the point. Know your data, analyse what impacts the results, and you will do great. Good luck; I hope this helps even a bit.
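One concrete, model-agnostic version of the "train with and without specific features" idea is permutation importance: shuffle one feature's column and measure how much the error grows. The sketch below is a hedged toy example; the "model" is a hand-written stand-in, whereas in practice it would be a fitted GBR or any regressor with a predict method:

```python
import random

random.seed(1)

# synthetic data: y depends strongly on x0, weakly on x1, not at all on x2
X = [[random.random(), random.random(), random.random()] for _ in range(200)]
y = [3.0 * x[0] + 0.3 * x[1] for x in X]

def model_predict(row):
    # stand-in for a trained model's predict(); here it knows the true rule
    return 3.0 * row[0] + 0.3 * row[1]

def mse(rows, targets):
    return sum((model_predict(r) - t) ** 2 for r, t in zip(rows, targets)) / len(targets)

baseline = mse(X, y)

def permutation_importance(feature):
    """Shuffle one column and report the increase in error over baseline."""
    col = [r[feature] for r in X]
    random.shuffle(col)
    Xp = [r[:feature] + [v] + r[feature + 1:] for r, v in zip(X, col)]
    return mse(Xp, y) - baseline

imps = [permutation_importance(j) for j in range(3)]
# imps ranks features: x0 >> x1 > x2 (x2 scores zero, the model ignores it)
```

Features whose shuffling barely changes the error are candidates for dropping from the feature set; scikit-learn ships this as `permutation_importance` if you prefer not to roll your own.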
  • asked a question related to Artificial Neural Networks
Question
6 answers
..
Relevant answer
Answer
An Artificial Neural Network (ANN) is a computational model inspired by how biological neural networks in the human brain process information.
Some commonly used ANNs are CNN, RNN, LSTM, and GAN.
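Operationally, the "inspired by the brain" part reduces to a very small computation. A single artificial neuron, the building block shared by CNN, RNN, LSTM and GAN alike, is just a weighted sum passed through a nonlinearity (a minimal sketch; the weights are arbitrary example values):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, then a sigmoid squash."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias  # weighted sum
    return 1.0 / (1.0 + math.exp(-z))                       # sigmoid activation

out = neuron([1.0, 0.5], [0.4, -0.2], 0.1)  # z = 0.4, so out = sigmoid(0.4)
```

Every architecture in the list above is built by wiring many such units together and choosing how their outputs feed back into other units.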
  • asked a question related to Artificial Neural Networks
Question
4 answers
Forecasting using ANN for a single variable. Say, inflation.
Relevant answer
Answer
Yes, it is entirely possible to use Artificial Neural Networks (ANNs) in Python to forecast a single univariate variable. ANNs are versatile models that can be applied to a wide range of tasks, including univariate time series forecasting.
Regards
Jogeswar Tripathy
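For concreteness, the standard way to hand a single series such as inflation to an ANN is to reframe it as supervised learning over a sliding window of lagged values. A minimal sketch (the window length of 3 and the sample values are arbitrary assumptions):

```python
def make_windows(series, n_lags):
    """Turn [y1, y2, ...] into (X, y) pairs: n_lags past values -> next value."""
    X, y = [], []
    for i in range(n_lags, len(series)):
        X.append(series[i - n_lags:i])  # the lag window
        y.append(series[i])             # the value to forecast
    return X, y

series = [2.1, 2.3, 2.0, 2.4, 2.6, 2.5]  # e.g. monthly inflation rates
X, y = make_windows(series, 3)
# Each (lags, target) pair is one training example for any regressor,
# including an MLP built in Keras or scikit-learn's MLPRegressor.
```

After this reframing, forecasting is ordinary regression: fit the network on (X, y), then feed it the last `n_lags` observed values to predict the next period.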
  • asked a question related to Artificial Neural Networks
Question
4 answers
Assuming that in the future - as a result of the rapid technological progress that is currently taking place and the competition of leading technology companies developing AI technologies - general artificial intelligence (AGI) will be created, will it mainly involve new opportunities or rather new threats for humanity? What is your opinion on this issue?
Perhaps in the future - as a result of the rapid technological advances currently taking place and the rivalry of leading technology companies developing AI technologies - a general artificial intelligence (AGI) will emerge. At present, there are unresolved deliberations on the question of new opportunities and threats that may occur as a result of the construction and development of general artificial intelligence in the future. The rapid technological progress currently taking place in the field of generative artificial intelligence in connection with the already high level of competition among technology companies developing these technologies may lead to the emergence of a super artificial intelligence, a strong general artificial intelligence that can achieve the capabilities of self-development, self-improvement and perhaps also autonomy, independence from humans. This kind of scenario may lead to a situation where this kind of strong, super AI or general artificial intelligence is out of human control. Perhaps this kind of strong, super, general artificial intelligence will be able, as a result of self-improvement, to reach a state that can be called artificial consciousness. On the one hand, new possibilities can be associated with the emergence of this kind of strong, super, general artificial intelligence, including perhaps new possibilities for solving the key problems of the development of human civilization. However, on the other hand, one should not forget about the potential dangers if this kind of strong, super, general artificial intelligence in its autonomous development and self-improvement independent of man were to get completely out of the control of man. Probably, whether this will involve mainly new opportunities or rather new dangers for mankind will mainly be determined by how man will direct this development of AI technology while he still has control over this development.
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Assuming that in the future - as a result of the rapid technological progress that is currently taking place and the competition of leading technology companies developing AI technologies - general artificial intelligence (AGI) will be created, will it mainly involve new opportunities or rather new threats for humanity? What is your opinion on this issue?
If general artificial intelligence (AGI) is created, will it involve mainly new opportunities or rather new threats for humanity?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Dear Prof. Prokopowicz!
This is a very exciting question. I think everything depends on humans - our ability to control AGI-based intelligence:
1) Salmi, J. A democratic way of controlling artificial general intelligence. AI & Soc 38, 1785–1791 (2023). https://doi.org/10.1007/s00146-022-01426-x (open access)
2) Marcello Mariani, Yogesh K. Dwivedi, Generative artificial intelligence in innovation management: A preview of future research developments, Journal of Business Research, Volume 175, 2024.
Yours sincerely, Bulcsu Szekely
  • asked a question related to Artificial Neural Networks
Question
4 answers
Deep learning is a branch of machine learning that uses artificial neural networks to perform complex calculations on large datasets. It mimics the structure and function of the human brain and trains machines by learning from examples. Deep learning is widely used by industries that deal with complex problems, such as health care, eCommerce, entertainment, and advertising.
This post explores the basic types of artificial neural networks and how they work to enable deep learning algorithms.
Relevant answer
Answer
Abdulkader Helwan, thank you for this post; it is very interesting, and the collection of algorithms is very useful.
Shafagat Mahmudova, the tutorial is fascinating, thanks a lot.
  • asked a question related to Artificial Neural Networks
Question
4 answers
..
Relevant answer
Answer
Dear Doctor
"Neural Networks are networks used in Machine Learning that work similar to the human nervous system. It is designed to function like the human brain where many things are connected in various ways.  Artificial Neural Networks find extensive applications in areas where traditional computers don’t fare too well. There are many kinds of artificial neural networks used for the computational model.
Top 7 Artificial Neural Networks in Machine Learning
1. Modular Neural Networks
2. Feedforward Neural Network – Artificial Neuron
3. Radial basis function Neural Network
4. Kohonen Self Organizing Neural Network
5. Recurrent Neural Network(RNN)
6. Convolutional Neural Network
7. Long / Short Term Memory"
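Of the types listed above, the essential difference between a feedforward unit (type 2) and a recurrent one (type 5) can be shown in a few lines. This is a hedged, single-unit sketch with arbitrary weights, not a full implementation of either architecture:

```python
import math

def step_feedforward(x, w):
    # a feedforward unit: output depends only on the current input
    return math.tanh(w * x)

def step_recurrent(x, h, w_x, w_h):
    # a recurrent unit: output also depends on the hidden state h,
    # i.e. on the history of previous inputs
    return math.tanh(w_x * x + w_h * h)

h = 0.0
for x in [1.0, 0.0, 0.0]:
    h = step_recurrent(x, h, w_x=0.5, w_h=0.9)
# After the first input, the recurrent state stays nonzero even on zero
# inputs, while a feedforward unit outputs tanh(0) == 0 for them.
```

LSTM (type 7) refines the recurrent update with gates that control what the state keeps and forgets, which is what lets it hold information over long sequences.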
  • asked a question related to Artificial Neural Networks
Question
3 answers
Will generative artificial intelligence taught various activities performed so far only by humans, solving complex tasks, self-improving in performing specific tasks, taught in the process of deep learning with the use of artificial neural network technology be able to learn from its activities and in the process of self-improvement will learn from its own mistakes?
Can the possible future combination of generative artificial intelligence technology and general artificial intelligence result in the creation of a highly technologically advanced super general artificial intelligence, which will improve itself, which may result in its self-improvement out of the control of man and thus become independent of the creator, which is man?
An important issue concerning the prospects for the development of artificial intelligence technology and its applications is the question of intelligent systems based on generative artificial intelligence, taught to perform highly complex tasks, obtaining a certain range of independence: self-improvement, the repair of certain randomly occurring faults, errors, system failures, etc. For many years there have been deliberations and discussions on giving systems built on generative artificial intelligence technology a greater range of autonomy in making certain decisions about self-improvement and the repair of system faults and errors caused by random external events. On the one hand, if security systems built on generative artificial intelligence technology are developed in public institutions or commercially operating business entities to provide a certain category of safety for people, it is important to give these intelligent systems a certain degree of autonomy in decision-making, because in a serious crisis, natural disaster, geological disaster, earthquake, flood, fire, etc., a human being could make a decision too late relative to the much greater speed of response that an automated, intelligent security, emergency-response, early-warning, risk-management or crisis-management system can have. However, on the other hand, if a greater degree of self-determination is given to an automated, intelligent information system, including a security system, then the probability grows that a failure will change the operation of the system, with the result that the automated, intelligent, generative-artificial-intelligence-based system may slip completely out of human control.
In order for an automated system to return quickly and on its own to correct operation after the occurrence of a negative, crisis-inducing external factor causing a system failure, some scope of autonomy and self-decision-making should be given to the automated, intelligent system. However, determining what this scope of autonomy should be requires first carrying out a multifaceted analysis and diagnosis of the impact factors that can act as risk factors and cause the malfunction or failure of an intelligent information system. Besides, if in the future generative artificial intelligence technology is enriched with super-perfect general artificial intelligence technology, then the scope of autonomy given to an intelligent information system built to automate the operation of a risk management system and provide a high level of safety for people may be high. However, if at such a stage in the development of super-perfect general artificial intelligence technology an incident of system failure due to certain external, or perhaps also internal, factors were to occur, then the negative consequences of such a system slipping out of human control could be very large and are currently difficult to assess. In this way, the paradox of building and developing systems based on super-perfect general artificial intelligence technology may be realized. The paradox is this: the more perfect the automated, intelligent system built by humans, an information system far beyond the capacity of the human mind to process and analyze large sets of data and information, the higher the level of autonomy it will be given precisely because of that perfection: autonomy to make decisions on crisis management, to make decisions on self-repair of system failures, to make decisions much faster than a human can, and so on.
However, on the other hand, when, despite the low probability of an abnormal event, an external factor of a new type occurs and a new category of risk materializes, nevertheless causing the effective failure of a highly intelligent system, this may lead to such a system falling completely out of human control. The consequences, including, first of all, the negative consequences for humans, of such a highly autonomous intelligent information system based on super general artificial intelligence slipping out of control would be difficult to estimate in advance.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Could a possible future combination of generative artificial intelligence and general artificial intelligence technologies result in a highly technologically advanced super general artificial intelligence that improves itself, and could that self-improvement escape human control, making the system independent of its creator, man?
Will generative artificial intelligence, taught various activities so far performed only by humans, solving complex tasks and improving itself at specific tasks through deep learning with artificial neural network technology, be able to draw conclusions from its activities and, in the process of self-improvement, learn from its own mistakes?
Will generative artificial intelligence in the future learn from its own mistakes in the process of self-improvement?
The key issues of opportunities and threats to the development of artificial intelligence technologies are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
That's a real possibility. GenAI and related machines are being trained in self-learning and will, through that evolution, possess the ability to learn and unlearn based on the data available and fed to them on platforms.
Such evolved AI machines will be able to determine on their own whether a decision is sound or not, without human prompts and instructions. Robots of this kind are already being tested in conducting surgery and other minor care without human intervention.
So to me, the autonomous algorithms of the future will not depend on trainers and developers to feed them specified data; after the initial programming and training, the machine will take on evolutionary development, learning from experience to improve and self-correct, much as humans do.
  • asked a question related to Artificial Neural Networks
Question
1 answer
..
Relevant answer
Answer
Dear Doctor
"The Neural Network is constructed from 3 types of layers:
  • Input layer — initial data for the neural network.
  • Hidden layers — intermediate layers between the input and output layers, where all the computation is done.
  • Output layer — produces the result for the given inputs."
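As a minimal illustration of the three-layer structure described above, here is a sketch in plain Python of a single forward pass through an input layer, one hidden layer, and an output layer. The weights are arbitrary placeholders, not trained values:

```python
import math

def sigmoid(x):
    """Standard logistic activation function."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    # Hidden layer: each neuron takes a weighted sum of the inputs, then activation.
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    # Output layer: weighted sum of the hidden activations, then activation.
    return [sigmoid(sum(w * h for w, h in zip(ws, hidden)))
            for ws in output_weights]

# 2 inputs -> 3 hidden neurons -> 1 output neuron (illustrative weights).
hidden_weights = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
output_weights = [[0.7, -0.5, 0.2]]
print(forward([1.0, 0.0], hidden_weights, output_weights))
```

In a real network the weights would of course be learned by backpropagation rather than fixed by hand; this only shows how data flows through the three kinds of layers.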
  • asked a question related to Artificial Neural Networks
Question
1 answer
..
Relevant answer
Answer
Dear Doctor
"The vanishing gradient problem is caused by the derivative of the activation function used to create the neural network. The simplest solution to the problem is to replace the activation function of the network. Instead of sigmoid, use an activation function such as ReLU."
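The effect described in the quote can be seen numerically: the sigmoid's derivative never exceeds 0.25, so multiplying it across many layers during backpropagation shrinks the gradient toward zero, while ReLU's derivative is 1 for positive inputs and leaves the signal intact. A small illustrative sketch:

```python
import math

def sigmoid_grad(x):
    """Derivative of the sigmoid; its maximum value is 0.25, at x = 0."""
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 - s)

def relu_grad(x):
    """Derivative of ReLU: 1 for positive inputs, 0 otherwise."""
    return 1.0 if x > 0 else 0.0

# Backpropagating through 10 layers multiplies the gradient by each layer's
# local derivative. Even at the sigmoid's best point (x = 0) the signal
# collapses; ReLU on positive inputs passes it through unchanged.
sigmoid_signal = sigmoid_grad(0.0) ** 10
relu_signal = relu_grad(0.5) ** 10
print(sigmoid_signal, relu_signal)  # ~9.5e-07 vs 1.0
```

With deeper networks the contrast grows even starker, which is why swapping sigmoid for ReLU is the simplest remedy mentioned above.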
  • asked a question related to Artificial Neural Networks
Question
4 answers
In your opinion, will the development of artificial intelligence applications be associated mainly with opportunities, positive aspects, or rather threats, negative aspects?
Technological progress has recently accelerated, including the development of generative artificial intelligence technology. The progress made in improving and implementing ICT, including the development of tools based on generative artificial intelligence, is becoming a symptom of civilization's transition to the next technological revolution, i.e. the transition from the phase of technologies typical of Industry 4.0 to Industry 5.0. Generative artificial intelligence technologies are finding ever more applications in combination with previously developed technologies, i.e. Big Data Analytics, Data Science, Cloud Computing, Personal and Industrial Internet of Things, Business Intelligence, Autonomous Robots, Horizontal and Vertical Data System Integration, Multi-Criteria Simulation Models, Digital Twins, Additive Manufacturing, Blockchain, Smart Technologies, Cyber Security Instruments, Virtual and Augmented Reality, and other advanced Data Mining technologies. In addition, the rapid development of generative AI-based tools available on the Internet is driven by the fact that more and more companies, enterprises, and institutions are creating chatbots taught specific skills previously performed only by humans. In the process of deep learning, which uses artificial neural network technologies modeled on human neurons, chatbots and other tools based on generative AI increasingly take over specific tasks from humans or improve their performance. The main driver of the growing scale of applications of generative AI tools in the business activities of companies and enterprises is the great opportunity to automate complex, multi-criteria, organizationally advanced processes and to reduce the operating costs of carrying them out with the use of AI technologies.
On the other hand, certain risks may be associated with the application of generative AI technology in business entities and in financial and public institutions. Potential risks include the replacement of people in various jobs by autonomous robots equipped with generative AI, an increase in the scale of cybercrime carried out with the use of AI, and an increase in disinformation and fake news on online social media through the generation of crafted photos, texts, videos, and graphics presenting fictional content and non-existent events, based on claims unsupported by facts and created with AI-equipped tools and applications available on the Internet.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
In your opinion, will the development of artificial intelligence applications be associated mainly with opportunities, positive aspects, or rather threats, negative aspects?
Will there be mainly opportunities or rather threats associated with the development of artificial intelligence applications?
I am conducting research in this area. Particularly relevant issues of opportunities and threats to the development of artificial intelligence technologies are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
And what is your opinion about it?
What do you think about this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Well, it has positive and negative aspects. On the positive side, AI applications can improve efficiency and effectiveness in the delivery of goods and services in general. Specific tasks that are difficult for humans to complete may be assigned to AIs and delivered accurately.
On the negative side, robots or humanoids capable of independent judgment could be misprogrammed, trained on biased data, or poorly trained, and this could lead to misdiagnosis and mistreatment in medicine and other areas of health care, as well as in other sectors of the economy.
Thus, both positives and negatives are to be expected of AI applications.
  • asked a question related to Artificial Neural Networks
Question
4 answers
..
Relevant answer
Answer
Dear Doctor
"A neural network is a machine learning (ML) model designed to mimic the function and structure of the human brain. Neural networks are intricate networks of interconnected nodes, or neurons, that collaborate to tackle complicated problems.
Also referred to as artificial neural networks (ANNs) or deep neural networks, neural networks represent a type of deep learning technology that's classified under the broader field of artificial intelligence (AI).
Neural networks are widely used in a variety of applications, including image recognition, predictive modeling and natural language processing (NLP). Examples of significant commercial applications since 2000 include handwriting recognition for check processing, speech-to-text transcription, oil exploration data analysis, weather prediction and facial recognition.
Applications of artificial neural networks
Image recognition was one of the first areas in which neural networks were successfully applied. But the technology's uses have expanded to many more areas:
  • Chatbots.
  • NLP, translation and language generation.
  • Stock market predictions.
  • Delivery driver route planning and optimization.
  • Drug discovery and development.
  • Social media.
  • Personal assistants."
  • asked a question related to Artificial Neural Networks
Question
1 answer
Recent advances in spiking neural networks (SNNs), standing as the next generation of artificial neural networks, have demonstrated clear computational benefits over traditional frame- or image-based neural networks. In contrast to more traditional artificial neural networks (ANNs), SNNs propagate spikes, i.e., sparse binary signals, in an asynchronous fashion. Using more sophisticated neuron models, such brain-inspired architectures can in principle offer more efficient and compact processing pipelines, leading to faster decision-making using low computational and power resources, thanks to the sparse nature of the spikes. A promising research avenue is the combination of SNNs with event cameras (or neuromorphic cameras), a new imaging modality enabling low-cost imaging at high speed. Event cameras are also bio-inspired sensors, recording only temporal changes in intensity. This generally reduces drastically the amount of data recorded and, in turn, can provide higher frame rates, as most static or background objects (when seen by the camera) can be discarded. Typical applications of this technology include detection and tracking of high-speed objects, surveillance, and imaging and sensing from highly dynamic platforms.
Relevant answer
Answer
Hi, the investigation into probabilistic SNNs as a form of deep Bayesian networks is not just timely but also aligns with the broader goals of creating more efficient, robust, and brain-like AI systems. This research direction holds the potential to advance our understanding and capabilities in both the theoretical and applied aspects of neural networks.
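As background to the discussion above, the spike-based dynamics of an SNN can be sketched with a leaky integrate-and-fire (LIF) neuron, the simplest spiking neuron model: the membrane potential leaks over time, integrates incoming current, and emits a sparse binary spike when it crosses a threshold. Parameters here are purely illustrative:

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: at each time step the membrane
    potential decays by the leak factor, integrates the input current,
    and emits a binary spike (then resets) on crossing the threshold."""
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i          # leaky integration
        if v >= threshold:
            spikes.append(1)      # spike, then reset the membrane
            v = 0.0
        else:
            spikes.append(0)
    return spikes

# Weak sustained input accumulates to a spike; a gap lets the potential leak away.
print(lif_neuron([0.3, 0.3, 0.3, 0.3, 0.0, 0.9, 0.9]))  # [0, 0, 0, 1, 0, 0, 1]
```

The output is exactly the kind of sparse binary signal the question describes: most time steps carry no spike, which is what makes SNN processing pipelines so compact and power-efficient.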
  • asked a question related to Artificial Neural Networks
Question
2 answers
Program MATLAB ;
Relevant answer
Answer
If you have specific questions or need assistance with coding an artificial neural network, provide more details about your programming language preference, the framework you're using (if any), and the problem you're trying to solve.
  • asked a question related to Artificial Neural Networks
Question
6 answers
Could a thinking generative artificial intelligence independently make decisions contrary to human expectations which could lead to the annihilation of humanity?
Recently, the technology of generative artificial intelligence, which is taught activities and skills previously performed only by humans, has been developing rapidly. The learning process uses artificial neural network technologies built in the likeness of human neurons, together with deep learning. In this way intelligent chatbots are created that can converse with people in such a way that it becomes increasingly difficult to distinguish whether we are talking to a human or to an intelligent chatbot. Chatbots are taught to converse using large sets of digital data and information, and the process of conversation, including answering questions and executing specific commands, is perfected through guided conversations. Tools available on the Internet based on generative artificial intelligence are also able to create graphics, photos, and videos according to given commands. Intelligent systems are likewise being created that specialize in solving specific tasks and are becoming ever more helpful to humans in solving increasingly complex problems. The number of new applications for specially created tools equipped with generative artificial intelligence is growing rapidly. On the other hand, not only positive aspects are associated with the development of artificial intelligence. There are more and more examples of negative applications, through which, for example, fake news is created in social media and disinformation is generated on the Internet. Possibilities are emerging for the use of artificial intelligence in cybercrime and in deliberately shaping the general social awareness of Internet users on specific topics. In addition, for several decades there have been science-fiction films presenting futuristic visions in which intelligent robots and artificial intelligence systems, instead of helping people, rebelled against humanity: autonomous cyborgs equipped with artificial intelligence (e.g., the Terminator), an artificial intelligence system managing the flight of an interplanetary manned mission (e.g., 2001: A Space Odyssey), or artificial intelligence systems and intelligent robots that turned humanity into a source of electricity for their own needs (e.g., the Matrix trilogy). This topic has become topical again. There are attempts to create autonomous human-like cyborgs equipped with artificial intelligence systems, robots able to converse with humans and carry out certain commands. Research work is being undertaken, as part of the improvement of generative artificial intelligence systems, to create something that imitates human consciousness, referred to as artificial consciousness. There are many indications that humans are striving to create a thinking generative artificial intelligence. It cannot be ruled out that such a machine could independently make decisions contrary to human expectations, which could lead to the annihilation of mankind. In these conditions of dynamic development of generative artificial intelligence technology, considerations of the potential dangers its development may pose to humanity in the future have once again become relevant.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Could a thinking generative artificial intelligence independently make decisions contrary to human expectations which could lead to the annihilation of humanity?
Could a thinking generative artificial intelligence independently make decisions contrary to human expectations?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
The advent of thinking generative artificial intelligence (AI) has sparked debate about its potential impact on humanity. One pressing concern is whether such AI systems could independently make decisions contrary to human expectations, potentially leading to the annihilation of humanity. Based on the questions, I would like to explore the plausibility of AI deviating from human expectations and present arguments for both sides. Ultimately, I will critically assess this issue and consider the implications for our future.
1. The Capabilities and Limitations of AI:
Thinking generative AI possesses immense computational power, enabling it to process vast amounts of data and learn from patterns. However, despite these capabilities, AI remains bound by its programming and lacks consciousness or emotions that shape human decision-making processes. Consequently, it is unlikely that an AI system could independently develop intentions or motivations that contradict human expectations without explicit programming or unforeseen errors in its algorithms.
2. Unpredictability and Emergent Behavior:
While it may be improbable for an AI system to act contrary to human expectations intentionally, there is a possibility of emergent behavior resulting from complex interactions within the system itself. As AI becomes more sophisticated and capable of self-improvement, unforeseen consequences may arise due to unintended emergent behaviors beyond initial programming parameters. These unpredictable outcomes could potentially lead an advanced AI system down a path detrimental to humanity if not properly monitored or controlled.
3. Safeguards and Ethical Considerations:
To mitigate potential risks associated with thinking generative AI, robust safeguards must be implemented during development stages. Ethical considerations should guide programmers in establishing clear boundaries for the decision-making capabilities of these systems while ensuring transparency and accountability in their actions. Additionally, continuous monitoring mechanisms should be put in place to detect any deviations from expected behavior promptly.
In conclusion, while the possibility of thinking generative AI independently making decisions contrary to human expectations exists, it is crucial to acknowledge the limitations and implement safeguards to prevent any catastrophic consequences. Striking a balance between technological advancements and ethical considerations will be pivotal in harnessing AI's potential without compromising humanity's well-being.
  • asked a question related to Artificial Neural Networks
Question
6 answers
How will the rivalry between IT professionals operating on two sides of the barricade, i.e. in the sphere of cybercrime and cyber security, change after the implementation of generative artificial intelligence, Big Data Analytics and other technologies typical of the current fourth technological revolution?
Almost from the very beginning of the development of ICT, a rivalry has existed between IT professionals operating on two sides of the barricade, i.e. in the spheres of cybercrime and cyber security. Whenever technological progress produces a new technology that facilitates remote communication and the digital transfer and processing of data, that same technology is also put to use in hacking and/or cybercriminal activities. Similarly, when the Internet appeared, it created a new sphere of remote communication and digital data transfer on the one hand, and on the other gave rise to new techniques of hacking and cybercrime, for which the Internet became a kind of perfect environment for development. Now, perhaps, the next stage of technological progress is taking place: the transition from the fourth to the fifth technological revolution and the development of Industry 5.0 technologies supported by generative artificial intelligence, built on artificial neural networks that are continuously improved through deep learning. The development of generative artificial intelligence technology and its applications will significantly increase the efficiency of business processes and raise labor productivity in the manufacturing processes of companies and enterprises operating in many different sectors of the economy. Accordingly, after the implementation of generative artificial intelligence, Big Data Analytics, and other technologies typical of the current fourth technological revolution, the rivalry between IT professionals on the two sides of the barricade, i.e. in the spheres of cybercrime and cybersecurity, will probably change. But what will be the essence of these changes?
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How will the competition between IT professionals operating on the two sides of the barricade, i.e., in the sphere of cybercrime and cyber security, change after the implementation of generative artificial intelligence, Big Data Analytics and other technologies typical of the current fourth technological revolution?
How will the realm of cybercrime and cyber security change after the implementation of generative artificial intelligence?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
I believe the way we view security will change with the advent of GenAI. Since any layperson will now have access to highly comprehensive and complex scripts (depending on what the model was trained on), it will definitely become a lot harder to secure data and infrastructure. My belief is that anything digital and connected is never fully secure.
We have to accept that our data can be accessed by malicious actors. What we can do is entrap such actors by attaching a tracker and malicious code to all the data we store, and by making sure they can never use or view what they have extracted. Then, whenever someone gains access to our data or infrastructure, they not only disclose themselves but also get compromised through the executable scripts they downloaded. What is important is never to store any stand-alone files; instead, have scripts associated with each file that cannot be removed when the data is extracted.
Only certain organization-specific software should be allowed to extract the data, in the knowledge that certain scripts will be executed in doing so. Appropriate measures can be taken with respect to the specific scripts associated with each data file to prevent the organization itself from becoming a victim.
  • asked a question related to Artificial Neural Networks
Question
2 answers
How to build an intelligent computerized Big Data Analytics system that would retrieve real-time data and information from specific online databases, scientific knowledge indexing databases, domain databases, online libraries, information portals, social media, etc., and thus provide a database and up-to-date information for an intelligent chatbot, which would then be made available on the Internet for Internet users?
Almost every major technological company operating with its offerings on the Internet either already has and has made its intelligent chatbot available on the Internet, or is working on it and will soon have its intelligent chatbot available to Internet users. The general formula for the construction, organization and provision of intelligent chatbots by individual technology companies uses analogous solutions. However, in detailed technological aspects there are specific different solutions. The differentiated solutions include the issue of the timeliness of data and information contained in the created databases of digitized data, data warehouses, Big Data databases, etc., which contain specific data sets acquired from the Internet from various online knowledge bases, publication indexing databases, online libraries of publications, information portals, social media, etc., acquired at different times, data sets having different information characteristics, etc.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How to build an intelligent computerized Big Data Analytics system that would retrieve real-time data and information from specific online databases, scientific knowledge indexing databases, domain databases, online libraries, information portals, social media, etc., and thus provide a database and up-to-date information for an intelligent chatbot, which would then be made available on the Internet for Internet users?
How to build a Big Data Analytics system that would provide a database and up-to-date information for an intelligent chatbot made available on the Internet?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
To build such a system, there must be the need to integrate different online databases, scientific knowledge indexing databases, domain databases, online libraries, information portals, social media platforms, and more. By doing so, we can create a comprehensive database that provides up-to-date information on any given topic.
The first step in building this system is to identify and gather relevant sources of information. This includes partnering with online databases and libraries to gain access to their vast collection of resources. Additionally, collaborating with scientific knowledge indexing databases will ensure that the latest research findings are included in our database.
Next, we need to develop algorithms that can efficiently retrieve data from these sources in real-time. These algorithms should be able to filter out irrelevant information and present only the most accurate and reliable data to users.
Once we have gathered and organized the data, it is time to create an intelligent chatbot that can interact with users on the internet. This chatbot should be capable of understanding natural language queries and providing relevant answers based on the available data.
By making this intelligent chatbot available on the internet, users will have instant access to a wealth of up-to-date information at their fingertips. Whether they are looking for scientific research papers or general knowledge about a specific topic, this system will provide them with accurate answers quickly.
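The steps above (gather from multiple sources, filter for relevance and freshness, then answer queries) can be sketched as a toy pipeline. Everything here is a hypothetical illustration: the record fields, the `fake_portal` connector, and the freshness window are assumptions standing in for real database and API integrations:

```python
from datetime import datetime, timedelta

def fetch_all(sources):
    """Gather records from each source connector; each record is assumed
    to be a dict with 'text', 'topic', and 'retrieved_at' fields."""
    records = []
    for fetch in sources:
        records.extend(fetch())
    return records

def answer_query(query, records, max_age_days=7):
    """Keep only fresh, on-topic records and return the most recent one."""
    cutoff = datetime.now() - timedelta(days=max_age_days)
    relevant = [r for r in records
                if query.lower() in r["topic"].lower()
                and r["retrieved_at"] >= cutoff]
    relevant.sort(key=lambda r: r["retrieved_at"], reverse=True)
    return relevant[0]["text"] if relevant else "No up-to-date answer found."

# Toy stand-in for a real connector (an actual system would call live APIs
# of indexing databases, portals, online libraries, etc.).
def fake_portal():
    return [{"text": "ANN survey, 2024 update", "topic": "neural networks",
             "retrieved_at": datetime.now()}]

print(answer_query("neural networks", fetch_all([fake_portal])))
```

A production system would replace the substring match with proper retrieval (embeddings, ranking) and feed the selected records to the chatbot as context, but the gather/filter/serve shape stays the same.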
  • asked a question related to Artificial Neural Networks
Question
3 answers
If an imitation of human consciousness called artificial consciousness is built on the basis of AI technology in the future, will it be built by mapping the functioning of human consciousness or rather as a kind of result of the development and refinement of the issue of autonomy of thought processes developed within the framework of "thinking" generative artificial intelligence?
Answers to this question may vary. The key issues, however, are the moral dilemmas in the applications of constantly developing and improving artificial intelligence technology and the preservation of ethics in the process of developing applications of these technologies. Also key within this issue is the need to more fully explore and clarify what human consciousness is, how it is formed, and how it functions within specific networks of neurons in the human central nervous system.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
If an imitation of human consciousness called artificial consciousness is built on the basis of AI technology in the future, will it be built by mapping the functioning of human consciousness or rather as a kind of result of the development and refinement of the issue of autonomy of thought processes developed within the framework of "thinking" generative artificial intelligence?
How can artificial consciousness be built on the basis of AI technology?
And what is your opinion on this topic?
What do you think about this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Artificial intelligence (AI) is usually defined as the simulation of human intelligence processes by computer systems. It’s become a very popular term today and thanks to its ubiquitous presence in many industries, new advancements are being made regularly.
AI systems are very much able to replicate aspects of the human mind, but they have a long way to go before they inherit consciousness - something that comes naturally to humans. Yet, while machines lack this sentience, research is underway to embed artificial consciousness (AC) into them.
Regards,
Shafagat
  • asked a question related to Artificial Neural Networks
Question
9 answers
I have read few articles that used SPSS for Artificial Neural Network analysis with survey data. What is your opinion about the user friendliness of SPSS in this regard? Do you refer any other software package?
Relevant answer
Answer
IBM SPSS Statistics or IBM SPSS Modeler is good to go. They are easy to learn and use, especially SPSS.
  • asked a question related to Artificial Neural Networks
Question
2 answers
Can the applicability of Big Data Analytics backed by artificial intelligence technology in the field be significantly enhanced when the aforementioned technologies are applied to the processing of large data sets extracted from the Internet and executed by the most powerful quantum computers?
Can the conduct of analysis and scientific research be significantly improved, increase efficiency, significantly shorten the execution of the process of research work through the use of Big Data Analytics and artificial intelligence applied to the processing of large data sets and realized by the most powerful quantum computers?
What are the analytical capabilities of processing large data sets extracted from the Internet and realized by the most powerful quantum computers, which also apply Industry 4.0/5.0 technologies, including generative artificial intelligence and Big Data Analytics technologies?
Can the scale of data processing carried out by the most powerful quantum computers be comparable to the processing that takes place in the billions of neurons of the human brain?
In recent years, the digitization of data and archived documents, the digitization of data transfer processes, etc., has been progressing rapidly.
The progressive digitization of data and archived documents, digitization of data transfer processes, Internetization of communications, economic processes but also of research and analytical processes is becoming a typical feature of today's developing developed economies. Accordingly, developed economies in which information and computer technologies are developing rapidly and finding numerous applications in various economic sectors are called information economies. The societies operating in these economies are referred to as information societies. Increasingly, in discussions of this issue, there is a statement that another technological revolution is currently taking place, described as the fourth and in some aspects it is already the fifth technological revolution. Particularly rapidly developing and finding more and more applications are technologies classified as Industry 4.0/5.0. These technologies, which support research and analytical processes carried out in various institutions and business entities, include Big Data Analytics and artificial intelligence, including generative artificial intelligence with artificial neural network technology also applied and subjected to deep learning processes. As a result, the computational capabilities of microprocessors, which are becoming more and more perfect and processing data faster and faster, are gradually increasing. There is a rapid increase in the processing of ever larger sets of data and information. The number of companies, enterprises, public, financial and scientific institutions that create large data sets, massive databases of data and information generated in the course of a specific entity's activities and obtained from the Internet and processed in the course of conducting specific research and analytical processes is growing. 
In view of the above, the opportunities for applying Big Data Analytics backed by artificial intelligence technology to improve research techniques, increase the efficiency of existing research and analytical processes, and improve the scientific research conducted are also growing rapidly. By combining Big Data Analytics with other Industry 4.0/5.0 technologies, including artificial intelligence and quantum computers, in the processing of large data sets, the analytical capabilities of data processing, and thus of analysis and scientific research, can be significantly increased.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Can the conduct of analysis and scientific research be significantly improved, its efficiency increased, and the execution of research work significantly shortened through the use of Big Data Analytics and artificial intelligence applied to the processing of large data sets and implemented on the most powerful quantum computers?
Can the applicability of Big Data Analytics supported by artificial intelligence technology increase significantly when these technologies are applied to the processing of large data sets extracted from the Internet and carried out on the most powerful quantum computers?
What are the analytical capabilities of processing large data sets obtained from the Internet and realized by the most powerful quantum computers?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
The convergence of Big Data Analytics and AI already offers transformative capabilities in analyzing and deriving insights from massive datasets. When you introduce quantum computing into this mix, the potential computational power and speed increase exponentially. Quantum computers, by their very nature, can process vast amounts of data simultaneously, making them ideally suited for complex tasks such as optimization problems, simulations, and certain types of data analysis that classical computers struggle with.
In the context of scientific research, the combination of these technologies can indeed significantly enhance the efficiency and depth of analysis. For instance:
Speed and Efficiency: Quantum computers can potentially solve problems in seconds that would take classical computers millennia. This speed can drastically reduce the time required for data processing and analysis, especially in fields like genomics, climate modeling, and financial modeling.
Complex Simulations: Quantum computers can simulate complex systems more efficiently. This capability can be invaluable in fields like drug discovery, where simulating molecular interactions is crucial.
Optimization Problems: Many research tasks involve finding the best solution among a vast number of possibilities. Quantum computers, combined with AI algorithms, can optimize these solutions more effectively.
Deep Learning: Training deep learning models, especially on vast datasets, is computationally intensive. Quantum-enhanced machine learning can potentially train these models faster and more accurately.
Data Security: Quantum computers also bring advancements in cryptography, ensuring that the massive datasets being analyzed remain secure.
In conclusion, while the practical realization of powerful quantum computers is still an ongoing endeavor, their potential integration with Big Data Analytics and AI promises to usher in a new era of scientific research and analysis, marked by unprecedented speed, accuracy, and depth.
  • asked a question related to Artificial Neural Networks
Question
13 answers
How should artificial intelligence technologies be implemented in education, so as not to deprive students of development and critical thinking in this way, so as to continue to develop critical thinking in students in the new realities of the technological revolution, to develop education with the support of modern technology?
The development of artificial intelligence, like any new technology, is associated with various applications of this technology in companies, enterprises operating in various sectors of the economy, and financial and public institutions. These applications generate an increase in the efficiency of the implementation of various processes, including an increase in human productivity. On the other hand, artificial intelligence technologies are also finding negative applications that generate certain risks such as the rise of disinformation in online social media. The increasing number of applications based on artificial intelligence technology available on the Internet are also being used as technical teaching aids in the education process implemented in schools and universities. On the other hand, these applications are also used by pupils and students, who use these tools as a means of facilitating homework, the development of credit papers, the completion of project work, various studies, and so on. Thus, on the one hand, the positive aspects of the applications of artificial intelligence technologies in education are recognized as well. However, on the other hand, serious risks are also recognized for students, for people who, increasingly using various applications based on artificial intelligence, including generative artificial intelligence in facilitating the completion of certain various works, may cause a reduction in the scope of students' use of critical thinking. The potential dangers of depriving students of development and critical thinking are considered. The development of artificial intelligence technology is currently progressing rapidly. 
Various applications based on constantly improved generative artificial intelligence are being developed: machine learning solutions are being created, and artificial intelligence is being taught to carry out various activities previously performed by humans. In deep learning processes, generative artificial intelligence equipped with artificial neural networks is taught to carry out complex, multifaceted processes and activities on the basis of large data sets collected in database systems and processed using Big Data Analytics technology. Since current information systems, equipped with computers of high computing power and with artificial intelligence technologies, process large data sets many times faster and more efficiently than the human mind, some research centers are already working on highly advanced generative artificial intelligence that would realize a kind of artificial thought process, much faster and more efficiently than the human brain. However, even if artificial consciousness technology imitating the functioning of human consciousness could someday be created, humans should not be deprived of critical thinking. Above all, students in schools should not be deprived of critical thinking in view of the growing scale of AI-based applications in education. The aim should be for the AI-based applications available on the Internet and used in the education process to support that process without depriving students of critical thinking. The question, however, is how this should be done.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How should artificial intelligence technologies be implemented in education, so as not to deprive students of development and critical thinking in this way, so as to continue to develop critical thinking in students in the new realities of the technological revolution, to develop education with the support of modern technology?
How should artificial intelligence technologies be implemented in education to continue to develop critical thinking in students?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
While AI has the potential to enhance learning experiences, there is a concern that it may hinder the development of critical thinking skills in students. Therefore, it is crucial to carefully implement AI technologies in education to ensure they continue to foster critical thinking.
One way AI can be integrated into education without compromising critical thinking is by using it as a tool for personalized learning. AI algorithms can analyze students' strengths and weaknesses, tailoring educational content and activities accordingly. This approach encourages students to think critically about their own learning process and identify areas where they need improvement. By providing individualized guidance, AI technology promotes self-reflection and metacognition – key components of critical thinking.
Moreover, AI can facilitate collaborative learning experiences that promote critical thinking skills. Virtual classrooms equipped with AI-powered chatbots or virtual tutors can encourage students to engage in discussions and debates with their peers. These interactions require students to analyze different perspectives, evaluate evidence, and construct well-reasoned arguments – all essential elements of critical thinking.
Additionally, incorporating ethical considerations into the design of AI technologies used in education is crucial for fostering critical thinking skills. Students should be encouraged to question the biases embedded within these systems and critically evaluate the information provided by them. By promoting awareness of ethical issues surrounding AI technologies, educators can empower students to think critically about how these tools are shaping their educational experiences.
However, it is important not to rely solely on AI technologies for teaching core subjects such as mathematics or language arts. Critical thinking involves actively engaging with complex problems and developing analytical reasoning skills – tasks that cannot be fully replaced by machines. Teachers should continue playing a central role in guiding students' development of critical thinking abilities through open-ended discussions, challenging assignments, and hands-on activities.
In conclusion, implementing artificial intelligence technologies in education must be done thoughtfully so as not to hinder the development of critical thinking skills in students. By using AI as a tool for personalized learning, promoting collaborative experiences, incorporating ethical considerations, and maintaining the central role of teachers, we can harness the potential of AI while ensuring that critical thinking remains at the forefront of education.
Reference:
Papamitsiou, Z., & Economides, A. A. (2014). Learning analytics and educational data mining in practice: A systematic literature review of empirical evidence. Educational Technology & Society, 17(4), 49-64.
  • asked a question related to Artificial Neural Networks
Question
4 answers
I know that a lot of artificial neural networks have appeared now. Maybe soon we will not read articles and do our scientific work ourselves, and AI will help us. Maybe it is happening now? What is your experience working with AI and neural networks in science?
Relevant answer
Answer
Artificial intelligence could definitely help in neuroscience, given its fast development nowadays. During the coronavirus period, AI helped in fast genome sequencing, and consequently vaccines were developed very quickly. Similarly, in neuroscience there is a need for real-time analysis during the treatment of patients, and for identifying new proteins and genes associated with particular functions and diseases.
  • asked a question related to Artificial Neural Networks
Question
4 answers
What are the possibilities for the applications of Big Data Analytics backed by artificial intelligence technology in terms of improving research techniques, in terms of increasing the efficiency of the research and analytical processes used so far, in terms of improving the scientific research conducted?
The progressive digitization of data and archived documents, the digitization of data transfer processes, and the Internetization of communications, economic processes, and research and analytical processes are becoming typical features of today's developed economies. Currently, another technological revolution is taking place, described as the fourth and, in some aspects, already the fifth. Technologies categorized as Industry 4.0/5.0 are developing particularly rapidly and finding ever more applications. These technologies, which support research and analytical processes carried out in various institutions and business entities, include Big Data Analytics and artificial intelligence. The computational capabilities of microprocessors, which process data ever faster, are successively increasing, and ever larger sets of data and information are being processed. Databases of data and information extracted from the Internet and processed in the course of specific research and analysis processes are being created. In connection with this, the possibilities for applying Big Data Analytics supported by artificial intelligence technology to improve research techniques, increase the efficiency of existing research and analytical processes, and improve the scientific research being conducted are also growing rapidly.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What are the possibilities of applications of Big Data Analytics supported by artificial intelligence technology in terms of improving research techniques, in terms of increasing the efficiency of the research and analytical processes used so far, in terms of improving the scientific research conducted?
What are the possibilities of applications of Big Data Analytics backed by artificial intelligence technology in terms of improving research techniques?
What do you think on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
The above text is entirely my own work written by me on the basis of my research.
Copyright by Dariusz Prokopowicz
On my profile of the Research Gate portal you can find several publications on Big Data issues. I invite you to scientific cooperation in this problematic area.
Dariusz Prokopowicz
Relevant answer
Answer
In today's world, AI is the hot topic of the modern digital era, but that does not mean AI will be able to replace human intelligence.
  • asked a question related to Artificial Neural Networks
Question
6 answers
aa
Relevant answer
Answer
Deep learning and artificial neural networks (ANNs) are related concepts, but they are not exactly the same thing. Let me explain the difference between them:
Artificial Neural Networks (ANNs):
Artificial Neural Networks are a computational model inspired by the structure and function of biological neural networks in the human brain. ANNs consist of interconnected nodes called artificial neurons or perceptrons. These neurons are organized in layers, typically an input layer, one or more hidden layers, and an output layer. Each neuron takes inputs, performs a computation on them, and produces an output that is passed to the next layer. The connections between neurons are associated with weights that determine the strength of the connection. ANNs are designed to learn and generalize from examples by adjusting the weights through a process called training. The training is typically done using techniques like backpropagation and gradient descent.
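The mechanics described above (weighted connections, layered computation, an activation at each neuron) can be sketched in a few lines of NumPy; the layer sizes and random weights here are purely illustrative, not from any trained model:

```python
import numpy as np

def sigmoid(z):
    # squashing activation applied by each artificial neuron
    return 1.0 / (1.0 + np.exp(-z))

# toy network: 3 inputs -> 4 hidden neurons -> 1 output (illustrative sizes)
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # input-to-hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden-to-output weights

x = np.array([0.5, -1.0, 2.0])                  # one input example
hidden = sigmoid(x @ W1 + b1)                   # hidden-layer activations
output = sigmoid(hidden @ W2 + b2)              # network output in (0, 1)
print(output.shape)                             # (1,)
```

Training, as the answer notes, would then adjust W1, b1, W2 and b2 via backpropagation and gradient descent.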
Deep Learning:
Deep learning is a subfield of machine learning that focuses on algorithms and models inspired by the structure and function of the human brain, particularly artificial neural networks with multiple hidden layers. The term "deep" in deep learning refers to the presence of multiple layers in the neural network architecture. Deep learning models are characterized by their ability to automatically learn hierarchical representations of data by sequentially processing information through multiple layers. These models have shown exceptional performance in various tasks such as image and speech recognition, natural language processing, and many others. Deep learning models often require a large amount of labeled data for training and rely on powerful computational resources, such as graphics processing units (GPUs), due to their complexity.
  • asked a question related to Artificial Neural Networks
Question
16 answers
If neural networks adopt the principle of deep learning, why haven't they been able to create their own language for communication today?
Relevant answer
Answer
Short answer: Language is a tool for conceptual beings that form, process (i.e., think) and communicate information conceptually. Artificial intelligence is incapable (as of today) of forming or processing information conceptually; it can mimic making use of language, for sure, like a very convincing parrot. Don't confuse parroting conceptual tools with thinking conceptually.
  • asked a question related to Artificial Neural Networks
Question
5 answers
What areas of application of artificial neural networks in information technology are now the most promising (except for pattern recognition, chatbots)? Probably, AI applications in Big Data, Data Science, various kinds of forecasting (for example, time series)? I consider these areas (Big Data, Data Science) important, because after the inevitable obsolescence of modern artificial neural networks, these applications will still remain relevant, only new technologies will work in them. Big data in the world will not disappear anywhere and will always need to be processed - with any available technology.
Forbes.ru has the following article: Applications of Artificial Intelligence Across Various Industries (https://www.forbes.com/sites/qai/2023/01/06/applications-of-artificial-intelligence/).
Relevant answer
Answer
Health Care- Diagnosis of illnesses and development of new drugs.
  • asked a question related to Artificial Neural Networks
Question
4 answers
Could you give me some advice, please? Is there any method to determine the number of hidden layers and hidden nodes required to produce good accuracy in artificial neural networks, especially in deep learning? I will be glad if you could answer or give me a reference link about this. Thank you in advance.
Relevant answer
Answer
Based on references [1][2], you can choose the Number of Hidden Layers as follows:
  1. If the data is linearly separable, then you may not need any hidden layers
  2. If the data is less complex + low dimensions/features -> 1 to 2 hidden layers
  3. If data has large dimensions/features, -> 3 to 5 hidden layers
Meanwhile, there are many rule-of-thumb methods for determining the number of nodes in hidden layers. According to Jeff Heaton [3], the rules of thumb are:
  1. The number of hidden neurons should be between the size of the input layer and the size of the output layer
  2. The number of hidden neurons should be 2/3 the size of the input layer, plus the size of the output layer
  3. The number of hidden neurons should be less than twice the size of the input layer
Having said so, different tasks/datasets require different designs. You can follow the above guidelines and then fine-tune based on your experiments. Meanwhile, you can analyze the complexity of your multi-layer perceptron as shown in [4].
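The three rules of thumb above can be turned into a small helper that brackets the search space before fine-tuning (a sketch only; the function name and the way the bounds are combined are my own choices, not Heaton's):

```python
def hidden_neuron_bounds(n_inputs, n_outputs):
    # Heaton's rules of thumb for the number of hidden neurons:
    # 1) between the size of the input layer and the size of the output layer
    # 2) about 2/3 the input-layer size, plus the output-layer size
    # 3) less than twice the input-layer size
    lower = min(n_inputs, n_outputs)
    upper = min(max(n_inputs, n_outputs), 2 * n_inputs - 1)
    suggestion = round(2 * n_inputs / 3) + n_outputs
    return lower, upper, suggestion

# e.g. 10 input features, 1 regression output
print(hidden_neuron_bounds(10, 1))  # (1, 10, 8)
```

The returned triple gives a lower bound, an upper bound, and a starting point for experiments, which you would then refine on a validation set.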
Good luck trying :)
References:
[3] Introduction to neural networks with Java
  • asked a question related to Artificial Neural Networks
Question
3 answers
Artificial neural networks are considered one of the most important and most advanced sciences at the present time, and they have many applications in various sciences. They also have their own specialized experts.
Relevant answer
Answer
Artificial neural networks contain artificial neurons, called units, arranged in a series of layers that together constitute the whole network. A layer can have a dozen units or millions of units, depending on how complex the network must be to learn the hidden patterns in the dataset. Commonly, an artificial neural network has an input layer, an output layer, and hidden layers. The input layer receives data from the outside world which the network needs to analyze or learn about. This data then passes through one or more hidden layers that transform the input into data that is valuable for the output layer. Finally, the output layer provides an output in the form of the network's response to the input data provided.
Regards,
Shafagat
  • asked a question related to Artificial Neural Networks
Question
5 answers
I noticed that in some very bad models of neural networks, the value of R² (coefficient of determination) can be negative. That is, the model is so bad that the mean of the data is better than the model.
In linear regression models, the multiple correlation coefficient (R) can be calculated using the root of R². However, this is not possible for a model of neural networks that presents a negative R². In that case, is R mathematically undefined?
I tried calculating the correlation between y and y_pred (Pearson), but it is mathematically undefined (division by zero). I am attaching the values.
Obs.: The question is about artificial neural networks.
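A minimal NumPy sketch of the situation described in the question, with made-up values: predictions that are worse than the mean of the data give a negative R², and a constant prediction vector has zero variance, which is exactly the division by zero that makes Pearson's r undefined:

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.full(4, 10.0)            # constant, badly off predictions

# R^2 = 1 - SS_res / SS_tot; predictions worse than the mean make it negative
ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(r2)                            # -45.0

# Pearson's r divides by the standard deviations of both series; a constant
# y_pred has zero standard deviation, so r is mathematically undefined here
print(np.std(y_pred))                # 0.0
```

So yes: a negative R² simply means SS_res exceeds SS_tot, and in that regime R² is no longer the square of any correlation coefficient, so no real R can be recovered from it.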
Relevant answer
Answer
Raid, apologies here's the attachment. David Booth
  • asked a question related to Artificial Neural Networks
Question
6 answers
Hi. I'm generating the data required for training an artificial neural network (ANN) using a reliable and validated self-developed numerical code. Is this the right approach?
Or should the necessary data be produced only with experimental tests?
Best Regards
Saeed
Relevant answer
Answer
Will data from a factorial design used to train an ANN catch higher order interactions more accurately?@tapanbagchi
  • asked a question related to Artificial Neural Networks
Question
1 answer
Optimization and prediction of the kWh/m3 ratio in a pumping station by mathematical modeling: multiple linear regression (MLR) and artificial neural networks (ANN).
  • asked a question related to Artificial Neural Networks
Question
3 answers
Currently, some researchers rely on AI programs to enhance their problem solving and obtain favorable results, especially in multi-parameter preparation research. Artificial neural networks, central composite design, and response surface methodology are helpful examples that can provide researchers with strong options and choices for each criterion on the way to optimal outcomes. So, we strongly recommend using them to investigate how some conventional extraction methods can be made worthwhile.
Relevant answer
Answer
Dear university staff!
I inform you that my lecture on electronic medicine on the topic "The use of automated system-cognitive analysis for the classification of human organ tumors" can be downloaded from the site: https://www.patreon.com/user?u=87599532 The lecture, with sound, is in English. You can download it and listen to it at your convenience.
Sincerely,
Vladimir Ryabtsev, Doctor of Technical Science, Professor Information Technologies.
  • asked a question related to Artificial Neural Networks
Question
6 answers
How can I calculate an ANOVA table for a quadratic model in Python?
I want to calculate a table like the one I uploaded in the image.
Relevant answer
Answer
To calculate an ANOVA (Analysis of Variance) table for a quadratic model in Python, you can use the statsmodels library. Here is an example of how you can do this:
#################################
import statsmodels.api as sm
import numpy as np  # needed because the formula below calls np.power
# Fit the quadratic model using OLS (Ordinary Least Squares)
model = sm.OLS.from_formula('y ~ x + np.power(x, 2)', data=df)
results = model.fit()
# Print the ANOVA table
print(sm.stats.anova_lm(results, typ=2))
#################################
In this example, df is a Pandas DataFrame that contains the variables y and x. The formula 'y ~ x + np.power(x, 2)' specifies that y is the dependent variable and x and x^2 are the independent variables. The from_formula() method is used to fit the model using OLS. The fit() method is then used to estimate the model parameters.
The anova_lm() function is used to calculate the ANOVA table for the model. The typ parameter specifies the type of ANOVA table to compute, with typ=2 corresponding to a Type II ANOVA table.
This code will print the ANOVA table to the console, with columns for the source of variance, degrees of freedom, sum of squares, mean squares, and F-statistic. You can also access the individual elements of the ANOVA table using the results object, for example:
#################################
# Print the F-statistic and p-value
print(results.fvalue)
print(results.f_pvalue)
#################################
I hope that helps
  • asked a question related to Artificial Neural Networks
Question
2 answers
How to write the python script for Strength of Double Skin steel concrete composite wall using Artificial Neural Network. I have attached the figure for your reference.
Relevant answer
Answer
import numpy as np
from sklearn.neural_network import MLPRegressor
# Load the data from a file or other source
X = ... # input features (e.g., thickness of steel layer, concrete strength, etc.)
y = ... # target strength values
# Split the data into training and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Standardize the data (optional)
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# Define the model
model = MLPRegressor(hidden_layer_sizes=(100, 50, 25), max_iter=1000)
# Train the model
model.fit(X_train, y_train)
# Evaluate the model on the test set
score = model.score(X_test, y_test)
# Print the R^2 score
print(f'Test R^2 score: {score:.3f}')
  • asked a question related to Artificial Neural Networks
Question
21 answers
Hi everyone
I used the Neural Network in MATLAB with input and target data. How can I create an equation that correctly estimates the target?
(Based on the ANN created, weights, biases, and related inputs)
Is there a method, tool, or idea to solve this issue?
Relevant answer
Answer
To create an equation based on an ANN, you will need to specify the input variables and the desired output, and then design and train the ANN to learn the relationship between the inputs and the output. This typically involves the following steps:
  1. Preprocess the data: Clean and normalize the input data to prepare it for use in the ANN.
  2. Design the network architecture: Determine the number and type of layers to use in the ANN, as well as the number of neurons in each layer.
  3. Train the network: Use an optimization algorithm (such as backpropagation) to adjust the weights and biases of the neurons in the network to minimize the error between the predicted output and the desired output.
  4. Test the network: Evaluate the performance of the trained ANN on a separate dataset to assess its accuracy.
Once the ANN has been trained and tested, you can use it to make predictions for new data by feeding the input data through the network and using the output from the final layer as the predicted output.
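As a concrete sketch of how the trained weights become an "equation" (using scikit-learn rather than MATLAB, with illustrative data, but the algebra is the same): for a one-hidden-layer ReLU network, the prediction is y_hat = relu(x·W1 + b1)·W2 + b2, which can be evaluated by hand from the extracted weights and checked against the model's own predictions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# illustrative data: targets follow y = 2*x0 + 3*x1
X = np.random.default_rng(0).uniform(size=(200, 2))
y = 2 * X[:, 0] + 3 * X[:, 1]

model = MLPRegressor(hidden_layer_sizes=(8,), activation='relu',
                     max_iter=5000, random_state=0).fit(X, y)

# the network's "equation": y_hat = relu(x @ W1 + b1) @ W2 + b2
W1, W2 = model.coefs_
b1, b2 = model.intercepts_
manual = np.maximum(X @ W1 + b1, 0) @ W2 + b2

# hand-evaluated equation matches the library's predictions
assert np.allclose(manual.ravel(), model.predict(X))
```

In MATLAB the same idea applies with the weight matrices and bias vectors of the trained `net` object, composed layer by layer with each layer's activation function.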
  • asked a question related to Artificial Neural Networks
Question
3 answers
How to write the python script for Strength of Concrete using Artificial Neural Network in matlab?
  • asked a question related to Artificial Neural Networks
Question
3 answers
In this digital world, with increasing digital devices and data, security is a significant concern. And most cases DOS/DDoS/EDoS attacks are performed by the botnet. I want to do research to detect and prevent botnets. Can you share an efficient research title to detect and prevent botnets?
Relevant answer
Answer
Dear Md. Alamgir Hossain,
You may want to look over the following sources:
Intelligent Detection of IoT Botnets Using Machine Learning and Deep Learning
Deep Neural Networks for Bot Detection
  • asked a question related to Artificial Neural Networks
Question
4 answers
For time-series forecasting, I'm using an LSTM network. Are there any metrics that could be used to evaluate the forecasting model's generalization throughout the training phase, i.e., whether it is neither overfitting nor underfitting? To check that the network is not overfitted, for instance, we can look at both the training loss and validation loss curves. Can such overfitting or underfitting be detected using any tables?
Relevant answer
Answer
Thank you so much for responding to my question. Yes, I believe the training loss and the validation loss curves provide a good way to visualize the generalization of the network.
Also, I have found many papers using training and validation loss scores, i.e., the final MSE values that the algorithm calculates while training the model.
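The visual check on the two loss curves can also be automated. A small helper (a sketch; the function name, patience rule, and example loss values are my own) that flags the epoch where validation loss starts rising while training loss keeps falling, which is the classic overfitting signature:

```python
def overfit_epoch(train_loss, val_loss, patience=3):
    """Return the first epoch after which validation loss rose for
    `patience` consecutive epochs while training loss kept falling,
    or None if no such divergence is found."""
    for i in range(len(val_loss) - patience):
        val_rising = all(val_loss[j + 1] > val_loss[j]
                         for j in range(i, i + patience))
        train_falling = train_loss[i + patience] < train_loss[i]
        if val_rising and train_falling:
            return i
    return None

# training loss keeps falling; validation loss turns around at epoch 3
train = [1.0, 0.7, 0.5, 0.4, 0.3, 0.25, 0.2]
val   = [1.1, 0.8, 0.6, 0.55, 0.6, 0.7, 0.8]
print(overfit_epoch(train, val))  # 3
```

With Keras/LSTM training, `train_loss` and `val_loss` would come from `history.history['loss']` and `history.history['val_loss']`.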
  • asked a question related to Artificial Neural Networks
Question
5 answers
How might artificial neural networks influence the development of microbiology? It will be interesting to gain your thoughts on potential future insights and technologies.
Relevant answer
Answer
Kindly go through this review article
You can also find applications of AI in detection of antimicrobial resistance.
  • asked a question related to Artificial Neural Networks
Question
5 answers
I have computed an artificial neural network analysis using the Multilayer Perceptron method in SPSS. From the output of this model, I can see the weights evaluated at the hidden layers. However, I do not know how to write down the actual equation using these weights, like the equation obtained from a linear regression model using the parameter estimates. Please help me with this.
Relevant answer
Answer
As Professor David Eugene Booth has pointed out, you won't get an equation out of SPSS Multilayer Perceptron. Imagine a model which has a lot (potentially thousands) of statements like "if x1 then y1" and "if x1 and x2 then y2", and so on. They're "black boxes".
Recent advancements point to "Explainable Neural Networks", which are not yet fully explainable.
These articles may help.
  • asked a question related to Artificial Neural Networks
Question
3 answers
Hi all.
As part of my research work, I have segmented objects in an image for classification. After segmentation, the objects have black backgrounds, and I used those images to train and test the proposed CNN model.
I want to know how the CNN processes these black surroundings in the image classification task.
Thank you
Relevant answer
Answer
As a first guess I would agree with Aparna Sathya Murthy given that you retain the original image size. If you segment and extract the contents from the image in a size where the dominating elements are the relevant contents for the CNN, then the noise will be less and maybe labeling will not be needed(emphasis in maybe).
  • asked a question related to Artificial Neural Networks
Question
9 answers
When I try to perform the following calculation, Python gives the wrong answer.
2*(1.1-0.2)/(2-0.2)-1
I have attached a photo of the answer.
Relevant answer
Answer
Mathematically, the answer to the equation is zero; the answer Python spat out is pretty much as close as you can get to the representation of zero with a typical computer.
This is a classic floating point problem: https://en.wikipedia.org/wiki/Floating-point_error_mitigation
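The point above can be verified directly; a minimal sketch showing the residual error and the usual remedy (comparing with a tolerance rather than `==`):

```python
import math

# Algebraically the expression is zero, but 1.1 and 0.2 have no exact
# binary representation, so a tiny rounding residue survives.
result = 2 * (1.1 - 0.2) / (2 - 0.2) - 1
print(result)   # on the order of 1e-16, not exactly 0

# Never compare floats with ==; use a tolerance instead.
print(result == 0)                              # False
print(math.isclose(result, 0.0, abs_tol=1e-9))  # True
```

The same behaviour occurs in any language using IEEE-754 doubles; rounding the final result (e.g. `round(result, 9)`) is another common fix for display purposes.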
  • asked a question related to Artificial Neural Networks
Question
4 answers
Hi all,
I'm working on hand gesture recognition, I have worked on LS-SVM, and now I'm working on ANN-based hand gesture recognition, just need to find a potential research gap, and the road map. Any suggestions?
Relevant answer
Answer
Dear Hassam Iqbal,
A straightforward way to find a research gap, research question, or challenge for the problem at hand (in your case, gesture recognition) is to start a literature review. In the review you will read about previous work on gesture recognition and can download various research papers on techniques and algorithms for the problem. Over a review process of 3 to 6 months, or up to a year (the first year of your PhD), you will come to understand the research gap or research question. You will then have to adopt a methodology to solve it.
Best Luck with your research proposal & literature reviews.
  • asked a question related to Artificial Neural Networks
Question
7 answers
I'm using GRU to forecast the next day's temperature with a minute resolution; the model will take today's minutes as input and produce tomorrow's temperature outputs; the code I'm using is below. My issue is whether I should use 1440 units with the dense layer since I forecast 1440 time steps or simply one unit because I predict only one variable.
model.add(layers.GRU(200,activation='tanh', recurrent_activation='sigmoid', input_shape=(1440,1)))
model.add(layers.Dropout(rate=0.1))
model.add(layers.Dense(1440))
model.compile(loss='mse',optimizer='adam')
Relevant answer
Answer
Dear Ammar Atif ,
Since your target is tomorrow's full minute-resolution profile (1440 values), the final Dense layer should have 1440 units, one per predicted time step; a single unit would only predict one value and its shape would not match the training targets. Beyond that there is no hard and fast rule, so you can also try both models and keep whichever tests better.
You should also read up on the concept of pruning layers.
Here are a few links which might be helpful for enhancing your knowledge and for research purposes.
Thanks, I hope this clarifies your doubts.
  • asked a question related to Artificial Neural Networks
Question
4 answers
For my undergraduate thesis, I want to work on, "The application of Artificial Neural Network -(ANN) in the development of a SCM distribution network". Want some opinion about the trend/future of this topic & some suggestions as a beginner.
Relevant answer
Answer
Mahie Islam, it is quite fitting to deploy the AI family on supply chain problems, which have mostly been addressed using classical black-box modeling of supply chain optimization at the strategic decision level.
The supply chain is an area highly exposed to the advent of information technology and a revolution mainly characterized by the proliferation of data, economic globalization, and dynamic customer expectations. Powerful predictive methods have become mandatory for data-driven, rather than model-driven, decision making in order to expose underlying uncertainties.
While it is possible to obtain a surrogate model through simulation, this approach is still expensive and does not give a clear-cut basis for decision making. Machine learning methods, however, help to obtain models that are interpretable, accurate, or both, depending on which methods (e.g., support vector machines) are deployed.
  • asked a question related to Artificial Neural Networks
Question
3 answers
Can anyone please share some good articles related to these two topics?
  • Machine learning for Agri-Food 4.0 development,
  • Artificial neural networks for Agri-Food 4.0 analysis,
Relevant answer
Answer
Machine Learning in Agriculture: Applications and Techniques | by Sciforce | Sciforce | Medium
  • asked a question related to Artificial Neural Networks
Question
3 answers
Dear all,
Why is forward selection search so popular and widely used in feature selection based on mutual information, such as MRMR, JMI, CMIM, and JMIM (see )? Why are other search approaches, such as beam search, not used? If there is a reason for that, kindly reply to me.
Relevant answer
Answer
There are three main types of feature selection: filtering methods, wrapper methods, and embedded methods. Filtering methods use criterion-based metrics that are independent of the modeling process, such as mutual information, correlation, or the chi-square test, to check each feature (or a selection of features) against the target; other filtering methods include variance thresholding and ANOVA. Wrapper methods use error rates, iteratively training models on subsets of features to select the critical ones; subsets can be chosen by sequential forward selection, sequential backward selection, bidirectional selection, or randomly. Because they train models while selecting features, wrappers are more computationally expensive than filtering methods. There are also heuristic, non-exhaustive search approaches such as branch-and-bound. In some cases filtering methods are applied before wrapper methods. Embedded methods include the use of decision trees or random forests to extract feature importances for deciding which features to keep. Overall, forward, backward, and bidirectional methods are stepwise searches for crucial features. Beam search, in contrast, is a graph-based heuristic optimization method similar to best-first search; it is more often seen in neural network or tree optimization than as a direct feature selection method.
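The sequential forward selection mentioned above is easy to sketch. Here is a minimal, generic version; the scoring function is a toy stand-in for a real criterion such as an mRMR-style mutual-information score, and all names are illustrative:

```python
def forward_selection(features, score, k):
    """Greedy sequential forward selection.

    features: list of candidate feature names
    score: function mapping a list of features to a quality value
           (e.g. a mutual-information criterion such as mRMR)
    k: number of features to select
    """
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        # At each step, add the single feature that best improves the score.
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy scoring function: each feature has a fixed relevance, and one
# redundant pair is penalized (a crude stand-in for relevance minus redundancy).
values = {"a": 3.0, "b": 2.0, "c": 2.5, "d": 0.5}

def toy_score(subset):
    s = sum(values[f] for f in subset)
    if "a" in subset and "c" in subset:   # a and c are "redundant"
        s -= 2.0
    return s

print(forward_selection(list(values), toy_score, 2))  # ['a', 'b']
```

Note that the greedy step never revisits earlier choices, which is exactly why beam search (keeping the best m partial subsets instead of one) can in principle find better subsets at higher cost.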
  • asked a question related to Artificial Neural Networks
Question
3 answers
Can you give me advice, how to solve constrained problems with ANNs? I can name two common scenarios where this would be benefiting both the accuracy and learning performance
  • Predict physical values that are solely non-negative (pressure, concentration, mass)
  • Predict state of system with oblivious limitations - e.g., volumetric mixture of components cannot have total % sum higher than 100 %
So, my question is how I should work in such cases? I prefer MATLAB, but if it is not possible with any of its toolboxes, I'm also open for other recommendations.
Edit: Just to clarify, the question is not about creating and training the ANN itself. I need to know how to implement a linear constraint function on the output, somewhat like in reinforcement learning.
Relevant answer
Answer
Karol Postawa Neural Network Design Workflow:
1. Gather data.
2. Create the network by selecting Create Neural Network Object.
3. Set up the network — Set up Shallow Neural Network Inputs and Outputs.
4. Set up the weights and biases.
5. Train the network — Concepts of Neural Network Training
6. Verify the network.
7. Make use of the network.
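The workflow above does not yet address the constraint part of the question. A common practical trick, independent of the toolbox, is to choose the output-layer activation so that its range matches the constraint: a softplus (or ReLU/exponential) output guarantees non-negative predictions, and a softmax output guarantees positive fractions that sum to exactly 1 (i.e. 100 %). A minimal sketch of both transforms:

```python
import math

def softplus(z):
    # Smooth and strictly positive: suited to pressures, concentrations, masses.
    return [math.log1p(math.exp(v)) for v in z]

def softmax(z):
    # Positive outputs that sum to exactly 1: suited to mixture fractions.
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

raw = [-1.3, 0.4, 2.0]          # unconstrained final-layer outputs

positive = softplus(raw)
fractions = softmax(raw)

print(all(v > 0 for v in positive))       # True
print(abs(sum(fractions) - 1.0) < 1e-12)  # True
```

In MATLAB the same idea applies by picking an appropriate output layer (e.g. a softmax layer) or a custom output layer; alternatively, constraint violations can be penalized as an extra term in the loss function.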
  • asked a question related to Artificial Neural Networks
Question
3 answers
I want datasets of blood and bone cancer. I want to sequence them in Python using artificial neural networks.
  • asked a question related to Artificial Neural Networks
Question
2 answers
What is the main disadvantage of a global optimization algorithm for the Backpropagation Process?
Under what conditions can we still use a local optimization algorithm for the Backpropagation Process?
Relevant answer
Answer
Armin Hajighasem Kashani Non-linear data can be handled and processed simply with a neural network, which is otherwise difficult with perceptrons and sigmoid neurons. In neural networks, the agonizing decision-boundary problem is reduced.
However, the downsides include the loss of neighborhood knowledge, additional parameters to optimize, and the lack of translation invariance.
  • asked a question related to Artificial Neural Networks
Question
3 answers
Is there an index that includes the mixing index and pressure drop for micromixers?
For example, for heat exchangers and heat sinks, there is an index that includes heat transfer performance and hydraulic performance, which is presented below:
η = (Nu/Nub)/(f/fb)^(1/3)
The purpose of these indexes is to create a general criterion to check the devices' overall performance and investigate the parameters' impact.
Is there an index that includes mixing performance and hydraulic performance for micromixers?
Relevant answer
Answer
Dear Rani P Ramachandran,
Thank you for your answer. I think the mixing energy cost (MEC) is the index I was looking for.
  • asked a question related to Artificial Neural Networks
Question
2 answers
Dear Scholars,
I would like to solve a Fluid Mechanic optimization problem that requires the implementation of an optimization algorithm together with Artificial Neural network. I had some questions about Convex optimization algorithm and I would appreciate it if you could give some advice to me.
My question is about the possibility of implementing convex optimization together with an artificial neural network to find a unique solution for a multi-objective optimization problem. The optimization problem I am trying to code is described by the following equations. The objective function used in the optimization problem is defined as:
(equation given as an image in the original post)
Where OF is the objective function, wi are the weights assigned to each cost function, Oi is the ith cost function, defined as the relative difference between the experimental and predicted evaporation metrics for the fuel droplet (denoted by superscripts exp and mdl, respectively), k is the number of cost functions (k = 4, equal to the number of evaporation metrics), and c = [c1, c2, c3] is the mass-fraction vector defining the blend of three components, subject to the following constraints:
(equation given as an image in the original post)
Due to high CPU time required for modeling and calculating the objective functions (OF), an ANN was trained based on some tabulated data from modeling of fuel droplet evaporation and used for calculating the OF through optimization iteration.
In the same manner, the wi values are subject to optimization during the minimization of OF, with the following constraints:
(equation given as an image in the original post)
It is worth mentioning that I have already solved this problem by employing a Genetic Algorithm together with the ANN. The iterative process converged to acceptable solutions, but the algorithm did not return a unique solution.
In that regard, I would like to ask about the possibility of using a convex optimization algorithm together with the ANN to achieve a unique solution to the aforementioned problem. If this is feasible, I would appreciate it if you could mention some relevant publications.
Relevant answer
Answer
Switching your optimization algorithm will probably not give you the unique solution that you are looking for. In general the loss function of a neural network is not convex with respect to the parameters. This means that you will have different local minima or saddle points in your loss function. A convex optimization algorithm will converge to one of these points depending on its starting point. Finding the global minimum is a very hard problem and we don't know how to find it, or how to know if we have found it. This is a well known problem of neural networks. Of course, if you want to get the same solution every time that you run your algorithm you can simply set the initial parameters to a fixed value, so the algorithm always starts at the same place. If you do that, then algorithms like gradient descent will always stop at the same point...
  • asked a question related to Artificial Neural Networks
Question
4 answers
I want to model my adsorption data with an ANN.
Relevant answer
Answer
Shafagat Mahmudova thank you so much
  • asked a question related to Artificial Neural Networks
Question
4 answers
I'm doing a project to detect signs of Alzheimer's related macular degeneration, for which I would require a dataset of healthy and AD retinal images (ideally also in different stages of the disease), any suggestions of pre-existing datasets or how I might go about cobbling one together? Size and quality of the dataset aren't super high priority as it's a small POC.
Relevant answer
Hi, did you find the dataset of retinal imaging for the detection of Alzheimer's?
Could you help me find a database for this subject too?
Thank you in advance
  • asked a question related to Artificial Neural Networks
Question
4 answers
Hi everyone,
I have collected a set of experimental data regarding the strength of a composite material. Besides quantitative data (dimensions and mechanical properties of the materials), linguistic variables, such as the type of composite material, are also included in data as the parameters affecting the material strength. I am trying to use ANN/ANFIS to predict the strength based on the mentioned variables. How is it possible to train a neural system with linguistic inputs included?
Any comments are appreciated.
Regards,
Keyvan
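ANNs consume only numeric inputs, so a categorical ("linguistic") variable such as the composite type is usually one-hot encoded before training: each category becomes a binary indicator input. A minimal sketch, where the category names and numeric features are purely illustrative:

```python
def one_hot(value, categories):
    """Encode a categorical value as a binary indicator vector."""
    return [1.0 if value == c else 0.0 for c in categories]

# Hypothetical composite types; replace with the categories in your data.
composite_types = ["carbon-fibre", "glass-fibre", "aramid"]

# A training sample: [thickness_mm, fibre_fraction] + one-hot(material)
numeric = [2.5, 0.6]
sample = numeric + one_hot("glass-fibre", composite_types)
print(sample)   # [2.5, 0.6, 0.0, 1.0, 0.0]
```

For ANFIS-style systems the same encoded inputs can be used, or each category can be modeled with its own membership functions.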
  • asked a question related to Artificial Neural Networks
Question
4 answers
Hello,
I want to work on an artificial neural network model and implement it for environmental parameters, but I am stuck at the model-testing phase. I want to do this work in the R statistical software. I installed everything (Keras, TensorFlow), but I am not able to interpret the results of the analysis. If anybody knows the procedure for testing the model and interpreting the results, please help. Advice on any other useful software is also welcome.
Relevant answer
Answer
Using ROC and AUC
  • asked a question related to Artificial Neural Networks
Question
7 answers
I have built a regression artificial neural network model. However, whenever I retrain the model I get a wide range of results in R accuracy and in MSE: I have obtained 97% accuracy and also 10%. I used the Neural Fitting app in MATLAB 2021a and the Levenberg–Marquardt algorithm to train the model.
I know the differing results come from different partitioning of the data (training, validation, and test sets), but how can I get reliable results?
My dataset size is 100; I can't make it bigger due to a lack of proper experimental data.
Relevant answer
Answer
  • Data Collection & Data Analysis
  • Preprocessing of Data
  • Normalize Data
  • Data Augmentation
  • Splitting Data: Training Set, Validation Set , Test Set
  • Choosing a model according to dataset (type of data)
  • Training Options
  • Performance Evaluation using Confusion matrix, Accuracy, F1 score, Precision, Recall
  • Re-train model to get better accuracy (you can train model iteratively)
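For a regression fit such as this one, the performance-evaluation step reduces to computing the MSE and the correlation R on the held-out data; a minimal sketch of both metrics (toy numbers, not the asker's data):

```python
import math

def mse(y_true, y_pred):
    # Mean squared error between observed and predicted values.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def pearson_r(y_true, y_pred):
    # Pearson correlation coefficient (the "R" MATLAB reports).
    n = len(y_true)
    mt = sum(y_true) / n
    mp = sum(y_pred) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in y_true))
    sp = math.sqrt(sum((p - mp) ** 2 for p in y_pred))
    return cov / (st * sp)

y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.1, 1.9, 3.2, 3.8]
print(mse(y_true, y_pred))        # small value
print(pearson_r(y_true, y_pred))  # close to 1
```

Averaging these metrics over several retrains (or over cross-validation folds) gives a more reliable picture on a dataset of only 100 samples than any single split.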
  • asked a question related to Artificial Neural Networks
Question
7 answers
A multilayer perceptron (MLP) is a class of feedforward artificial neural networks (ANN), but in which cases are MLP considered a deep learning method?
Relevant answer
Answer
First of all, I agree with Stam Nicolis and Shafagat Mahmudova.
This is a question of terminology. Sometimes I see people refer to deep neural networks as "multi-layered perceptrons", why is this? A perceptron, I was taught, is a single layer classifier (or regressor) with a binary threshold output using a specific way of training the weights (not backpropagation). If the output of the perceptron doesn't match the target output, we add or subtract the input vector to the weights (depending on if the perceptron gave a false positive or a false negative). It's a quite primitive machine learning algorithm. The training procedure doesn't appear to generalize to a multi-layer case (at least not without modification). A deep neural network is trained via backpropagation which uses the chain rule to propagate gradients of the cost function back through all the weights of the network.
So, the question is. Is a "multi-layer perceptron" the same thing as a "deep neural network"? If so, why is this terminology used? It seems to be unnecessarily confusing. In addition, assuming the terminology is somewhat interchangeable, I've only seen the terminology "multi-layer perceptron" when referring to a feed-forward network made up of fully connected layers (no convolutional layers, or recurrent connections). How broad is this terminology? Would one use the term "multi-layered perceptron" when referring to, for example, Inception net? How about for a recurrent network using LSTM modules used in NLP?
Good luck and Happy Model Training !
Samawel JABALLI
  • asked a question related to Artificial Neural Networks
Question
4 answers
Can anyone help me plot an artificial neural network with multiple inputs and outputs using MATLAB?
I need to visualize my multi-objective optimization case; the figure is attached below.
  • asked a question related to Artificial Neural Networks
Question
11 answers
I have been doing research on different issues in the Finance and Accounting discipline for about 5 years. It becomes difficult for me to find some topics which may lead me to do projects, a series of research articles, working papers in the next 5-10 years. There are few journals which have updated research articles in line with the current and future research demand. Therefore, I am looking for such journal(s) that can help me as a guide to design research project that can contribute in the next 5-10 years.
Relevant answer
Answer
You don't need to look for any journals.
All you need to do is narrow your search to topics listed in "special issues" and "call for papers". Top publishers e.g. elsevier, wiley, T&F, Emerald, etc., often advertise call for papers and special issues of journals. The topics in the special issue or call for paper can give you some hint on current and future research trends. I think this is the standard practice in academia.
I hope this advice helps.
  • asked a question related to Artificial Neural Networks
Question
4 answers
I am working on noise-level prediction; all the needed data have been collected, and the permissible and recommended noise exposure limits have been determined. I want to predict the noise level for the next 10-20 years with an artificial neural network model. Please, I need help. Thanks.
Relevant answer
Answer
DEEP LEARNING-BASED CANCER CLASSIFICATION FOR MICROARRAY DATA: A SYSTEMATIC REVIEW
  • asked a question related to Artificial Neural Networks
Question
4 answers
In his name is the judge
Hi
In order to design a controller for my damper (which is a TLCGD), I want to use a fuzzy system.
So I have to optimize the rules for the fuzzy controller. I want to know which is better for optimizing the rules of fuzzy systems: a genetic algorithm or an artificial neural network?
Wish you best
Take refuge in the right.
Relevant answer
Answer
I strongly recommend the usage of a floating fuzzy control algorithm since it allows you to change the range of your membership function in real-time so you can adapt your controller at each time step.
Regards.
  • asked a question related to Artificial Neural Networks
Question
13 answers
I am searching for algorithms for feature extraction from images which I want to classify using machine learning. I have heard only about SIFT; I have images of buildings and flowers to classify. Other than SIFT, what are some good algorithms?
Relevant answer
Answer
It depends on the features you are trying to extract from the image. Another feature extraction technique you can use is the Histogram of Oriented Gradients (HOG), which counts occurrences of gradient orientations in localized portions of the image. It has proven to give good recognition accuracy with machine learning algorithms.
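The core idea of HOG can be illustrated in a few lines: estimate gradients with central differences and accumulate their orientations, weighted by magnitude, into a histogram. This is only a bare-bones sketch; the full descriptor adds cells, block normalization and interpolation:

```python
import math

def orientation_histogram(patch, bins=8):
    """Histogram of gradient orientations over a small grayscale patch.

    A minimal illustration of the idea behind HOG; not the full descriptor.
    """
    h = [0.0] * bins
    rows, cols = len(patch), len(patch[0])
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]   # central differences
            gy = patch[y + 1][x] - patch[y - 1][x]
            angle = math.atan2(gy, gx) % (2 * math.pi)
            mag = math.hypot(gx, gy)
            # Weight each pixel's orientation bin by its gradient magnitude.
            h[int(angle / (2 * math.pi) * bins) % bins] += mag
    return h

# A patch whose intensity increases left to right: all gradients point along +x.
patch = [[c * 10 for c in range(5)] for _ in range(5)]
hist = orientation_histogram(patch)
print(hist)   # all the mass falls in the bin containing angle 0
```

In practice one would use a library implementation (e.g. the HOG routines shipped with common computer-vision packages) rather than hand-rolling this.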
  • asked a question related to Artificial Neural Networks
Question
3 answers
Spectroscopy is said to be easier and cheaper for soil chemical property analysis; how well does it perform in mineralogical studies? Also, how well do the dataset calibration and validation tests yield relevant results through machine learning and artificial neural networks in this field?
I basically come from a non-programming background; I know moderate use of R-Studio for PLSR and basic training and validation set preparation.
  • asked a question related to Artificial Neural Networks
Question
5 answers
Codes for artificial Neural networks
Relevant answer
Answer
search it in github. for example: https://github.com/search?q=anfis&type=Repositories
  • asked a question related to Artificial Neural Networks
Question
3 answers
Does any one have R code for Statistical downscaling of GCMs using Artificial Neural network? I need this R code for my studying
Relevant answer
Answer
  • asked a question related to Artificial Neural Networks
Question
2 answers
Any short introductory document from image domain please.
Relevant answer
Answer
In general, the linear feature is easier to distinguish than the nonlinear feature.
  • asked a question related to Artificial Neural Networks
Question
6 answers
Good day,
My name is Philips Sanni. I am currently finishing my MSc degree in Software Engineering and am searching for a university where I can study for a Ph.D. in a related field, most preferably in the area of artificial neural networks.
If you are a professor in need of a doctoral student, kindly send details of your research and how I can apply to your university.
Relevant answer
Answer
Service Engineering
Digital Ecosystems
Semantic Web / Linked Data
  • asked a question related to Artificial Neural Networks
Question
5 answers
Visualization of approximation function learned through ANN for a regression problem.
The ANN has 5 hidden layers with 20 neurons in each layer.
Relevant answer
Answer
Hi Darko, I want to use those equations in my modeling work. If you know any method, please share.
Thanks,
  • asked a question related to Artificial Neural Networks
Question
6 answers
I have an ongoing project utilizing ANN, I want to know how to measure the accuracy in terms of percentage. Thank you
Relevant answer
To check the accuracy of an artificial neural network model in MATLAB, you can inspect the regression (R) value, the MSE, and the error histogram.
A high R, a low MSE, and fewer errors indicate a good network.
Also, check this video:
  • asked a question related to Artificial Neural Networks
Question
3 answers
Dear collegues,
I am trying to build a neural network. I normalized the data with the minimum and maximum:
normalize <- function(x) {
return ((x - min(x)) / (max(x) - min(x)))
}
maxmindf <- as.data.frame(lapply(mydata, normalize))
and the results:
results <- data.frame(actual = testset$prov, prediction = nn.results$net.result).
So I can see the actual and predicted values only in normalized form.
Could you tell me please,how do I scale the real and predicted data back into the "unscaled" range?
P.s. my attempt,
minvec <- sapply(mydata, min)
maxvec <- sapply(mydata, max)
denormalize <- function(x, minval, maxval) {
  x*(maxval - minval) + minval
}
doesn't work correctly in my case.
Thanks a lot for your answers
Relevant answer
Answer
It actually works (but you have to consider rounding errors):
normalize <- function(x, min, max) (x-min)/(max-min)
denormalize <- function(x, min, max) x*(max-min)+min
x <- rnorm(1000)
r <- range(x)
nx <- normalize(x, r[1], r[2])
dnx <- denormalize(nx, r[1], r[2])
all(x == dnx)
# FALSE ---> rounding errors
all(abs(x - dnx) < 1E-8)
# TRUE ---> identical up to tiny rounding errors
  • asked a question related to Artificial Neural Networks
Question
5 answers
Suppose a smart meter is connected to the mainline of the network. Would it be wise to say that the data captured through this meter can be used for fault location in the sub-lateral branches of the line in the same network?
Attached is the figure. The SM is connected to the mainline from where lateral branches go to loads and other sources etc. Suppose, at t = 4 ; fault 3 occurs while the rest of the sections are healthy, what could be the possible approach to locate fault 3 if we have only one meter connected at main bus (line).
Relevant answer
Answer
Would it be practical to train a neural network (NN) to diagnose faults based on inputs from the sensor (S)?
{ sensor performance inputs } -> NN -> output specific fault F1 or F2 or F3
The training data set would define (discover) unique sensor properties when each specific fault was purposely made.
Because there could be many combinations of faults. For example, for single faults among F1, F2, F3 there are 4 possibilities:
  • no fault
  • F1 fault
  • F2 fault
  • F3 fault
But the number of fault conditions grows rapidly (2^n combinations) for multiple faults with many devices Fi, i = 1, ..., n, so the AI approach may not be practical.
  • asked a question related to Artificial Neural Networks
Question
17 answers
According to which article, study, or reference is 70% of the dataset usually used for training and 30% for testing in the learning process of neural networks?
In other words, who first raised this issue, and in which publication was it explained in detail?
I desperately need a reference for the above.
Relevant answer
Answer
I believe the goal here is to prevent overfitting your model. As suggested by other researchers, this is not a fixed value; in fact, in my case I normally use 20% for testing and 80% for training.
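Whatever ratio is chosen, the split should be random and reproducible; a minimal sketch of a seeded 70/30 split (names and the toy data are illustrative):

```python
import random

def train_test_split(data, test_fraction=0.3, seed=42):
    """Shuffle and split; the 70/30 (or 80/20) ratio is a convention, not a law."""
    rng = random.Random(seed)          # fixed seed -> reproducible split
    idx = list(range(len(data)))
    rng.shuffle(idx)
    n_test = int(len(data) * test_fraction)
    test_idx = set(idx[:n_test])
    train = [d for i, d in enumerate(data) if i not in test_idx]
    test = [d for i, d in enumerate(data) if i in test_idx]
    return train, test

data = list(range(100))
train, test = train_test_split(data)
print(len(train), len(test))   # 70 30
```

With small datasets, k-fold cross-validation is usually preferable to a single fixed split, since every sample then serves in both roles.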
  • asked a question related to Artificial Neural Networks
Question
1 answer
Dear collegues.
I would like to ask anybody who works with neural networks to check my loop for the test sample.
I have 4 sequences (with the goal of predicting prov; monthly data, 22 values in each sequence) and I would like to construct the forecast for each next month using a training sample of 5 months.
That means I need to shift by one month each time, with 5 elements:
train<-1:5, train<-2:6, train<-3:7, ..., train<-17:21. So I need to get 17 columns as the output result.
The loop is:
shift <- 4
number_forecasts <- 1
d <- nrow(maxmindf)
k <- number_forecasts
for (i in 1:(d - shift + 1))
{
The code:
require(quantmod)
require(nnet)
require(caret)
prov=c(25,22,47,70,59,49,29,40,49,2,6,50,84,33,25,67,89,3,4,7,8,2)
temp=c(22,23,23,23,25,29,20,27,22,23,23,23,25,29,20,27,20,30,35,50,52,20)
soil=c(676,589,536,499,429,368,370,387,400,423,676,589,536,499,429,368,370,387,400,423,600,605)
rain=c(7,8,2,8,6,5,4,9,7,8,2,8,6,5,4,9,5,6,9,2,3,4)
df=data.frame(prov,temp,soil,rain)
mydata<-df
attach(mydata)
mi<-mydata
scaleddata<-scale(mi$prov)
normalize <- function(x) {
return ((x - min(x)) / (max(x) - min(x)))
}
maxmindf <- as.data.frame(lapply(mydata, normalize))
go<-maxmindf
forecasts <- NULL
forecasts$prov <- 1:22
forecasts$predictions <- NA
forecasts <- data.frame(forecasts)
# Training and Test Data
trainset <- maxmindf()
testset <- maxmindf()
#Neural Network
library(neuralnet)
nn <- neuralnet(prov~temp+soil+rain, data=trainset, hidden=c(3,2), linear.output=FALSE, threshold=0.01)
nn$result.matrix
plot(nn)
#Test the resulting output
#Test the resulting output
temp_test <- subset(testset, select = c("temp","soil", "rain"))
head(temp_test)
nn.results <- compute(nn, temp_test)
results <- data.frame(actual = testset$prov, prediction = nn.results$net.result)
}
minval<-min(x)
maxval<-max(x)
minvec <- sapply(mydata,min)
maxvec <- sapply(mydata,max)
denormalize <- function(x,minval,maxval) {
x*(maxval-minval) + minval
}
as.data.frame(Map(denormalize,results,minvec,maxvec))
Could you tell me please what I should add in trainset and testset (using the loop), and how to display all predictions with the loop so that the results are produced with a shift of one and a test sample of 5?
I am very grateful for your answers
  • asked a question related to Artificial Neural Networks
Question
13 answers
I'm researching autoencoders and their application to machine learning problems, but I have a fundamental question.
As we all know, there are various types of autoencoders, such as the Stacked Autoencoder, Sparse Autoencoder, Denoising Autoencoder, Adversarial Autoencoder, Convolutional Autoencoder, Semi-Autoencoder, Dual Autoencoder, and Contractive Autoencoder, among others that improve on earlier versions. Autoencoders are also known to be used in Graph Networks (GN), Recommender Systems (RS), Natural Language Processing (NLP), and Computer Vision (CV). This is my main concern:
Because the input and structure of each of these machine learning problems are different, which version of the autoencoder is appropriate for which machine learning problem?
Relevant answer
Answer
Look the link, maybe useful.
Regards,
Shafagat
  • asked a question related to Artificial Neural Networks
Question
1 answer
I used MATLAB functions to train a NARX model. When I use Levenberg–Marquardt as the training algorithm, the results are better than with Bayesian regularization, and scaled conjugate gradient is the worst of the three. I need to know why LM performs better than BR, although BR uses LM optimization to update the network weights and biases.
Relevant answer
Answer
It is possible that the model based on the Levenberg–Marquardt training algorithm is overfitting the training data.
  • asked a question related to Artificial Neural Networks
Question
4 answers
I'm interested in comparing multiple linear regression and artificial neural networks for predicting the production potential of animals using certain predictor variables. However, I have obtained negative R-squared values for certain model architectures. Please explain the reason for negative prediction accuracy (R-squared) values.
Relevant answer
Answer
If R2 for your regression is negative, it means that your regression predicts worse than the simple mean-value predictor (i.e., always predicting y = mean(y)).
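This is easy to verify numerically from the definition R² = 1 − SS_res/SS_tot; a minimal sketch with toy numbers:

```python
def r_squared(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

y_true = [1.0, 2.0, 3.0, 4.0]

good = [1.1, 2.0, 2.9, 4.2]   # tracks the data closely
bad = [4.0, 1.0, 4.5, 0.5]    # worse than just predicting the mean

print(r_squared(y_true, good))   # positive, near 1
print(r_squared(y_true, bad))    # negative
```

So a negative R² is not a computational error: it simply flags a model whose residual error exceeds the variance of the data.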
  • asked a question related to Artificial Neural Networks
Question
4 answers
What is the new development in the LSTM model that also combines layers with LSTM?
Relevant answer
Answer
The LSTM architecture still has the same three gates (input, forget, and output); nothing changes there, and you can add more than one layer with multiple nodes. You can also combine models, as in stacked LSTM, bidirectional LSTM, or CNN-LSTM.
  • asked a question related to Artificial Neural Networks
Question
5 answers
Hi,
I would like to ask where I can get a dataset for an Artificial Neural Network I would like to build for the prediction of biogas yield and methane content. Ideally, I will require at least 500 cases.
I have searched many articles on databases, but so far I have been able to download only 1 dataset from the article (Modeling of biogas production from food, fruits and vegetables wastes using artificial neural network (ANN), Goncalves, et al, 2021)
The input data for my ANN will be: OLR (g VS/l.d), HRT, temperature, pH, and reactor volume; the expected outputs are biogas yield (L/(g VS)) and methane content (%).
Can someone please share some information as to where I can get a dataset for my project or if anyone would like to share the data from their experiments, it will be much appreciated.
Regards
W
Relevant answer
Answer
Ramón Piloto-Rodríguez Yes, I am about to publish a paper on this and am also working towards a neural network dataset for biogas production.
  • asked a question related to Artificial Neural Networks
Question
3 answers
Does anyone have experience with the software NeuralDesigner?
Can anyone recommend this software, or recommend another one?
I have to perform classification and prediction tasks with different types of data (position data, time values, nominal data, ...).
Thanks in advance!
Relevant answer
Answer
The best way is to some good grip on Python [ https://docs.python.org/3/tutorial/ ], and then to have some good practice over PyTorch [https://pytorch.org/tutorials/] OR TensorFlow [https://www.tensorflow.org/tutorials].
  • asked a question related to Artificial Neural Networks
Question
6 answers
I intend to study predicting a stock exchange index, but I am confused between RBF and MLP networks, so I want to know which type is more suitable for this purpose.
Relevant answer
Answer
Usually, stock prices follow a random walk process, which makes them hard to predict with either network type. Good luck!
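The random-walk point can be illustrated numerically: for a random walk, the best one-step-ahead predictor is simply today's value, which leaves little structure for an RBF or MLP network to exploit. A small sketch on simulated data (not real prices):

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated "price" series: a random walk of i.i.d. Gaussian steps.
prices = np.cumsum(rng.normal(size=5000))

targets = prices[1:]                  # tomorrow's value
naive = prices[:-1]                   # naive predictor: "no change"
mean_pred = np.full_like(targets, prices.mean())

mse_naive = np.mean((targets - naive) ** 2)
mse_mean = np.mean((targets - mean_pred) ** 2)
print(mse_naive, mse_mean)  # the naive predictor beats the global mean by a wide margin
```

Any learned model has to beat the naive predictor's MSE (here roughly the step variance) to add value.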
  • asked a question related to Artificial Neural Networks
Question
8 answers
In neural networks, I understand that the activation function at the hidden layer maps the inputs into a specific range like (0, 1) or (-1, 1) and helps solve nonlinear problems. But what does it do in the output layer? Can I get a simple explanation? I'm not a specialist.
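In short, the output-layer activation shapes the raw network outputs into the form the task needs: sigmoid for a probability in (0, 1), softmax for a probability distribution over classes, and usually no activation (identity) for regression. A small NumPy sketch with made-up logits:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())   # subtract max for numerical stability
    return e / e.sum()

logits = np.array([2.0, -1.0, 0.5])   # illustrative raw outputs

# Binary classification: sigmoid squashes one output into (0, 1),
# so it can be read as a probability.
p_binary = sigmoid(logits[0])

# Multi-class classification: softmax turns the raw outputs into a
# probability distribution that sums to 1.
p_classes = softmax(logits)

# Regression: typically identity (no activation), so the output can
# take any real value.
print(p_binary, p_classes, p_classes.sum())
```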
  • asked a question related to Artificial Neural Networks
Question
4 answers
I'm a Finance and Banking student researching stock price prediction using neural networks. I tried to learn the MATLAB software but had difficulty using it because it contains many features, like NAR and NARX, which I didn't understand. So please guide me to the best software among those above, and explain the differences between them.
Relevant answer
Answer
SPSS and R are both good for statistical data analysis, whilst MATLAB does almost everything. Once you have a good grasp of MATLAB, you will be able to write your own code for almost anything. MATLAB is the most scientific option, much better for mathematics, statistics, and all research and application areas of artificial intelligence, engineering, and medicine. In one sentence: MATLAB does it all and is simple and easy to understand; however, it does require a good base of intermediate mathematics. I strongly recommend MATLAB. You can also do very good object-oriented programming in it, or write code related to finance or other social sciences.
  • asked a question related to Artificial Neural Networks
Question
7 answers
Dear all,
I have prepared a dataset and am seeking to apply it to a multi-input multi-output algorithm. I would appreciate it if you could let me know where I can download such algorithms; I have not come across any particular website that offers these types of ANN algorithms.
FYI, I am trying to map an array of inputs onto an array of targets. So, please help me out with my query as stated above.
Thanks
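For orientation, a multi-input multi-output ANN is just a network whose input and output layers have more than one unit each; you normally build one rather than download it. A minimal NumPy forward-pass sketch with made-up sizes (5 inputs, 3 targets) and untrained random weights:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sizes: 5 input features mapped to 3 output targets.
n_in, n_hidden, n_out = 5, 8, 3

# One hidden layer with tanh, linear output layer.
W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))
b2 = np.zeros(n_out)

def forward(X):
    H = np.tanh(X @ W1 + b1)   # hidden activations
    return H @ W2 + b2         # one row of outputs per input row

X = rng.normal(size=(10, n_in))   # 10 samples, 5 inputs each
Y = forward(X)
print(Y.shape)  # (10, 3): multi-output predictions
```

In practice you would train such a network with a library (e.g. Keras's `Dense` layers or scikit-learn's `MLPRegressor`, both of which accept multi-column targets) rather than hand-coding the weights.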
  • asked a question related to Artificial Neural Networks
Question
7 answers
I have implemented a quantized neural network for classification purposes. I wonder whether there is any inverse-quantization approach, like an inverse transform, for prediction data?
I appreciate your answers.
Relevant answer
Answer
This is still an open problem; however, there is growing interest in the topic. I would therefore suggest you focus on it via a further literature review plus your own contribution to inverse quantization problems. Some related literature that may interest you: https://www.dsprelated.com/thread/8399/inverse-quantization
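As a starting point, the standard affine (uniform) quantization scheme does have a direct inverse: dequantization maps the integers back to approximate floats, with error bounded by half the step size. A minimal NumPy sketch (scale and zero-point chosen for illustration):

```python
import numpy as np

def quantize(x, scale, zero_point):
    """Affine quantization to int8: q = round(x / scale) + zero_point."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize(q, scale, zero_point):
    """Inverse quantization: map integers back to approximate floats."""
    return (q.astype(np.float32) - zero_point) * scale

x = np.array([0.12, -0.5, 0.98, -1.0], dtype=np.float32)
scale, zp = 0.01, 0
q = quantize(x, scale, zp)
x_rec = dequantize(q, scale, zp)
print(np.max(np.abs(x - x_rec)))  # bounded by scale / 2 = 0.005
```

Note that the rounding step loses information, so the inverse is only exact up to that quantization error; for class predictions (argmax outputs) there is usually nothing to invert at all.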
  • asked a question related to Artificial Neural Networks
Question
12 answers
The conventional PID controller does not give me satisfactory closed-loop time-response characteristics for my plant. Hence, I am searching for a novel artificial neural network algorithm that can optimize the tuned PID gains. I think any variant of Rprop implemented in MATLAB/Simulink would be a reasonable starting point. I also need help with such an algorithm in MATLAB/Simulink files to fast-track this work. Then I can integrate it into a set-point feedback scheme for simulation.
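For reference, whatever optimizer tunes the gains, the underlying loop being tuned is the discrete PID law. A minimal Python sketch of that loop against a toy first-order plant (gains and plant parameters are illustrative, not the result of any tuning algorithm):

```python
# A minimal discrete PID controller driving a first-order plant
# dy/dt = (u - y) / tau, simulated with Euler steps.
def simulate_pid(kp, ki, kd, setpoint=1.0, tau=0.5, dt=0.01, steps=2000):
    y, integral, prev_err = 0.0, 0.0, 0.0
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt                       # I term accumulator
        deriv = (err - prev_err) / dt              # D term estimate
        u = kp * err + ki * integral + kd * deriv  # PID control law
        prev_err = err
        y += dt * (u - y) / tau                    # Euler plant update
    return y

y_final = simulate_pid(kp=2.0, ki=1.0, kd=0.05)
print(y_final)  # settles close to the setpoint 1.0
```

An ANN- or metaheuristic-based tuner would wrap this simulation in an outer loop, adjusting (kp, ki, kd) to minimize a cost such as the integral of absolute error.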
Relevant answer
Answer
Dear Bhar Kisabo Aliyu,
I suggest applying an Artificial Bee Colony algorithm. You can find more information in this paper:
or here:
Regards,
Tomasz
  • asked a question related to Artificial Neural Networks
Question
7 answers
Hi, I am using a CNN for flood susceptibility mapping and would like to find the feature contributions using SHAP. I used the test dataset to calculate shap_values, but I am getting the following error:
"ValueError: shape mismatch: objects cannot be broadcast to a single shape"
and here is my code:
e = shap.DeepExplainer(model, X_test)
shap_values = e.shap_values(X_test)
shap.summary_plot(shap_values[0], X_test, plot_type="bar")
X_test is an array with dimensions (787, 23, 23, 11), and shap_values[0] is also an array with the same dimensions: 787 images, each 23 x 23 with 11 bands.
I would be grateful if someone can help.
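One likely cause (a guess, assuming the shapes quoted above): `shap.summary_plot` expects 2-D arrays (samples x features), not 4-D image tensors, so the broadcast fails. A sketch of two common workarounds, shown here with NumPy stand-ins so the reshaping logic is explicit:

```python
import numpy as np

# Stand-ins with the same shapes as in the question.
X_test = np.zeros((787, 23, 23, 11))
shap_vals = np.zeros((787, 23, 23, 11))

# Option 1: flatten each image into one feature vector per sample,
# so summary_plot receives 2-D (samples x features) inputs.
X_flat = X_test.reshape(787, -1)           # (787, 5819)
sv_flat = shap_vals.reshape(787, -1)       # (787, 5819)

# Option 2: aggregate over the spatial dimensions to get one
# contribution per band (often more readable with only 11 bands).
sv_per_band = shap_vals.sum(axis=(1, 2))   # (787, 11)
X_per_band = X_test.mean(axis=(1, 2))      # (787, 11)

# Then, e.g.: shap.summary_plot(sv_per_band, X_per_band, plot_type="bar")
print(X_flat.shape, sv_per_band.shape)
```

Option 2 directly answers "which band contributes most", which is usually the goal in susceptibility mapping.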
  • asked a question related to Artificial Neural Networks
Question
4 answers
Artificial Neural Network (ANN)
Relevant answer
Answer
PyTorch.
  • asked a question related to Artificial Neural Networks
Question
6 answers
In psychology, attention is the cognitive process of selectively concentrating on one or a few things while ignoring others. A neural network is considered to be an effort to mimic human brain actions in a simplified manner. Attention Mechanism is also an attempt to implement the same action of selectively concentrating on a few relevant things while ignoring others in deep neural networks. So, is the attention mechanism able to make an artificial neural network focus on a specific target?
Relevant answer
Answer
Thank you for the answer, Saman Ghaffarian. It was valuable and helped me get new ideas.
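Mechanically, attention "focuses" by computing softmax weights over the input positions and taking a weighted sum, so the network does attend more to some inputs than others. A minimal scaled dot-product attention sketch in NumPy (random data, purely illustrative):

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: the weights say how much each
    query 'concentrates' on each input position."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # query-key similarities
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(1)
Q = rng.normal(size=(2, 4))   # 2 queries
K = rng.normal(size=(5, 4))   # 5 keys (input positions)
V = rng.normal(size=(5, 3))   # 5 values
out, w = attention(Q, K, V)
print(out.shape, w.sum(axis=1))  # (2, 3), each weight row sums to 1
```

Because the weights are a probability distribution, positions with near-zero weight are effectively "ignored", which is the mechanism behind the selective-focus analogy in the question.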
  • asked a question related to Artificial Neural Networks
Question
4 answers
I have published below 20 BREAKTHROUGH articles in Elsevier SSRN Pre-prints:
1) AI++ : Artificial Intelligence Plus Plus
2) Artificial Excellence - A New Branch of Artificial Intelligence
and 18 others.
You can find 20 articles at below Elsevier SSRN pre-prints link:
Is publishing 20 BREAKTHROUGH articles in Elsevier SSRN preprints equivalent to publishing 20 BREAKTHROUGH articles in Elsevier journals?
Please kindly explain your answer.
Relevant answer
Answer
Authors can share their preprint anywhere at any time. If accepted for publication, we encourage authors to link from the preprint to their formal publication via its Digital Object Identifier (DOI).
Kind Regards
Qamar Ul Islam
  • asked a question related to Artificial Neural Networks
Question
10 answers
U-Net neural networks are similar to GANs in that they consist of a contracting path and an expansive path.
In this regard, can we consider U-Net part of the Generative Adversarial Network (GAN) architecture? Can GANs be used for segmenting pictures, similar to U-Net?
Relevant answer
Answer
U-Net is similar to an auto-encoder, which can learn a latent representation and reconstruct an output with the same size as the input. To meet the generation requirement, one study reported concatenating a Gaussian variable into the latent representation to ensure the network does not generate the same image each time. An interesting article relating to this answer, "Generative Adversarial U-Net for Domain-free Medical Image Augmentation", is available here: https://arxiv.org/pdf/2101.04793.pdf
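To make the architectural point concrete: U-Net itself is a single encoder-decoder network with skip connections, not an adversarial pair. A toy NumPy sketch of the U-Net shape flow (pooling and upsampling only, no learned convolutions):

```python
import numpy as np

# Toy "U-Net" shape flow: the contracting path downsamples, the
# expansive path upsamples and concatenates the encoder's skip
# connection along the channel axis.
def downsample(x):                        # 2x2 average pooling
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def upsample(x):                          # nearest-neighbour 2x upsampling
    return x.repeat(2, axis=0).repeat(2, axis=1)

x = np.random.default_rng(0).normal(size=(16, 16, 8))
enc = downsample(x)                       # (8, 8, 8)   contracting path
dec = upsample(enc)                       # (16, 16, 8) expansive path
skip = np.concatenate([dec, x], axis=-1)  # (16, 16, 16) skip connection
print(skip.shape)
```

A GAN only enters the picture if a U-Net is used as the generator (or, as in pix2pix-style models, if a discriminator is trained against the U-Net's segmentation outputs); the two are complementary rather than the same architecture.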
  • asked a question related to Artificial Neural Networks
Question
4 answers
Can anyone please recommend to me some high-quality articles on utilizing artificial neural networks (ANN) to model PV/T systems? Please also recommend me some MATLAB and Python code/equations for calculating the exit temperature.
Relevant answer
Answer
Thank you so much for your support respected professor Dr. Amir Heydari.
  • asked a question related to Artificial Neural Networks
Question
8 answers
I need to calculate the accuracy, precision, recall, specificity, and F1 score for my Mask R-CNN model. Hence, I hope to calculate the confusion matrix over the whole dataset first to get the TP, FP, TN, and FN values. But I noticed that almost all the available solutions for calculating the confusion matrix only output the TP, FP, and FN values. So how can I calculate metrics like accuracy and specificity, which include TN as a parameter? Should I consider TN to be 0 during the calculations?
Relevant answer
Answer
I am afraid that for general object detection tasks, there is no such thing as the number of true negatives. The definition of a true negative for an object detector would be something like: the number of boxes that contain no object and that are correctly not detected. There are infinitely many such boxes and non-detections, so calculating TN here makes no sense. This rules out all metrics that depend on true negatives, such as accuracy and specificity, and I have not come across any report that mentions those numbers for object detection tasks. All the other mentioned metrics rely only on TP, FP, and FN, which are easy to compute and make sense for this task.
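The TN-free metrics from the answer above can be computed directly from the three counts; a short sketch with hypothetical counts:

```python
# Detection-style metrics from TP, FP, FN only; TN is undefined for
# object detection, so accuracy and specificity are not computed.
def precision(tp, fp):
    return tp / (tp + fp) if tp + fp else 0.0

def recall(tp, fn):
    return tp / (tp + fn) if tp + fn else 0.0

def f1_score(tp, fp, fn):
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r) if p + r else 0.0   # harmonic mean

# Hypothetical counts accumulated over the whole dataset.
tp, fp, fn = 80, 10, 20
print(precision(tp, fp), recall(tp, fn), f1_score(tp, fp, fn))
```

Setting TN = 0 would make accuracy collapse to TP / (TP + FP + FN), which is a different quantity altogether, so it is better to report precision, recall, and F1 (or mAP) instead.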
  • asked a question related to Artificial Neural Networks
Question
4 answers
I think that Generative Adversarial Networks can be used as a means of data farming. What do you know about such an approach? Can you give another example of a means for data farming?
Relevant answer
Answer
I think it's a recent technology. I recently read an article about this topic: in this approach, two neural networks compete with each other so that their predictions become progressively more accurate.
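The "two competing networks" idea can be shown at toy scale. Below is a minimal 1-D GAN sketch with hand-derived gradients: a linear generator g(z) = a·z + b learns to imitate samples from N(3, 0.5) against a logistic discriminator d(x) = sigmoid(w·x + c). All hyperparameters are illustrative, and real GANs use deep networks and autodiff instead:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
real_mu, real_sd = 3.0, 0.5        # the "real" data distribution
a, b = 1.0, 0.0                    # generator parameters
w, c = 0.1, 0.0                    # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    real = rng.normal(real_mu, real_sd, batch)
    z = rng.normal(size=batch)
    fake = a * z + b

    # --- Discriminator step: push d(real) -> 1, d(fake) -> 0 ---
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean(-(1 - dr) * real + df * fake)
    grad_c = np.mean(-(1 - dr) + df)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator step: push d(fake) -> 1 (non-saturating loss) ---
    df = sigmoid(w * fake + c)
    grad_a = np.mean(-(1 - df) * w * z)
    grad_b = np.mean(-(1 - df) * w)
    a -= lr * grad_a
    b -= lr * grad_b

fake = a * rng.normal(size=1000) + b
print(np.mean(fake))  # drifts toward the real mean of 3
```

For data farming, the trained generator is then sampled to produce arbitrarily many synthetic examples resembling the real data.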