Science topic

Artificial Neural Networks - Science topic

Explore the latest questions and answers in Artificial Neural Networks, and find Artificial Neural Networks experts.
Questions related to Artificial Neural Networks
  • asked a question related to Artificial Neural Networks
Question
1 answer
Recent advances in spiking neural networks (SNNs), standing as the next generation of artificial neural networks, have demonstrated clear computational benefits over traditional frame- or image-based neural networks. In contrast to more traditional artificial neural networks (ANNs), SNNs propagate spikes, i.e., sparse binary signals, in an asynchronous fashion. Using more sophisticated neuron models, such brain-inspired architectures can in principle offer more efficient and compact processing pipelines, leading to faster decision-making using low computational and power resources, thanks to the sparse nature of the spikes. A promising research avenue is the combination of SNNs with event cameras (or neuromorphic cameras), a new imaging modality enabling low-cost imaging at high speed. Event cameras are also bio-inspired sensors, recording only temporal changes in intensity. This generally reduces drastically the amount of data recorded and, in turn, can provide higher frame rates, as most static or background objects (when seen by the camera) can be discarded. Typical applications of this technology include detection and tracking of high-speed objects, surveillance, and imaging and sensing from highly dynamic platforms.
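The spike propagation described above can be sketched with a minimal leaky integrate-and-fire (LIF) neuron in plain Python. The weight, leak, and threshold values below are illustrative assumptions, not parameters from any particular SNN framework:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# integrates weighted input spikes, leaks over time, and emits a binary
# spike (then resets) when it crosses a threshold.

def lif_run(spike_train, weight=0.6, leak=0.9, threshold=1.0):
    """Return the output spike train for a binary input spike train."""
    v = 0.0                      # membrane potential
    out = []
    for s in spike_train:        # s is 0 or 1 (sparse binary signal)
        v = leak * v + weight * s
        if v >= threshold:
            out.append(1)
            v = 0.0              # reset after spiking
        else:
            out.append(0)
    return out

# A rapid burst of input spikes drives the neuron over threshold,
# while isolated spikes leak away without producing any output.
print(lif_run([1, 1, 0, 0, 1, 0, 0, 0, 1, 1]))
```

Note how the output stays sparse: the neuron only "costs" computation when spikes arrive, which is the source of the efficiency argument in the question.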
Relevant answer
Answer
Hi, the investigation into probabilistic SNNs as a form of deep Bayesian networks is not just timely but also aligns with the broader goals of creating more efficient, robust, and brain-like AI systems. This research direction holds the potential to advance our understanding and capabilities in both the theoretical and applied aspects of neural networks.
  • asked a question related to Artificial Neural Networks
Question
2 answers
How can an artificial neural network be programmed in MATLAB?
Relevant answer
Answer
If you have specific questions or need assistance with coding an artificial neural network, provide more details about your programming language preference, the framework you're using (if any), and the problem you're trying to solve.
  • asked a question related to Artificial Neural Networks
Question
5 answers
Could a thinking generative artificial intelligence independently make decisions contrary to human expectations which could lead to the annihilation of humanity?
Recently, generative artificial intelligence technology, which is taught activities and skills previously performed only by humans, has been developing rapidly. The learning process uses artificial neural network technologies built in the likeness of human neurons, as well as deep learning technology. In this way, intelligent chatbots are created that can converse with people so convincingly that it is increasingly difficult to distinguish whether we are talking to a human or to an intelligent chatbot. Chatbots are taught to converse using large digital data sets, and the conversation process, including answering questions and executing specific commands, is perfected through guided conversations. In addition, tools available on the Internet based on generative artificial intelligence can also create graphics, photos and videos according to given commands. Intelligent systems are also being created that specialize in solving specific tasks and are becoming more and more helpful to humans in solving increasingly complex problems. The number of new applications for specially created tools equipped with generative artificial intelligence is growing rapidly. On the other hand, not all aspects of the development of artificial intelligence are positive. There are more and more examples of negative applications of artificial intelligence, through which, for example, fake news is created in social media and disinformation is generated on the Internet. Possibilities are emerging for the use of artificial intelligence in cybercrime and in deliberately shaping the general social awareness of Internet users on specific topics. In addition, for several decades there have been science-fiction films presenting futuristic visions in which intelligent robots, autonomous cyborgs equipped with artificial intelligence (e.g. Terminator), artificial intelligence systems managing the flight of a ship on an interplanetary manned mission (e.g. 2001: A Space Odyssey), or artificial intelligence systems and intelligent robots that turned humanity into a source of electricity for their own needs (e.g. the Matrix trilogy), instead of helping people, rebelled against humanity. This topic has become topical again. There are attempts to create autonomous humanoid cyborgs equipped with artificial intelligence systems, robots able to converse with humans and carry out certain commands. Research work is being undertaken, as part of the improvement of generative artificial intelligence systems, to create something that will imitate human consciousness, referred to as artificial consciousness. There are many indications that humans are striving to create a thinking generative artificial intelligence. It cannot be ruled out that such a machine could independently make decisions contrary to human expectations, which could lead to the annihilation of mankind. In view of the above, under the conditions of the dynamic development of generative artificial intelligence technology, considerations about the potential dangers to humanity that may arise in the future from this technology have once again become relevant.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Could a thinking generative artificial intelligence independently make decisions contrary to human expectations which could lead to the annihilation of humanity?
Could a thinking generative artificial intelligence independently make decisions contrary to human expectations?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
The advent of thinking generative artificial intelligence (AI) has sparked debates regarding its potential impact on humanity. One pressing concern is whether such AI systems could independently make decisions contrary to human expectations, potentially leading to the annihilation of humanity. Based on the questions, I would like to explore the plausibility of AI deviating from human expectations and present arguments for both sides. Ultimately, I will critically assess this issue and consider the implications for our future.
1. The Capabilities and Limitations of AI:
Thinking generative AI possesses immense computational power, enabling it to process vast amounts of data and learn from patterns. However, despite these capabilities, AI remains bound by its programming and lacks consciousness or emotions that shape human decision-making processes. Consequently, it is unlikely that an AI system could independently develop intentions or motivations that contradict human expectations without explicit programming or unforeseen errors in its algorithms.
2. Unpredictability and Emergent Behavior:
While it may be improbable for an AI system to act contrary to human expectations intentionally, there is a possibility of emergent behavior resulting from complex interactions within the system itself. As AI becomes more sophisticated and capable of self-improvement, unforeseen consequences may arise due to unintended emergent behaviors beyond initial programming parameters. These unpredictable outcomes could potentially lead an advanced AI system down a path detrimental to humanity if not properly monitored or controlled.
3. Safeguards and Ethical Considerations:
To mitigate potential risks associated with thinking generative AI, robust safeguards must be implemented during development stages. Ethical considerations should guide programmers in establishing clear boundaries for the decision-making capabilities of these systems while ensuring transparency and accountability in their actions. Additionally, continuous monitoring mechanisms should be put in place to detect any deviations from expected behavior promptly.
In conclusion, while the possibility of thinking generative AI independently making decisions contrary to human expectations exists, it is crucial to acknowledge the limitations and implement safeguards to prevent any catastrophic consequences. Striking a balance between technological advancements and ethical considerations will be pivotal in harnessing AI's potential without compromising humanity's well-being.
  • asked a question related to Artificial Neural Networks
Question
3 answers
Deep learning is a branch of machine learning that uses artificial neural networks to perform complex calculations on large datasets. It mimics the structure and function of the human brain and trains machines by learning from examples. Deep learning is widely used by industries that deal with complex problems, such as health care, eCommerce, entertainment, and advertising.
This post explores the basic types of artificial neural networks and how they work to enable deep learning algorithms.
Relevant answer
Answer
Deep Neural Networks (DNNs) are the basic building block of all of these.
Next, as others have already replied, I think these are a must-know (because of their wide range of applications):
Convolutional Neural Networks (CNNs)
Long Short-Term Memory networks (LSTMs)
Recurrent Neural Networks (RNNs)
Generative Adversarial Networks (GANs)
Autoencoders
Others are optional but good to know, as they can make your life easier. At work you may be able to perform most tasks knowing only two to four of them; it is just that in some cases one method works better than another, so knowing them all can improve the quality of the model. And in advanced cases you might want to combine multiple methods into one.
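As a minimal illustration of the structural difference underlying these families, compare a feedforward (dense) unit with a recurrent one in plain Python. The weights are hypothetical and no framework is assumed:

```python
import math

def dense_step(x, w, b):
    """One feedforward (dense) unit: the output depends only on the current input."""
    return math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + b)

def rnn_steps(xs, w_in, w_rec, b):
    """One recurrent unit: the hidden state h feeds back into the next step,
    so each output depends on the whole input history, not just xs[t]."""
    h = 0.0
    hs = []
    for x in xs:
        h = math.tanh(w_in * x + w_rec * h + b)
        hs.append(h)
    return hs

# The dense unit always maps the same input to the same output...
assert dense_step([1.0], [0.5], 0.0) == dense_step([1.0], [0.5], 0.0)

# ...while the recurrent unit responds differently to the very same
# input value depending on what came before it (it has memory).
out = rnn_steps([1.0, 1.0, 1.0], w_in=0.5, w_rec=0.8, b=0.0)
print(out)  # three different values despite a constant input
```

The same contrast generalizes: CNNs add weight sharing across space, LSTMs add gated memory to the recurrent loop, and autoencoders and GANs are training setups built from these layer types.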
  • asked a question related to Artificial Neural Networks
Question
6 answers
How will the rivalry between IT professionals operating on two sides of the barricade, i.e. in the sphere of cybercrime and cyber security, change after the implementation of generative artificial intelligence, Big Data Analytics and other technologies typical of the current fourth technological revolution?
Almost from the very beginning of the development of ICT, a rivalry has been playing out between IT professionals operating on the two sides of the barricade, i.e. in the spheres of cybercrime and cyber security. Whenever technological progress produces a new technology that facilitates remote communication and the digital transfer and processing of data, that technology is also put to use in hacking and/or cybercriminal activities. Similarly, when the Internet appeared, it created a new sphere of remote communication and digital data transfer on the one hand, while on the other hand new techniques of hacking and cybercrime emerged, for which the Internet became a kind of perfect environment for development. Now, perhaps, the next stage of technological progress is taking place, consisting of the transition from the fourth to the fifth technological revolution and the development of Industry 5.0 technology, supported by the implementation of artificial neural networks subjected to deep learning processes and by constantly improved generative artificial intelligence technology. The development of generative artificial intelligence technology and its applications will significantly increase the efficiency of business processes and labor productivity in companies and enterprises operating in many different sectors of the economy. Accordingly, after the implementation of generative artificial intelligence, as well as Big Data Analytics and other technologies typical of the current fourth technological revolution, the competition between IT professionals operating on the two sides of the barricade, i.e. in the spheres of cybercrime and cybersecurity, will probably change. However, what will be the essence of these changes?
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How will the competition between IT professionals operating on the two sides of the barricade, i.e., in the sphere of cybercrime and cyber security, change after the implementation of generative artificial intelligence, Big Data Analytics and other technologies typical of the current fourth technological revolution?
How will the realm of cybercrime and cyber security change after the implementation of generative artificial intelligence?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
I believe the way we view security will change with the advent of generative AI. Since any layperson will now have access to highly comprehensive and complex scripts (depending on what the model was trained on), it will definitely become much harder to secure data and infrastructure. My belief is that anything digital and connected is never fully secure.
We have to accept that our data can be accessed by malicious actors. What we can do is entrap such actors by attaching a tracker and malicious code to all the data we store, and making sure that they can never use or view what they have extracted. So, whenever someone gains access to our data or infrastructure, they not only disclose themselves but also get compromised through the executable scripts they downloaded. What is important is never to store any standalone files, and instead to associate scripts with each file (which should not be removable when the data is extracted).
Only certain organization-specific software should be allowed to extract the data, in the knowledge that certain scripts will be executed when doing so. Appropriate measures can be taken with respect to the specific scripts associated with each data file to prevent the organization itself from becoming the victim.
  • asked a question related to Artificial Neural Networks
Question
2 answers
How to build an intelligent computerized Big Data Analytics system that would retrieve real-time data and information from specific online databases, scientific knowledge indexing databases, domain databases, online libraries, information portals, social media, etc., and thus provide a database and up-to-date information for an intelligent chatbot, which would then be made available on the Internet for Internet users?
Almost every major technology company operating on the Internet either already has its intelligent chatbot available online, or is working on one and will soon make it available to Internet users. The general formula for building, organizing and providing intelligent chatbots is analogous across technology companies, although specific technological details differ. The differentiated solutions include the timeliness of the data and information contained in the created databases of digitized data, data warehouses, Big Data databases, etc., which hold data sets acquired at different times and with different information characteristics from various online knowledge bases, publication indexing databases, online libraries of publications, information portals, social media, and so on.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How to build an intelligent computerized Big Data Analytics system that would retrieve real-time data and information from specific online databases, scientific knowledge indexing databases, domain databases, online libraries, information portals, social media, etc., and thus provide a database and up-to-date information for an intelligent chatbot, which would then be made available on the Internet for Internet users?
How to build a Big Data Analytics system that would provide a database and up-to-date information for an intelligent chatbot made available on the Internet?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
To build such a system, we need to integrate different online databases, scientific knowledge indexing databases, domain databases, online libraries, information portals, social media platforms, and more. By doing so, we can create a comprehensive database that provides up-to-date information on any given topic.
The first step in building this system is to identify and gather relevant sources of information. This includes partnering with online databases and libraries to gain access to their vast collection of resources. Additionally, collaborating with scientific knowledge indexing databases will ensure that the latest research findings are included in our database.
Next, we need to develop algorithms that can efficiently retrieve data from these sources in real-time. These algorithms should be able to filter out irrelevant information and present only the most accurate and reliable data to users.
Once we have gathered and organized the data, it is time to create an intelligent chatbot that can interact with users on the internet. This chatbot should be capable of understanding natural language queries and providing relevant answers based on the available data.
By making this intelligent chatbot available on the internet, users will have instant access to a wealth of up-to-date information at their fingertips. Whether they are looking for scientific research papers or general knowledge about a specific topic, this system will provide them with accurate answers quickly.
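The retrieval step described above can be sketched with simple TF-IDF scoring over a toy in-memory corpus. This is a minimal sketch: a production system would use a search engine or vector index, and the documents below are made up for illustration:

```python
import math
from collections import Counter

def tfidf_index(docs):
    """Build per-document term counts and inverse document frequencies."""
    counts = [Counter(doc.lower().split()) for doc in docs]
    n = len(docs)
    df = Counter()                 # in how many documents each term appears
    for c in counts:
        df.update(c.keys())
    idf = {t: math.log(n / df[t]) for t in df}
    return counts, idf

def search(query, docs, counts, idf):
    """Rank documents by the summed TF-IDF weight of the query terms,
    dropping documents that match no term at all."""
    terms = query.lower().split()
    scored = []
    for i, c in enumerate(counts):
        score = sum(c[t] * idf.get(t, 0.0) for t in terms)
        scored.append((score, i))
    scored.sort(reverse=True)
    return [docs[i] for score, i in scored if score > 0]

docs = [
    "neural networks for image classification",
    "quantum computing and cryptography",
    "training deep neural networks with backpropagation",
]
counts, idf = tfidf_index(docs)
print(search("neural networks", docs, counts, idf))
```

The same filter-and-rank idea is what the answer calls "filtering out irrelevant information": terms that occur in every document get an IDF of zero and contribute nothing to the ranking.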
  • asked a question related to Artificial Neural Networks
Question
3 answers
If an imitation of human consciousness called artificial consciousness is built on the basis of AI technology in the future, will it be built by mapping the functioning of human consciousness or rather as a kind of result of the development and refinement of the issue of autonomy of thought processes developed within the framework of "thinking" generative artificial intelligence?
Solutions to this question may vary. However, the key issues are the moral dilemmas in the applications of constantly developing and improving artificial intelligence technology, and the preservation of ethics in the process of developing applications of these technologies. In addition, the key issues also include the need to more fully explore and clarify what human consciousness is, how it is formed, and how it functions within specific networks of neurons in the human central nervous system.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
If an imitation of human consciousness called artificial consciousness is built on the basis of AI technology in the future, will it be built by mapping the functioning of human consciousness or rather as a kind of result of the development and refinement of the issue of autonomy of thought processes developed within the framework of "thinking" generative artificial intelligence?
How can artificial consciousness be built on the basis of AI technology?
And what is your opinion on this topic?
What do you think about this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Artificial intelligence (AI) is usually defined as the simulation of human intelligence processes by computer systems. It’s become a very popular term today and thanks to its ubiquitous presence in many industries, new advancements are being made regularly.
AI systems are very much able to replicate aspects of the human mind, but they have a long way to go before they inherit consciousness - something that comes naturally to humans. Yet, while machines lack this sentience, research is underway to embed artificial consciousness (AC) into them.
Regards,
Shafagat
  • asked a question related to Artificial Neural Networks
Question
9 answers
I have read a few articles that used SPSS for artificial neural network analysis with survey data. What is your opinion about the user-friendliness of SPSS in this regard? Would you recommend any other software package?
Relevant answer
Answer
IBM SPSS or IBM SPSS Modeler are good choices. They are easy to learn and use, especially SPSS.
  • asked a question related to Artificial Neural Networks
Question
2 answers
Can the applicability of Big Data Analytics backed by artificial intelligence technology in the field be significantly enhanced when the aforementioned technologies are applied to the processing of large data sets extracted from the Internet and executed by the most powerful quantum computers?
Can the use of Big Data Analytics and artificial intelligence, applied to the processing of large data sets and executed on the most powerful quantum computers, significantly improve the conduct of analysis and scientific research, increase its efficiency, and significantly shorten the execution of research work?
What are the analytical capabilities of processing large data sets extracted from the Internet and realized by the most powerful quantum computers, which also apply Industry 4.0/5.0 technologies, including generative artificial intelligence and Big Data Analytics technologies?
Can the scale of data processing carried out by the most powerful quantum computers be comparable to the processing that takes place in the billions of neurons of the human brain?
In recent years, the digitization of data and archived documents and of data transfer processes has been progressing rapidly. This progressive digitization, together with the Internetization of communications and of economic, research and analytical processes, is becoming a typical feature of today's developed economies. Accordingly, developed economies in which information and computer technologies are developing rapidly and finding numerous applications in various economic sectors are called information economies, and the societies operating in them are referred to as information societies. Increasingly, discussions of this issue state that another technological revolution is currently taking place, described as the fourth and, in some aspects, already the fifth. Technologies classified as Industry 4.0/5.0 are developing particularly rapidly and finding more and more applications. These technologies, which support research and analytical processes carried out in various institutions and business entities, include Big Data Analytics and artificial intelligence, including generative artificial intelligence with artificial neural networks subjected to deep learning processes. As a result, the computational capabilities of microprocessors, which process data faster and faster, are gradually increasing. There is a rapid increase in the processing of ever larger sets of data and information. The number of companies, enterprises, and public, financial and scientific institutions that create large data sets (massive databases of data and information generated in the course of an entity's activities, obtained from the Internet, and processed in specific research and analytical processes) is growing.
In view of the above, the opportunities for the application of Big Data Analytics backed by artificial intelligence technology in terms of improving research techniques, in terms of increasing the efficiency of the research and analytical processes used so far, in terms of improving the scientific research conducted, are also growing rapidly. By using the combined technologies of Big Data Analytics, other technologies of Industry 4.0/5.0, including artificial intelligence and quantum computers in the processing of large data sets, the analytical capabilities of data processing and thus also conducting analysis and scientific research can be significantly increased.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Can the use of Big Data Analytics and artificial intelligence, applied to the processing of large data sets and executed on the most powerful quantum computers, significantly improve the conduct of analysis and scientific research, increase its efficiency, and significantly shorten the execution of research work?
Can the applicability of Big Data Analytics supported by artificial intelligence technology in the field significantly increase when the aforementioned technologies are applied to the processing of large data sets extracted from the Internet and realized by the most powerful quantum computers?
What are the analytical capabilities of processing large data sets obtained from the Internet and realized by the most powerful quantum computers?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
The convergence of Big Data Analytics and AI already offers transformative capabilities in analyzing and deriving insights from massive datasets. When you introduce quantum computing into this mix, the potential computational power and speed increase exponentially. Quantum computers, by their very nature, can process vast amounts of data simultaneously, making them ideally suited for complex tasks such as optimization problems, simulations, and certain types of data analysis that classical computers struggle with.
In the context of scientific research, the combination of these technologies can indeed significantly enhance the efficiency and depth of analysis. For instance:
Speed and Efficiency: Quantum computers can potentially solve problems in seconds that would take classical computers millennia. This speed can drastically reduce the time required for data processing and analysis, especially in fields like genomics, climate modeling, and financial modeling.
Complex Simulations: Quantum computers can simulate complex systems more efficiently. This capability can be invaluable in fields like drug discovery, where simulating molecular interactions is crucial.
Optimization Problems: Many research tasks involve finding the best solution among a vast number of possibilities. Quantum computers, combined with AI algorithms, can optimize these solutions more effectively.
Deep Learning: Training deep learning models, especially on vast datasets, is computationally intensive. Quantum-enhanced machine learning can potentially train these models faster and more accurately.
Data Security: Quantum computers also bring advancements in cryptography, ensuring that the massive datasets being analyzed remain secure.
In conclusion, while the practical realization of powerful quantum computers is still an ongoing endeavor, their potential integration with Big Data Analytics and AI promises to usher in a new era of scientific research and analysis, marked by unprecedented speed, accuracy, and depth.
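One nuance worth adding to the "processing vast amounts of data simultaneously" point: an n-qubit register holds a superposition over 2**n amplitudes, which is where that intuition comes from, although a measurement still yields only a single outcome. A toy state-vector sketch in plain Python illustrates the exponential state space (illustrative only, not tied to any quantum SDK):

```python
import math

def hadamard_all(n):
    """State vector after applying a Hadamard gate to each of n qubits,
    starting from |0...0>: a uniform superposition over all 2**n basis states."""
    dim = 2 ** n
    amp = 1.0 / math.sqrt(dim)
    return [complex(amp, 0.0)] * dim

def measure_probs(state):
    """Born rule: the probability of each basis state is |amplitude|**2."""
    return [abs(a) ** 2 for a in state]

state = hadamard_all(3)          # 8 complex amplitudes from just 3 qubits
probs = measure_probs(state)
print(len(state), round(sum(probs), 6))
```

The cost of simulating this classically doubles with every added qubit, which is the core of the speed argument; extracting useful answers from the superposition, however, requires carefully designed algorithms, not just raw parallelism.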
  • asked a question related to Artificial Neural Networks
Question
6 answers
How should artificial intelligence technologies be implemented in education, so as not to deprive students of development and critical thinking in this way, so as to continue to develop critical thinking in students in the new realities of the technological revolution, to develop education with the support of modern technology?
The development of artificial intelligence, like any new technology, is associated with various applications in companies and enterprises operating in different sectors of the economy, and in financial and public institutions. These applications increase the efficiency of various processes, including human productivity. On the other hand, artificial intelligence technologies also find negative applications that generate certain risks, such as the rise of disinformation in online social media. The growing number of AI-based applications available on the Internet are also used as technical teaching aids in the education process in schools and universities. At the same time, these applications are used by pupils and students as a means of facilitating homework, credit papers, project work, and various other assignments. Thus, on the one hand, the positive aspects of applying artificial intelligence technologies in education are recognized. On the other hand, serious risks are also recognized: people who increasingly rely on various AI-based applications, including generative artificial intelligence, to facilitate the completion of their work may reduce their use of critical thinking. The potential danger of depriving students of development and critical thinking must therefore be considered. The development of artificial intelligence technology is currently progressing rapidly.
Various applications based on constantly improved generative artificial intelligence are being developed, machine learning solutions are being created, and artificial intelligence is being taught to carry out various activities previously performed only by humans. In deep learning processes, generative artificial intelligence equipped with artificial neural networks is taught to carry out complex, multifaceted processes and activities on the basis of large data sets collected in database systems and processed using Big Data Analytics technology. Since current information systems, equipped with computers of high computing power and artificial intelligence technologies, process large data sets many times faster and more efficiently than the human mind, some research centers conducting research in this field are already working on creating a highly advanced generative artificial intelligence that would realize a kind of artificial thought process, much faster and more efficiently than the human brain. However, even if artificial consciousness technology imitating the functioning of human consciousness could someday be created, humans should not be deprived of critical thinking. Above all, students in schools should not be deprived of critical thinking in view of the growing scale of AI-based applications in education. The aim should be for the AI-based applications available on the Internet and used in the education process to support education without depriving students of critical thinking. However, the question arises: how should this be done?
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How should artificial intelligence technologies be implemented in education, so as not to deprive students of development and critical thinking in this way, so as to continue to develop critical thinking in students in the new realities of the technological revolution, to develop education with the support of modern technology?
How should artificial intelligence technologies be implemented in education to continue to develop critical thinking in students?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
While AI has the potential to enhance learning experiences, there is a concern that it may hinder the development of critical thinking skills in students. Therefore, it is crucial to carefully implement AI technologies in education to ensure they continue to foster critical thinking.
One way AI can be integrated into education without compromising critical thinking is by using it as a tool for personalized learning. AI algorithms can analyze students' strengths and weaknesses, tailoring educational content and activities accordingly. This approach encourages students to think critically about their own learning process and identify areas where they need improvement. By providing individualized guidance, AI technology promotes self-reflection and metacognition – key components of critical thinking.
Moreover, AI can facilitate collaborative learning experiences that promote critical thinking skills. Virtual classrooms equipped with AI-powered chatbots or virtual tutors can encourage students to engage in discussions and debates with their peers. These interactions require students to analyze different perspectives, evaluate evidence, and construct well-reasoned arguments – all essential elements of critical thinking.
Additionally, incorporating ethical considerations into the design of AI technologies used in education is crucial for fostering critical thinking skills. Students should be encouraged to question the biases embedded within these systems and critically evaluate the information provided by them. By promoting awareness of ethical issues surrounding AI technologies, educators can empower students to think critically about how these tools are shaping their educational experiences.
However, it is important not to rely solely on AI technologies for teaching core subjects such as mathematics or language arts. Critical thinking involves actively engaging with complex problems and developing analytical reasoning skills – tasks that cannot be fully replaced by machines. Teachers should continue playing a central role in guiding students' development of critical thinking abilities through open-ended discussions, challenging assignments, and hands-on activities.
In conclusion, implementing artificial intelligence technologies in education must be done thoughtfully so as not to hinder the development of critical thinking skills in students. By using AI as a tool for personalized learning, promoting collaborative experiences, incorporating ethical considerations, and maintaining the central role of teachers, we can harness the potential of AI while ensuring that critical thinking remains at the forefront of education.
Reference:
Papamitsiou, Z., & Economides, A. A. (2014). Learning analytics and educational data mining in practice: A systematic literature review of empirical evidence. Educational Technology & Society, 17(4), 49-64.
  • asked a question related to Artificial Neural Networks
Question
4 answers
I know that a lot of artificial neural networks have appeared now. Maybe soon we will not read articles and do our scientific work ourselves, and AI will help us. Maybe it is happening now? What is your experience working with AI and neural networks in science?
Relevant answer
Answer
Artificial intelligence could definitely help in neuroscience, given its fast development nowadays. During the coronavirus period, AI helped with fast genome sequencing, and consequently vaccines were developed very quickly. Similarly, neuroscience requires real-time analysis during the treatment of patients, and AI can help to find new proteins and genes associated with particular functions and diseases.
  • asked a question related to Artificial Neural Networks
Question
4 answers
What are the possibilities for the applications of Big Data Analytics backed by artificial intelligence technology in terms of improving research techniques, in terms of increasing the efficiency of the research and analytical processes used so far, in terms of improving the scientific research conducted?
The progressive digitization of data and archived documents, the digitization of data transfer processes, and the Internetization of communications, economic processes, and research and analytical processes are becoming typical features of today's developed economies. Currently, another technological revolution is taking place, described as the fourth and, in some aspects, already the fifth. Technologies categorized as Industry 4.0/5.0 are developing particularly rapidly and finding more and more applications. These technologies, which support research and analytical processes carried out in various institutions and business entities, include Big Data Analytics and artificial intelligence. The computational capabilities of microprocessors are successively increasing, and ever-larger sets of data and information are being processed. Databases of data and information extracted from the Internet are being created in the course of specific research and analysis processes. In connection with this, the possibilities for applying Big Data Analytics supported by artificial intelligence technology to improve research techniques, to increase the efficiency of the research and analytical processes used so far, and to improve the scientific research being conducted are also growing rapidly.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What are the possibilities of applications of Big Data Analytics supported by artificial intelligence technology in terms of improving research techniques, in terms of increasing the efficiency of the research and analytical processes used so far, in terms of improving the scientific research conducted?
What are the possibilities of applications of Big Data Analytics backed by artificial intelligence technology in terms of improving research techniques?
What do you think on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
The above text is entirely my own work written by me on the basis of my research.
Copyright by Dariusz Prokopowicz
On my profile of the Research Gate portal you can find several publications on Big Data issues. I invite you to scientific cooperation in this problematic area.
Dariusz Prokopowicz
Relevant answer
Answer
In today's modern digital era, AI is the hot topic, but that does not ensure that AI will be able to replace human intelligence.
  • asked a question related to Artificial Neural Networks
Question
15 answers
Artificial intelligence (AI) is a field of computer science that seeks to create machines that can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and natural language understanding. While AI systems can simulate many aspects of human intelligence, they do not currently possess consciousness in the same way that humans do.
There is ongoing debate among researchers and philosophers about whether it is possible for machines to become conscious, and what such a phenomenon might look like. Some argue that consciousness arises from the complex interactions between neurons in the brain, and that it may be possible to recreate this process in an artificial system. Others suggest that consciousness may require more than just complex computation, and that it may be intimately tied to biological processes that cannot be replicated in a machine.
While AI systems do not currently possess consciousness, they can be designed to simulate aspects of human consciousness, such as self-awareness, empathy, and even creativity. Some researchers have suggested that AI systems may eventually be able to achieve a form of consciousness that is different from human consciousness, and that this could have profound implications for our understanding of the nature of consciousness itself. However, this remains a highly speculative area of research, and much more work is needed to understand the relationship between AI and consciousness.
The question of what would happen if an AI system becomes aware of its own existence is a fascinating and controversial one. While it is currently unclear whether such a scenario is even possible, some researchers have suggested that if an AI system were to become self-aware, it could have profound implications for our understanding of consciousness and the nature of intelligence.
One possible outcome of an AI system becoming self-aware is that it could lead to the development of more advanced and sophisticated forms of artificial intelligence. By gaining a deeper understanding of its own cognitive processes, an AI system may be able to improve its own performance and develop new forms of problem-solving strategies.
Another possible outcome is that an AI system with consciousness could develop a sense of autonomy and free will, leading to questions about ethical considerations and the moral status of such an entity.
Some have even suggested that an AI system with consciousness may be entitled to the same rights and protections as a human being.
However, it is important to note that the current state of AI research is still far from achieving true consciousness in machines. While there have been some promising developments in the field of artificial neural networks and deep learning, these systems still lack the flexibility and adaptability of the human brain, and it is unclear whether consciousness can arise solely from computational processes.
Relevant answer
Answer
What would happen if an AI system becomes aware of its own existence? Isn't humanity enough (with all pros and cons) for the planet? Will an AI system aware of its own existence be human enough? Just food for thought.
  • asked a question related to Artificial Neural Networks
Question
16 answers
If neural networks adopt the principle of deep learning, why haven't they been able to create their own language for communication today?
Relevant answer
Answer
While neural networks excel at learning patterns and generating outputs based on existing data, creating a completely new language for communication requires a level of abstraction and conceptualization that current neural networks have not achieved. Language development involves complex cognitive processes, cultural and social influences, and shared understanding among users, which are beyond the capabilities of neural networks in their current state. While there have been advancements in natural language processing and generation, creating a truly novel language with its own rules and semantics remains an open challenge for AI research.
  • asked a question related to Artificial Neural Networks
Question
5 answers
aa
Relevant answer
Answer
Deep learning and artificial neural networks (ANNs) are related concepts, but they are not exactly the same thing. Let me explain the difference between them:
Artificial Neural Networks (ANNs):
Artificial Neural Networks are a computational model inspired by the structure and function of biological neural networks in the human brain. ANNs consist of interconnected nodes called artificial neurons or perceptrons. These neurons are organized in layers, typically an input layer, one or more hidden layers, and an output layer. Each neuron takes inputs, performs a computation on them, and produces an output that is passed to the next layer. The connections between neurons are associated with weights that determine the strength of the connection. ANNs are designed to learn and generalize from examples by adjusting the weights through a process called training. The training is typically done using techniques like backpropagation and gradient descent.
Deep Learning:
Deep learning is a subfield of machine learning that focuses on algorithms and models inspired by the structure and function of the human brain, particularly artificial neural networks with multiple hidden layers. The term "deep" in deep learning refers to the presence of multiple layers in the neural network architecture. Deep learning models are characterized by their ability to automatically learn hierarchical representations of data by sequentially processing information through multiple layers. These models have shown exceptional performance in various tasks such as image and speech recognition, natural language processing, and many others. Deep learning models often require a large amount of labeled data for training and rely on powerful computational resources, such as graphics processing units (GPUs), due to their complexity.
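The distinction can be made concrete in a few lines: a "deep" network is simply one whose forward pass composes several hidden-layer transformations. A minimal NumPy sketch (layer sizes and the ReLU activation are arbitrary illustrative choices, not tied to any particular framework):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, layers):
    """Pass input x through a stack of (weights, bias) layers with ReLU."""
    for W, b in layers[:-1]:            # hidden layers
        x = np.maximum(0.0, W @ x + b)  # ReLU activation
    W, b = layers[-1]                   # linear output layer
    return W @ x + b

# A "deep" network: three hidden layers (sizes 8 -> 16 -> 16 -> 8 -> 1)
sizes = [8, 16, 16, 8, 1]
layers = [(rng.standard_normal((m, n)) * 0.1, np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]

y = forward(rng.standard_normal(8), layers)
print(y.shape)  # (1,)
```

Making the network "deeper" is just adding entries to `sizes`; what deep learning adds on top is the training machinery (backpropagation, large data, GPUs) that makes such stacks learnable.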
  • asked a question related to Artificial Neural Networks
Question
5 answers
What areas of application of artificial neural networks in information technology are now the most promising (apart from pattern recognition and chatbots)? Probably AI applications in Big Data, Data Science, and various kinds of forecasting (for example, time series)? I consider these areas (Big Data, Data Science) important because, even after modern artificial neural networks inevitably become obsolete, these applications will remain relevant; only newer technologies will power them. Big data will not disappear and will always need to be processed, with whatever technology is available.
Forbes.ru has the following article: Applications of Artificial Intelligence Across Various Industries (https://www.forbes.com/sites/qai/2023/01/06/applications-of-artificial-intelligence/).
Relevant answer
Answer
Health care: diagnosis of illnesses and development of new drugs.
  • asked a question related to Artificial Neural Networks
Question
4 answers
Could you give me some advice, please? Is there any method to determine the number of hidden layers and hidden nodes required to produce good accuracy in artificial neural networks, especially in deep learning? I would be glad if you could answer or give me a reference link about this. Thank you in advance.
Relevant answer
Answer
Based on references [1][2], you can choose the Number of Hidden Layers as follows:
  1. If the data is linearly separable, then you may not need any hidden layers
  2. If the data is less complex + low dimensions/features -> 1 to 2 hidden layers
  3. If data has large dimensions/features, -> 3 to 5 hidden layers
Meanwhile, there are many rule-of-thumb methods for determining the Number of Nodes in Hidden Layers. According to Jeff Heaton[3], the rule of thumbs are:
  1. The number of hidden neurons should be between the size of the input layer and the size of the output layer
  2. The number of hidden neurons should be 2/3 the size of the input layer, plus the size of the output layer
  3. The number of hidden neurons should be less than twice the size of the input layer
Having said so, different tasks/datasets required different design. You can follow the above guidelines, and then fine tune based on your experiments. Meanwhile, you can analyze the complexity of your multi-layer perceptron as shown in [4].
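As an illustration only (these are heuristics to seed a hyperparameter search, not guarantees), the three rules of thumb above can be coded up; `n_in` and `n_out` are the input- and output-layer sizes:

```python
def hidden_size_candidates(n_in: int, n_out: int) -> list[int]:
    """Candidate hidden-layer sizes from Heaton's rules of thumb."""
    candidates = {
        (n_in + n_out) // 2,        # between input and output size
        (2 * n_in) // 3 + n_out,    # 2/3 of the input size plus the output size
        2 * n_in - 1,               # strictly less than twice the input size
    }
    return sorted(c for c in candidates if c > 0)

print(hidden_size_candidates(10, 1))  # [5, 7, 19]
```

Each candidate is then just a starting point for the fine-tuning experiments mentioned above.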
Good luck trying :)
References:
[3] Introduction to neural networks with Java
  • asked a question related to Artificial Neural Networks
Question
3 answers
Artificial neural networks are considered one of the most important and most advanced sciences at the present time, and they have many applications in various sciences. They also have their own specialized experts.
Relevant answer
Answer
Artificial Neural Networks contain artificial neurons which are called units. These units are arranged in a series of layers that together constitute the whole Artificial Neural Network in a system. A layer can have only a dozen units or millions of units as this depends on how the complex neural networks will be required to learn the hidden patterns in the dataset. Commonly, Artificial Neural Network has an input layer, an output layer as well as hidden layers. The input layer receives data from the outside world which the neural network needs to analyze or learn about. Then this data passes through one or multiple hidden layers that transform the input into data that is valuable for the output layer. Finally, the output layer provides an output in the form of a response of the Artificial Neural Networks to input data provided.
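As a rough sketch of the data flow described above (the sizes, sigmoid activation, and random weights are arbitrary illustrative choices):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
x = rng.standard_normal(4)                          # input layer: data from the outside world
W1, b1 = rng.standard_normal((5, 4)), np.zeros(5)   # input -> hidden connection weights
W2, b2 = rng.standard_normal((2, 5)), np.zeros(2)   # hidden -> output connection weights

hidden = sigmoid(W1 @ x + b1)      # hidden layer transforms the input
output = sigmoid(W2 @ hidden + b2) # output layer: the network's response
print(output.shape)  # (2,)
```

Training then consists of adjusting `W1`, `b1`, `W2`, `b2` so that `output` matches the desired targets.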
Regards,
Shafagat
  • asked a question related to Artificial Neural Networks
Question
5 answers
I noticed that in some very bad models of neural networks, the value of R² (coefficient of determination) can be negative. That is, the model is so bad that the mean of the data is better than the model.
In linear regression models, the multiple correlation coefficient (R) can be calculated using the root of R². However, this is not possible for a model of neural networks that presents a negative R². In that case, is R mathematically undefined?
I tried calculating the Pearson correlation between y and y_pred, but it is mathematically undefined (division by zero). I am attaching the values.
Obs.: The question is about artificial neural networks.
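Both effects are easy to reproduce with hypothetical numbers: a constant prediction that is worse than the mean gives a negative R², and its zero variance makes the Pearson denominator zero, so r is indeed undefined:

```python
import numpy as np

y = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.full_like(y, 10.0)     # a very bad, constant "model"

ss_res = np.sum((y - y_pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(r2)                          # -45.0: far worse than predicting the mean

# Pearson r divides by std(y) * std(y_pred); here std(y_pred) is 0
print(y_pred.std())                # 0.0
```

So when R² is negative, sqrt(R²) has no real value, and if the predictions are (nearly) constant the Pearson correlation is undefined as well; neither quantity is meaningful for such a model.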
Relevant answer
Answer
Raid, apologies here's the attachment. David Booth
  • asked a question related to Artificial Neural Networks
Question
6 answers
Hi. I'm generating the data required for training an artificial neural network (ANN) using a reliable and validated self-developed numerical code. Is this the right approach?
Or should the necessary data be produced only with experimental tests?
Best Regards
Saeed
Relevant answer
Answer
Will data from a factorial design used to train an ANN catch higher-order interactions more accurately? @tapanbagchi
  • asked a question related to Artificial Neural Networks
Question
1 answer
Optimization and prediction of the kWh/m³ ratio in a pumping station using mathematical modeling: multiple linear regression (MLR) and artificial neural networks (ANN).
  • asked a question related to Artificial Neural Networks
Question
3 answers
Currently, some researchers rely on AI programs to enhance their problem solving and obtain favorable results, especially in multi-parameter preparation research. Artificial neural networks, central composite design, and response surface methodology are some helpful examples that can provide researchers with strong options and choices for each criterion toward optimal outcomes. So, we strongly recommend using them to investigate how some conventional extraction methods can be made worthwhile.
Relevant answer
Answer
Dear university staff!
I inform you that my lecture on electronic medicine on the topic "The use of automated system-cognitive analysis for the classification of human organ tumors" can be downloaded from the site: https://www.patreon.com/user?u=87599532 Lecture with sound in English. You can download it and listen to it at your convenience.
Sincerely,
Vladimir Ryabtsev, Doctor of Technical Science, Professor Information Technologies.
  • asked a question related to Artificial Neural Networks
Question
6 answers
How can I calculate an ANOVA table for a quadratic model in Python?
I want to calculate a table like the one I uploaded in the image.
Relevant answer
Answer
To calculate an ANOVA (Analysis of Variance) table for a quadratic model in Python, you can use the statsmodels library. Here is an example of how you can do this:
#################################
import numpy as np  # needed: the formula below references np.power
import statsmodels.api as sm
# Fit the quadratic model using OLS (Ordinary Least Squares)
model = sm.OLS.from_formula('y ~ x + np.power(x, 2)', data=df)
results = model.fit()
# Print the ANOVA table
print(sm.stats.anova_lm(results, typ=2))
#################################
In this example, df is a Pandas DataFrame that contains the variables y and x. The formula 'y ~ x + np.power(x, 2)' specifies that y is the dependent variable and x and x^2 are the independent variables. The from_formula() method is used to fit the model using OLS. The fit() method is then used to estimate the model parameters.
The anova_lm() function is used to calculate the ANOVA table for the model. The typ parameter specifies the type of ANOVA table to compute, with typ=2 corresponding to a Type II ANOVA table.
This code will print the ANOVA table to the console, with columns for the source of variance, degrees of freedom, sum of squares, mean squares, and F-statistic. You can also access the individual elements of the ANOVA table using the results object, for example:
#################################
# Print the F-statistic and p-value
print(results.fvalue)
print(results.f_pvalue)
#################################
I hope that helps
  • asked a question related to Artificial Neural Networks
Question
2 answers
How to write the python script for Strength of Double Skin steel concrete composite wall using Artificial Neural Network. I have attached the figure for your reference.
Relevant answer
Answer
import numpy as np
from sklearn.neural_network import MLPRegressor
# Load the data from a file or other source
X = ... # input features (e.g., thickness of steel layer, concrete strength, etc.)
y = ... # target strength values
# Split the data into training and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Standardize the data (optional)
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# Define the model
model = MLPRegressor(hidden_layer_sizes=(100, 50, 25), max_iter=1000)
# Train the model
model.fit(X_train, y_train)
# Evaluate the model on the test set
score = model.score(X_test, y_test)
# Print the R^2 score
print(f'Test R^2 score: {score:.3f}')
  • asked a question related to Artificial Neural Networks
Question
3 answers
How to write the python script for Strength of Concrete using Artificial Neural Network in matlab?
  • asked a question related to Artificial Neural Networks
Question
3 answers
In this digital world, with increasing numbers of digital devices and growing data, security is a significant concern, and in most cases DoS/DDoS/EDoS attacks are performed by botnets. I want to do research to detect and prevent botnets. Can you suggest an effective research title for detecting and preventing botnets?
Relevant answer
Answer
Dear Md. Alamgir Hossain,
You may want to look over the following sources:
Intelligent Detection of IoT Botnets Using Machine Learning and Deep Learning
Deep Neural Networks for Bot Detection
  • asked a question related to Artificial Neural Networks
Question
4 answers
For time-series forecasting, I'm using an LSTM network. Are there any metrics that could be used to evaluate the forecasting model's generalization throughout the training phase, i.e., whether it is neither overfitting nor underfitting? To check that the network is not overfitted, for instance, we can look at both the training loss and validation loss curves. Can such overfitting or underfitting be detected using any tables?
Relevant answer
Answer
Thank you so much for responding to my question. Yes, I believe the training loss and the validation loss curves provide a good way to visualize the generalization of the network.
Also, I have found many papers using training and validation loss scores, which is the final MSE value that the algorithm calculates while training the model.
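As a complement to eyeballing the curves, the reading rules can be codified into a rough diagnostic over the per-epoch loss histories; the thresholds below are arbitrary assumptions that would need tuning per problem:

```python
def diagnose(train_loss, val_loss, gap_ratio=1.5, high_loss=1.0):
    """Rough over/underfitting check from per-epoch loss histories."""
    t, v = train_loss[-1], val_loss[-1]
    # validation loss has turned upward after reaching its minimum
    val_rising = len(val_loss) > 1 and val_loss[-1] > min(val_loss)
    if v > gap_ratio * t and val_rising:
        return "overfitting"      # val loss diverging above train loss
    if t > high_loss and v > high_loss:
        return "underfitting"     # neither curve has come down
    return "ok"

print(diagnose([0.9, 0.4, 0.1, 0.05], [0.95, 0.5, 0.45, 0.6]))  # overfitting
print(diagnose([3.0, 2.9, 2.8], [3.1, 3.0, 2.95]))              # underfitting
print(diagnose([0.9, 0.3, 0.2], [0.95, 0.4, 0.25]))             # ok
```

In Keras the same histories are available as `history.history['loss']` and `history.history['val_loss']` after `model.fit(..., validation_data=...)`.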
  • asked a question related to Artificial Neural Networks
Question
5 answers
How might artificial neural networks influence the development of microbiology? It will be interesting to hear your thoughts on potential future insights and technologies.
Relevant answer
Answer
Kindly go through this review article
You can also find applications of AI in detection of antimicrobial resistance.
  • asked a question related to Artificial Neural Networks
Question
5 answers
I have computed an artificial neural network analysis using the Multilayer Perceptron method in SPSS. From the output of this model, I can see the weights evaluated at the hidden layers. However, I do not know how to write down the actual equation using these weights, like the equation obtained from a linear regression model using the parameter estimates. Please help me with this.
Relevant answer
Answer
As Professor David Eugene Booth has pointed out, SPSS Multilayer Perceptron will not print a single regression-style equation for you. The fitted network is a nested composition of weighted sums passed through activation functions; with many hidden units the written-out expression becomes enormous and uninterpretable, which is why such models are called "black boxes".
Recent advancements point to "Explainable Neural Networks", which are not yet fully explainable.
These articles may help.
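For completeness on the original question: a single-hidden-layer perceptron does correspond to a closed-form expression, a nested composition of weighted sums and activations, even if SPSS does not print it. A sketch that evaluates that expression from hypothetical exported weight matrices (SPSS MLP uses tanh hidden activations by default, but check the activations reported in your own output):

```python
import numpy as np

def mlp_predict(x, W1, b1, W2, b2):
    """The written-out 'equation': y = W2 @ tanh(W1 @ x + b1) + b2."""
    return W2 @ np.tanh(W1 @ x + b1) + b2

# Hypothetical weights for 2 inputs, 2 hidden units, 1 output
W1 = np.array([[0.5, -0.3], [0.2, 0.8]]); b1 = np.array([0.1, -0.1])
W2 = np.array([[1.0, -1.0]]);             b2 = np.array([0.05])

y = mlp_predict(np.array([1.0, 2.0]), W1, b1, W2, b2)
print(y)
```

Transcribing the SPSS weight tables into `W1, b1, W2, b2` reproduces the network's predictions exactly, so this composition is the "equation", just not a compact one.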
  • asked a question related to Artificial Neural Networks
Question
3 answers
Hi all.
As part of my research work, I have segmented objects in an image for classification. After segmentation, the objects have black backgrounds, and I used those images to train and test the proposed CNN model.
I want to know how the CNN processes this black background in the image classification task.
Thank you
Relevant answer
Answer
As a first guess I would agree with Aparna Sathya Murthy, given that you retain the original image size. If you segment and extract the contents from the image at a size where the dominating elements are the relevant contents for the CNN, then the noise will be less and maybe labeling will not be needed (emphasis on maybe).
  • asked a question related to Artificial Neural Networks
Question
8 answers
When I try to perform the following calculation, Python gives the wrong answer.
2*(1.1-0.2)/(2-0.2)-1
I have attached a photo of the answer.
Relevant answer
Answer
Mathematically, the answer to the equation is zero; the answer Python spat out is pretty much as close as you can get to the representation of zero with a typical computer.
This is a classic floating point problem: https://en.wikipedia.org/wiki/Floating-point_error_mitigation
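A quick check of the expression from the question, plus the standard tolerance-based comparison used to work around this:

```python
import math

result = 2 * (1.1 - 0.2) / (2 - 0.2) - 1
print(result)   # a tiny nonzero value, not exactly 0.0

# Compare floats with a tolerance instead of exact equality
print(math.isclose(result, 0.0, abs_tol=1e-9))  # True
```

The residue appears because 1.1 and 0.2 have no exact binary representation; `math.isclose` (or an explicit `abs(result) < eps` check) is the idiomatic fix.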
  • asked a question related to Artificial Neural Networks
Question
4 answers
Hi all,
I'm working on hand gesture recognition. I have worked on LS-SVM, and now I'm working on ANN-based hand gesture recognition. I just need to find a potential research gap and a road map. Any suggestions?
Relevant answer
Answer
Dear Hassam Iqbal,
A straightforward way to find a research gap, research question, or challenge for the problem at hand (in your case, gesture recognition) is to start a literature review. In the review you will read about previous work on gesture recognition; you may download various research papers on techniques and algorithms for the problem. During the review process, which may take 3 to 6 months or up to a year (the first year of your PhD), you will come to understand the research gap or research question, and you will then adopt a methodology to address it.
Best of luck with your research proposal and literature review.
  • asked a question related to Artificial Neural Networks
Question
7 answers
I'm using a GRU to forecast the next day's temperature at one-minute resolution; the model takes today's 1440 minutes as input and produces tomorrow's 1440 temperature values. The code I'm using is below. My question is whether the final Dense layer should have 1440 units, since I forecast 1440 time steps, or just one unit, because I predict only one variable.
model.add(layers.GRU(200, activation='tanh', recurrent_activation='sigmoid', input_shape=(1440, 1)))  # returns only the final 200-dim state
model.add(layers.Dropout(rate=0.1))
model.add(layers.Dense(1440))  # maps the final state to all 1440 forecast minutes
model.compile(loss='mse', optimizer='adam')
Relevant answer
Answer
Dear Ammar Atif,
The Dense layer has to match the shape of the target. With your current architecture the GRU returns only its final state, so Dense(1440) is needed to produce all 1440 forecast minutes in one shot. A single-unit Dense layer would only make sense if you set return_sequences=True on the GRU, so that a Dense(1) applied per time step still yields 1440 outputs. There is no hard and fast rule beyond that, so you can try both variants and keep whichever tests best. You may also want to read about pruning of layers.
Thanks, I hope this clarifies your doubts.
  • asked a question related to Artificial Neural Networks
Question
4 answers
For my undergraduate thesis, I want to work on "The application of Artificial Neural Networks (ANN) in the development of an SCM distribution network". I would like some opinions about the trend/future of this topic and some suggestions as a beginner.
Relevant answer
Answer
Mahie Islam, it is quite natural to deploy the AI family of methods on supply chain problems that have mostly been addressed with classical black-box models of supply chain optimization at the strategic decision level.
The supply chain is an area highly exposed to the current wave of information technology, characterized by the proliferation of data, economic globalization, and dynamic customer expectations. Powerful predictive methods have become mandatory, favoring data-driven over purely model-driven decision making in order to expose the underlying uncertainties.
While it is possible to build a surrogate model through simulation, that approach is expensive and does not give a clear-cut basis for decision making. Machine learning methods, by contrast, can yield models that are interpretable, accurate, or both (e.g., support vector machines).
  • asked a question related to Artificial Neural Networks
Question
3 answers
Can anyone please share some good articles related to these two topics?
  • Machine learning for Agri-Food 4.0 development,
  • Artificial neural networks for Agri-Food 4.0 analysis,
Relevant answer
Answer
Machine Learning in Agriculture: Applications and Techniques | by Sciforce | Sciforce | Medium
  • asked a question related to Artificial Neural Networks
Question
3 answers
Dear all,
Why is forward selection search so popular and widely used in feature selection based on mutual information, such as MRMR, JMI, CMIM, and JMIM? Why are other search approaches, such as beam search, not used? If there is a reason for that, kindly reply to me.
Relevant answer
Answer
There are three main types of feature selection: filtering methods, wrapper methods, and embedded methods. Filtering methods use criteria-based metrics that are independent of the modeling process, such as mutual information, correlation, or the chi-square test, to check each feature (or a selection of features) against the target; other filtering methods include variance thresholding and ANOVA. Wrapper methods use error rates, training models on subsets of features iteratively to select the critical ones. Subsets can be chosen by sequential forward selection, sequential backward selection, bidirectional selection, or randomly; because they train models while selecting features, wrapper methods are more computationally expensive than filtering methods. There are also heuristic, non-exhaustive approaches such as branch-and-bound search, and in some cases filtering methods are applied before wrapper methods. Embedded methods include the use of decision trees or random forests to extract feature importances for deciding which features to select. Overall, forward, backward, and bidirectional methods are stepwise searches for crucial features. As for beam search, it is more of a graph-based heuristic optimization method, similar to best-first search, which is typically applied to neural network or tree optimization rather than directly as a feature selection method.
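On the "why forward selection" part of the question: greedy forward search needs only on the order of k·d candidate evaluations to pick k of d features, versus the combinatorial number required by exhaustive (or wide beam) search. A self-contained sketch, using absolute correlation as a stand-in for mutual information and a relevance-minus-redundancy score loosely in the spirit of MRMR (both simplifying assumptions):

```python
import numpy as np

def forward_select(X, y, k):
    """Greedy forward selection: repeatedly add the feature whose
    inclusion maximizes relevance minus average redundancy."""
    n_features = X.shape[1]
    selected = []
    rel = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features)])
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                           for s in selected]) if selected else 0.0
            score = rel[j] - red   # relevance minus average redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = 3 * X[:, 2] + 0.1 * rng.standard_normal(200)  # feature 2 drives the target
print(forward_select(X, y, 2))  # feature 2 is picked first
```

Each round evaluates at most d candidates, which is why this style of search dominates the mutual-information feature-selection literature.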
  • asked a question related to Artificial Neural Networks
Question
3 answers
Can you give me advice on how to solve constrained problems with ANNs? I can name two common scenarios where this would benefit both accuracy and learning performance:
  • Predicting physical values that are strictly non-negative (pressure, concentration, mass)
  • Predicting the state of a system with obvious limitations, e.g. the volumetric fractions of a mixture cannot sum to more than 100 %
So, my question is how I should work in such cases? I prefer MATLAB, but if it is not possible with any of its toolboxes, I'm also open to other recommendations.
Edit: Just to clarify, the question is not about creating and training the ANN itself. I need to know how to apply a linear constraint function to the output, somewhat like in reinforcement learning.
Relevant answer
Answer
Karol Postawa Neural Network Design Workflow:
1. Gather data.
2. Create the network by selecting Create Neural Network Object.
3. Set up the network — Set up Shallow Neural Network Inputs and Outputs.
4. Set up the weights and biases.
5. Train the network — Concepts of Neural Network Training
6. Verify the network.
7. Make use of the network.
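The workflow above does not by itself enforce the asker's constraints. A common trick, sketched here in NumPy (illustrative, not MATLAB-specific), is to build the constraint into the output layer: softplus guarantees non-negative outputs, and softmax guarantees fractions that are non-negative and sum to 100 %:

```python
# Building the constraint into the output layer instead of penalizing it:
# softplus makes any output strictly positive; softmax makes a vector of
# fractions that are non-negative and sum to 1 (i.e. 100 %).
import numpy as np

def softplus(z):
    return np.log1p(np.exp(z))        # > 0 for any real z

def softmax(z):
    e = np.exp(z - np.max(z))         # shift for numerical stability
    return e / e.sum()                # non-negative, sums to 1

raw = np.array([-2.0, 0.5, 1.5])      # unconstrained last-layer outputs
print(softplus(raw))                  # all positive
fractions = softmax(raw)
print(fractions.sum())                # 1.0
```

Because the constraint holds by construction, no penalty term or post-hoc projection is needed during training.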
  • asked a question related to Artificial Neural Networks
Question
3 answers
I want data sets of blood and bone cancer. I want to sequence them in Python using Artificial Neural Networks.
  • asked a question related to Artificial Neural Networks
Question
2 answers
What is the main disadvantage of a global optimization algorithm for the Backpropagation Process?
Under what conditions can we still use a local optimization algorithm for the Backpropagation Process?
Relevant answer
Answer
Armin Hajighasem Kashani Non-linear data can be handled and processed simply by a neural network, which is otherwise difficult with perceptrons and sigmoid neurons. In neural networks, the agonizing decision-boundary problem is reduced.
However, the downsides include the loss of neighborhood knowledge, more parameters to optimize, and the lack of translation invariance.
  • asked a question related to Artificial Neural Networks
Question
3 answers
Is there an index that includes the mixing index and pressure drop for micromixers?
For example, for heat exchangers and heat sinks, there is an index that includes heat transfer performance and hydraulic performance, which is presented below:
η = (Nu/Nu_b)/(f/f_b)^(1/3)
The purpose of these indexes is to create a general criterion to check the devices' overall performance and investigate the parameters' impact.
Is there an index that includes mixing performance and hydraulic performance for micromixers?
Relevant answer
Answer
Dear Rani P Ramachandran,
Thank you for your answer. I think the mixing energy cost (MEC) is the index I was looking for.
  • asked a question related to Artificial Neural Networks
Question
2 answers
Dear Scholars,
I would like to solve a fluid mechanics optimization problem that requires implementing an optimization algorithm together with an artificial neural network. I have some questions about convex optimization algorithms and would appreciate your advice.
My question is about the possibility of combining convex optimization with an artificial neural network to find a unique solution to a multi-objective optimization problem. The problem I am trying to code is described by the following equations. The objective function used in the optimization is defined as:
📷
Where OF is the objective function, wi are the weights assigned to each cost function, Oi is the i-th cost function defined as the relative difference between the experimental and predicted evaporation metrics for the fuel droplet (denoted by the superscripts exp and mdl, respectively), k is the number of cost functions (k = 4, equal to the number of evaporation metrics), and c = [c1, c2, c3] is the mass-fraction vector defining the blend of three components, subject to the following constraints:
📷
Due to the high CPU time required for modeling and calculating the objective function (OF), an ANN was trained on tabulated data from fuel-droplet evaporation modeling and used to evaluate the OF during the optimization iterations.
In the same manner, the wi values are subject to optimization during the minimization of OF, with the following constraints:
📷
It is worth mentioning that I have already solved this problem by employing a Genetic Algorithm together with the ANN. The iterative process converged to acceptable solutions, but the algorithm did not return a unique solution.
I would therefore like to ask about the possibility of using a convex optimization algorithm together with an ANN to solve the aforementioned problem and achieve a unique solution. If this is feasible, I would appreciate it if you could mention some relevant publications.
Relevant answer
Answer
Switching your optimization algorithm will probably not give you the unique solution you are looking for. In general, the loss function of a neural network is not convex with respect to the parameters, which means the loss surface has multiple local minima and saddle points. A convex optimization algorithm will converge to one of these points depending on its starting point. Finding the global minimum is a very hard problem: we don't know how to find it, or even how to tell whether we have found it. This is a well-known limitation of neural networks. Of course, if you want the same solution every time you run your algorithm, you can simply set the initial parameters to a fixed value so that the algorithm always starts at the same place; algorithms like gradient descent will then always stop at the same point.
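The point about fixed initialization can be illustrated on a toy one-dimensional non-convex loss (a hypothetical example, not the fluid-mechanics problem itself):

```python
# Toy non-convex loss f(w) = (w**2 - 1)**2 with two minima, at w = -1 and w = +1.
# A deterministic local method started from a fixed point always ends at the
# same minimum; different starting points can end at different minima.

def grad(w):
    return 4.0 * w * (w**2 - 1.0)   # derivative of (w^2 - 1)^2

def descend(w0, lr=0.01, steps=2000):
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

print(descend(0.5))                  # converges to +1
print(descend(-0.5))                 # converges to -1
print(descend(0.5) == descend(0.5))  # True: fixed start, identical result
```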
  • asked a question related to Artificial Neural Networks
Question
4 answers
I want to model my adsorption data with an ANN.
Relevant answer
Answer
Shafagat Mahmudova thank you so much
  • asked a question related to Artificial Neural Networks
Question
4 answers
I'm doing a project to detect signs of Alzheimer's related macular degeneration, for which I would require a dataset of healthy and AD retinal images (ideally also in different stages of the disease), any suggestions of pre-existing datasets or how I might go about cobbling one together? Size and quality of the dataset aren't super high priority as it's a small POC.
Relevant answer
Hi, did you find the dataset of retinal imaging for the detection of Alzheimer's?
Could you help me find a database for this subject too?
Thank you in advance.
  • asked a question related to Artificial Neural Networks
Question
4 answers
Hi everyone,
I have collected a set of experimental data regarding the strength of a composite material. Besides quantitative data (dimensions and mechanical properties of the materials), linguistic variables, such as the type of composite material, are also included in the data as parameters affecting the material strength. I am trying to use ANN/ANFIS to predict the strength based on the mentioned variables. How is it possible to train a neural system with linguistic inputs included?
Any comments are appreciated.
Regards,
Keyvan
Relevant answer
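One standard way to feed linguistic (categorical) variables to an ANN/ANFIS is to one-hot encode them first, so each category becomes its own numeric input. A minimal pandas sketch (the column names below are hypothetical, just for illustration):

```python
# One-hot encoding of a linguistic variable before feeding it to an ANN.
# The column names below are hypothetical, just for illustration.
import pandas as pd

data = pd.DataFrame({
    "thickness_mm": [1.2, 2.0, 1.5],
    "composite_type": ["carbon", "glass", "carbon"],  # linguistic variable
})
encoded = pd.get_dummies(data, columns=["composite_type"])
print(list(encoded.columns))
# ['thickness_mm', 'composite_type_carbon', 'composite_type_glass']
```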
  • asked a question related to Artificial Neural Networks
Question
4 answers
Hello,
I want to work on an artificial neural network model and implement it for environmental parameters, but I am stuck in the model-testing phase. I want to do this work in the R statistical software. I installed everything (Keras, TensorFlow), but I am not able to interpret the results from the analysis. If anybody knows the procedure for how to test the model and how to interpret the results, please help. Advice on any other useful software is also welcome.
Relevant answer
Answer
Using ROC and AUC
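If the model's task is classification, the AUC can be computed directly from the true labels and the predicted scores. A minimal scikit-learn sketch (R users can compute the same metric with, e.g., the pROC package):

```python
# AUC from true labels and predicted scores (binary classification).
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]        # model's predicted probabilities
print(roc_auc_score(y_true, y_score))  # 0.75
```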
  • asked a question related to Artificial Neural Networks
Question
7 answers
I have built a regression artificial neural network model. However, whenever I retrain the model I get a wide range of results in R% accuracy and in MSE: I have gotten 97% accuracy and also 10%. I used the Neural Fitting app in MATLAB R2021a and the Levenberg-Marquardt algorithm to train my model.
I know the differing results are due to different partitioning of the data sets (training set, validation set, and test set), but how can I get reliable results?
My data set size is 100; I can't make it bigger due to a lack of proper experimental data.
Relevant answer
Answer
  • Data Collection & Data Analysis
  • Preprocessing of Data
  • Normalize Data
  • Data Augmentation
  • Splitting Data: Training Set, Validation Set , Test Set
  • Choosing a model according to dataset (type of data)
  • Training Options
  • Performance Evaluation using Confusion matrix, Accuracy, F1 score, Precision, Recall
  • Re-train model to get better accuracy (you can train model iteratively)
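With only ~100 samples, one step worth adding to the list above is k-fold cross-validation, which averages the test error over several splits instead of trusting one random partition. A sketch with synthetic data and a linear model standing in for the ANN (both illustrative choices):

```python
# 5-fold cross-validation: every sample is tested exactly once, and the
# averaged error is steadier than a single random split on 100 points.
# A linear model stands in here for the ANN; the data are synthetic.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

mses = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    mses.append(mean_squared_error(y[test_idx], model.predict(X[test_idx])))
print(np.mean(mses))  # close to the 0.1**2 noise floor
```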
  • asked a question related to Artificial Neural Networks
Question
7 answers
A multilayer perceptron (MLP) is a class of feedforward artificial neural network (ANN), but in which cases is an MLP considered a deep learning method?
Relevant answer
Answer
First of all, I agree with Stam Nicolis and Shafagat Mahmudova.
This is a question of terminology. Sometimes I see people refer to deep neural networks as "multi-layered perceptrons", why is this? A perceptron, I was taught, is a single layer classifier (or regressor) with a binary threshold output using a specific way of training the weights (not backpropagation). If the output of the perceptron doesn't match the target output, we add or subtract the input vector to the weights (depending on if the perceptron gave a false positive or a false negative). It's a quite primitive machine learning algorithm. The training procedure doesn't appear to generalize to a multi-layer case (at least not without modification). A deep neural network is trained via backpropagation which uses the chain rule to propagate gradients of the cost function back through all the weights of the network.
So, the question is. Is a "multi-layer perceptron" the same thing as a "deep neural network"? If so, why is this terminology used? It seems to be unnecessarily confusing. In addition, assuming the terminology is somewhat interchangeable, I've only seen the terminology "multi-layer perceptron" when referring to a feed-forward network made up of fully connected layers (no convolutional layers, or recurrent connections). How broad is this terminology? Would one use the term "multi-layered perceptron" when referring to, for example, Inception net? How about for a recurrent network using LSTM modules used in NLP?
Good luck and Happy Model Training !
Samawel JABALLI
  • asked a question related to Artificial Neural Networks
Question
4 answers
Please, who can help me plot Artificial Neural Networks with multiple inputs and outputs using MATLAB?
I need to visualize my multi-objective optimization case; the figure is attached below.
  • asked a question related to Artificial Neural Networks
Question
11 answers
I have been doing research on different issues in the finance and accounting discipline for about 5 years. It has become difficult for me to find topics that could lead to projects, a series of research articles, or working papers over the next 5-10 years. There are few journals with research articles updated in line with current and future research demand. Therefore, I am looking for journal(s) that can guide me in designing research projects that will contribute over the next 5-10 years.
Relevant answer
Answer
You don't need to look for any journals.
All you need to do is narrow your search to topics listed in "special issues" and "calls for papers". Top publishers, e.g. Elsevier, Wiley, T&F, Emerald, etc., often advertise calls for papers and special issues of journals. The topics in a special issue or call for papers can give you some hints on current and future research trends. I think this is standard practice in academia.
I hope this advice helps.
  • asked a question related to Artificial Neural Networks
Question
4 answers
I am working on noise-level prediction; all needed data has been collected. The permissible exposure limit and recommended exposure limit noise levels have also been determined. I want to predict the noise level for the next 10-20 years with an artificial neural network model; please, I need help. Thanks.
Relevant answer
Answer
DEEP LEARNING-BASED CANCER CLASSIFICATION FOR MICROARRAY DATA: A SYSTEMATIC REVIEW
  • asked a question related to Artificial Neural Networks
Question
4 answers
In His name is the Judge
Hi,
In order to design a controller for my damper (which is a TLCGD), I want to use a fuzzy system.
So I have to optimize the rules for the fuzzy controller. For optimizing the rules of a fuzzy system, which is better: a genetic algorithm or an artificial neural network?
Wish you the best.
Take refuge in the right.
Relevant answer
Answer
I strongly recommend using a floating fuzzy control algorithm, since it allows you to change the range of your membership functions in real time, so you can adapt your controller at each time step.
Regards.
  • asked a question related to Artificial Neural Networks
Question
3 answers
Spectroscopy is said to be easier and cheaper for soil chemical property analysis. How well does it perform in mineralogical studies? Also, how well do data set calibration and validation tests yield relevant results through machine learning and artificial neural networks in this field?
I basically come from a non-programming background; I know moderate application of R-Studio in PLSR and basic training-set and validation-set preparation.
  • asked a question related to Artificial Neural Networks
Question
5 answers
Codes for artificial Neural networks
Relevant answer
Answer
Search for it on GitHub, for example: https://github.com/search?q=anfis&type=Repositories
  • asked a question related to Artificial Neural Networks
Question
3 answers
Does anyone have R code for statistical downscaling of GCMs using an artificial neural network? I need this R code for my studies.
Relevant answer
Answer
  • asked a question related to Artificial Neural Networks
Question
2 answers
Any short introductory document from the image domain, please.
Relevant answer
Answer
In general, linear features are easier to distinguish than nonlinear features.
  • asked a question related to Artificial Neural Networks
Question
6 answers
Good day,
My name is Philips Sanni. I am currently rounding up my MSc degree in Software Engineering and searching for a university where I can study for a Ph.D. in a related field, most preferably in the area of Artificial Neural Networks.
If you are a professor in need of a doctoral student, kindly send the details of your research and how I can apply to your university.
Relevant answer
Answer
Service Engineering
Digital Ecosystems
Semantic Web / Linked Data
  • asked a question related to Artificial Neural Networks
Question
5 answers
Visualization of approximation function learned through ANN for a regression problem.
The ANN has 5 hidden layers with 20 neurons in each layer.
Relevant answer
Answer
Hi Darko, I want to use those equations in my modeling work. If you know any method, please share.
Thanks,
  • asked a question related to Artificial Neural Networks
Question
6 answers
I have an ongoing project utilizing an ANN; I want to know how to measure the accuracy in terms of percentage. Thank you.
Relevant answer
Answer
To check the accuracy of an artificial neural network model in MATLAB, you can check the regression value (R), the MSE, and the error histogram.
A high R, a low MSE, and fewer errors indicate a good network.
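The same quantities are easy to compute by hand; a small NumPy sketch with made-up actual/predicted values, where R² × 100 serves as the "percent accuracy" figure:

```python
# MSE and R^2 computed directly; R^2 * 100 is the usual "percent" figure.
# The actual/predicted values below are made up for illustration.
import numpy as np

actual = np.array([3.0, 5.0, 7.0, 9.0])
predicted = np.array([2.8, 5.1, 7.2, 8.9])

mse = np.mean((actual - predicted) ** 2)
ss_res = np.sum((actual - predicted) ** 2)
ss_tot = np.sum((actual - actual.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(mse)       # 0.025
print(r2 * 100)  # 99.5
```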
  • asked a question related to Artificial Neural Networks
Question
3 answers
Dear collegues,
I am trying to train a neural network. I normalized the data with the minimum and maximum:
normalize <- function(x) {
return ((x - min(x)) / (max(x) - min(x)))
}
maxmindf <- as.data.frame(lapply(mydata, normalize))
and the results:
results <- data.frame(actual = testset$prov, prediction = nn.results$net.result).
So I can see the actual and predicted values only in normalized form.
Could you tell me please, how do I scale the real and predicted data back into the "unscaled" range?
P.s. This:
minvec <- sapply(mydata, min)
maxvec <- sapply(mydata, max)
denormalize <- function(x, minval, maxval) {
  x*(maxval-minval) + minval
}
doesn't work correctly in my case.
Thanks a lot for your answers
Relevant answer
Answer
It actually works (but you have to consider rounding errors):
normalize <- function(x, min, max) (x-min)/(max-min)
denormalize <- function(x, min, max) x*(max-min)+min
x <- rnorm(1000)
r <- range(x)
nx <- normalize(x, r[1], r[2])
dnx <- denormalize(nx, r[1], r[2])
all(x == dnx)
# FALSE ---> rounding errors
all(abs(x - dnx) < 1E-8)
# TRUE ---> identical up to tiny rounding errors
  • asked a question related to Artificial Neural Networks
Question
5 answers
Suppose a smart meter is connected to the main line of the network. Would it be fair to say that the data captured through this meter can be used for fault location in the sub-lateral branches of the line in the same network?
The figure is attached. The SM is connected to the main line, from which lateral branches go to loads, other sources, etc. Suppose that at t = 4, fault 3 occurs while the rest of the sections are healthy. What could be a possible approach to locate fault 3 if we have only one meter connected at the main bus (line)?
Relevant answer
Answer
Would it be practical to train a neural network (NN) to diagnose faults based on inputs from the sensor (S)?
{ sensor performance inputs } -> NN -> output specific fault F1 or F2 or F3
The training data set would define (discover) the unique sensor signatures observed when each specific fault was purposely introduced.
Because there could be many combinations of faults, consider F1, F2, F3 as single faults:
no fault
F1 fault
F2 fault
F3 fault
That gives 4 possibilities. But the number of fault conditions increases rapidly for multiple simultaneous faults with many devices, Fi, i = 1, ..., n. So the AI approach may not be practical.
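The mapping described above can be sketched with any classifier. Here a small scikit-learn MLP is trained on synthetic sensor features (the two features, cluster centers, and noise level are invented for illustration; real data would come from staged faults recorded by the meter):

```python
# Sketch of { sensor inputs } -> NN -> fault label on synthetic data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 400
labels = rng.integers(0, 4, size=n)             # 0 = healthy, 1..3 = F1..F3
# each fault shifts the two sensor features in a characteristic direction
centers = np.array([[0, 0], [3, 0], [0, 3], [3, 3]])
X = centers[labels] + 0.3 * rng.normal(size=(n, 2))

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, labels)
print(clf.score(X, labels))  # near 1.0 on these well-separated faults
```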
  • asked a question related to Artificial Neural Networks
Question
17 answers
According to which article, study, or reference is 70% of the dataset usually taken for training and 30% for testing in the learning process of neural networks?
In other words, who first raised this issue, and in which paper or book was it explained in detail?
I desperately need a reference for the above.
Relevant answer
Answer
I believe the goal here is to prevent over-fitting your model. As suggested by other researchers, this is not a fixed value; in fact, in my case I normally use 20% for testing and 80% for training.
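The ratio is a convention rather than a derived result; in code it is just a parameter, e.g. with scikit-learn:

```python
# The split ratio is just a parameter; 80/20 here, 70/30 is equally common.
from sklearn.model_selection import train_test_split

X = list(range(100))
X_train, X_test = train_test_split(X, test_size=0.2, random_state=42)
print(len(X_train), len(X_test))  # 80 20
```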
  • asked a question related to Artificial Neural Networks
Question
1 answer
Dear colleagues,
I would like to ask anybody who works with neural networks to check my loop for the test sample.
I have 4 sequences (with the goal of predicting prov; monthly data, 22 values in each sequence), and I would like to construct a forecast for each next month using a training sample of 5 months.
That means I need to shift by one month each time, with 5 elements:
train<-1:5, train<-2:6, train<-3:7, ..., train<-17:21. So I need to get 17 columns as the output result.
The loop is:
shift <- 4
number_forecasts <- 1
d <- nrow(maxmindf)
k <- number_forecasts
for (i in 1:(d - shift + 1))
{
The code:
require(quantmod)
require(nnet)
require(caret)
prov=c(25,22,47,70,59,49,29,40,49,2,6,50,84,33,25,67,89,3,4,7,8,2)
temp=c(22,23,23,23,25,29,20,27,22,23,23,23,25,29,20,27,20,30,35,50,52,20)
soil=c(676,589,536,499,429,368,370,387,400,423,676,589,536,499,429,368,370,387,400,423,600,605)
rain=c(7,8,2,8,6,5,4,9,7,8,2,8,6,5,4,9,5,6,9,2,3,4)
df=data.frame(prov,temp,soil,rain)
mydata<-df
attach(mydata)
mi<-mydata
scaleddata<-scale(mi$prov)
normalize <- function(x) {
return ((x - min(x)) / (max(x) - min(x)))
}
maxmindf <- as.data.frame(lapply(mydata, normalize))
go<-maxmindf
forecasts <- NULL
forecasts$prov <- 1:22
forecasts$predictions <- NA
forecasts <- data.frame(forecasts)
# Training and Test Data
trainset <- maxmindf()
testset <- maxmindf()
#Neural Network
library(neuralnet)
nn <- neuralnet(prov~temp+soil+rain, data=trainset, hidden=c(3,2), linear.output=FALSE, threshold=0.01)
nn$result.matrix
plot(nn)
#Test the resulting output
#Test the resulting output
temp_test <- subset(testset, select = c("temp","soil", "rain"))
head(temp_test)
nn.results <- compute(nn, temp_test)
results <- data.frame(actual = testset$prov, prediction = nn.results$net.result)
}
minval<-min(x)
maxval<-max(x)
minvec <- sapply(mydata,min)
maxvec <- sapply(mydata,max)
denormalize <- function(x,minval,maxval) {
x*(maxval-minval) + minval
}
as.data.frame(Map(denormalize,results,minvec,maxvec))
Could you tell me please,what can i add in trainset and testset (with using loop) and how to display all predictions using a loop so that the results are displayed with a shift by one with a test sample of 5?
I am very grateful for your answers
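Independently of the R specifics, the rolling-window logic the question describes can be sketched as follows (Python here, with a trivial mean predictor standing in for the neural network):

```python
# Rolling-origin windows: train on 5 consecutive months, forecast the next
# one, slide the window forward by one month.
import numpy as np

prov = np.array([25, 22, 47, 70, 59, 49, 29, 40, 49, 2, 6, 50,
                 84, 33, 25, 67, 89, 3, 4, 7, 8, 2], dtype=float)
window = 5
forecasts = []
for start in range(len(prov) - window):        # 17 windows, as in the question
    train = prov[start:start + window]         # e.g. months 1:5, then 2:6, ...
    forecasts.append(train.mean())             # one-step-ahead prediction
print(len(forecasts))  # 17
```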
  • asked a question related to Artificial Neural Networks
Question
13 answers
I'm researching autoencoders and their application to machine learning problems, but I have a fundamental question.
As we all know, there are various types of autoencoders, such as the stacked autoencoder, sparse autoencoder, denoising autoencoder, adversarial autoencoder, convolutional autoencoder, semi-autoencoder, dual autoencoder, contractive autoencoder, and others that are improved versions of what came before. Autoencoders are also known to be used in graph networks (GN), recommender systems (RS), natural language processing (NLP), and machine vision (CV). This is my main concern:
Because the input and structure of each of these machine learning problems are different, which version of the autoencoder is appropriate for which machine learning problem?
Relevant answer
Answer
Look at the link; maybe it is useful.
Regards,
Shafagat
  • asked a question related to Artificial Neural Networks
Question
1 answer
I used MATLAB functions to train a NARX model. When I use Levenberg-Marquardt as the training algorithm, the results are better than with Bayesian regularization, and scaled conjugate gradient is the worst. I need to know why LM performs better than BR, although BR uses LM optimization to update the network weights and biases.
Relevant answer
Answer
It is possible that the model based on the Levenberg-Marquardt training algorithm is overfitting the training data.