Artificial Neural Network - Science topic
Questions related to Artificial Neural Network
What are the characteristics of the agentic artificial intelligence that is currently being rapidly developed and implemented in Internet applications?
What are the characteristics of agent-based artificial intelligence, which underlies the rapid development of many different types of IT applications available on the Internet that function as AI agents?
Agentic artificial intelligence (AI) is characterized by its ability to make decisions and act autonomously in a specific environment, usually in a way that adapts to changing conditions. It is a system that not only performs tasks within pre-programmed rules, but is also able to respond to external stimuli, make decisions based on collected information, and learn and adapt to new challenges. Of particular importance in the context of agent-based artificial intelligence is the ability to interact with the environment, process data independently and take actions to achieve specific goals or tasks, often without the need for direct human supervision.
I have described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
Please share what you think about this issue. Do you see more threats or more opportunities for labor markets in the development of artificial intelligence technologies?
And what is your opinion about it?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
I would like to invite you to join me in scientific cooperation,
Dariusz Prokopowicz

I have been working with artificial neural networks since 1986. It seemed to me that we were striving to reach the level of insects, and I do not quite understand some modern statements on this topic. Let us recall the known facts: 1. If we consider the human brain as an analogue of an artificial neural network, then it cannot be written into the genome: at least one and a half million genomes would be needed. 2. This is such a large neural network that it cannot be trained to its limit of learning not only in one lifetime, but even in many lifetimes. In this regard, the question: is there a developer of so-called artificial intelligence who seriously, and not just for business, believes that his creation will soon surpass man, and who is ready to swear to this on the Bible, the Koran and the Torah?
I'm writing a course conclusion work for my college. The idea is to extract musical notes from polyphonic audio files using an artificial neural network. Which would be the better method to obtain audio characteristics: the fast Fourier transform or Mel-frequency cepstral coefficients (MFCCs)?
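For what it's worth, the two candidates are easy to compare empirically, and for note transcription MFCCs tend to discard exactly the pitch detail you need, so a spectrogram (or constant-Q transform) is usually the safer starting point. A minimal sketch, assuming librosa is installed and "audio.wav" is a placeholder file name:

```python
# Compare an STFT magnitude spectrogram with MFCCs as ANN input features.
import librosa
import numpy as np

y, sr = librosa.load("audio.wav", sr=22050)  # placeholder file

# Option 1: STFT magnitude spectrogram -- keeps fine frequency detail,
# which matters for resolving simultaneous notes in polyphonic audio.
stft = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))

# Option 2: MFCCs -- a compact timbre summary; most pitch information
# is discarded, so they suit instrument/genre tasks better than transcription.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)

# A common middle ground for music transcription: a log-frequency
# representation such as the constant-Q transform (one bin per semitone).
cqt = np.abs(librosa.cqt(y, sr=sr, bins_per_octave=12))

print(stft.shape, mfcc.shape, cqt.shape)  # (bins, frames) for each
```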
Does anyone know of a free Python library for machine learning that can be used on a personal computer? I am particularly interested in neural network libraries similar to FANN.
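One widely used option is scikit-learn, whose MLPClassifier covers much the same ground as FANN's feed-forward networks; PyTorch and Keras are the usual free choices when more control is needed, and all of them run fine on a personal computer. A minimal sketch on a toy dataset:

```python
# A small feed-forward network with scikit-learn, broadly comparable
# to an MLP built with FANN.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```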
You are invited to jointly develop a SWOT analysis for generative artificial intelligence technology: What are the strengths and weaknesses of the development of AI technology so far? What are the opportunities and threats to the development of artificial intelligence technology and its applications in the future?
A SWOT analysis details the strengths and weaknesses of the past and present performance of an entity, institution, process, problem or issue, as well as the opportunities and threats relating to its future performance over the coming months, quarters or, most often, the next several years. Artificial intelligence technology has been known conceptually for more than half a century, but its dynamic technological development has occurred especially in recent years. Currently, many researchers and scientists are addressing, in publications and in debates at scientific symposia, conferences and other events, the various social, ethical, business, economic and other aspects of the development of artificial intelligence technology and its applications in various sectors of the economy and in various fields of potential application in companies, enterprises, and financial and public institutions.

Many of the determinants of impact and risk associated with the development of generative artificial intelligence technology currently under consideration may be heterogeneous, ambiguous and multifaceted, depending on the context of the technology's potential applications and the operation of other impact factors. For example, the impact of the technology's development on future labor markets is not a homogeneous, unambiguous problem. On the one hand, the more critical assessments of this impact point mainly to the potentially large-scale loss of employment by people in various jobs, should it prove cheaper and more convenient for businesses to employ highly sophisticated robots equipped with generative artificial intelligence instead of humans. On the other hand, some experts analyzing the ongoing impact of AI applications on labor markets offer more optimistic visions of the future, pointing out that over the next few years artificial intelligence will not so much deprive people of work as change it: it will support employees in carrying out their work effectively and significantly increase the productivity of people using specific generative AI solutions at work. In addition, labor markets will change in other ways, i.e. through the emergence of new types of professions and occupations arising from the development of AI applications.

In this way, the development of AI applications may generate both opportunities and threats in the future, even within the same application field, the same development area of a company or enterprise, or the same economic sector. Arguably, such dual scenarios of the potential development of AI technology and its applications, scenarios made up of both positive and negative aspects, can be considered for many other factors influencing this development and for many other fields of application of this technology. For example, the application of artificial intelligence in new online media, including social media sites, is already generating both positive and negative effects. Positive aspects include the use of AI technology in online marketing carried out on social media, among others.
On the other hand, the negative aspects of Internet applications using AI solutions include the generation of fake news and disinformation by untrustworthy, unethical Internet users. Consider also the use of AI to control an autonomous vehicle or to develop the formula for a new drug against particularly life-threatening human diseases. On the one hand, this technology can be of great help to humans; on the other, what happens when mistakes are made that result in a life-threatening car accident, or when particularly dangerous side effects of the new drug emerge after a certain period of time? Will the payment of compensation by an insurance company solve the problem? To whom will responsibility be shifted for such possible errors and their particularly negative effects, which at present we cannot completely exclude? So what other examples can you give of artificial intelligence applications with ambiguous consequences? What are the opportunities and risks of past applications of generative artificial intelligence technology versus the opportunities and risks of its future potential applications?

These considerations can be extended if, in this kind of SWOT analysis, we take into account not only generative artificial intelligence, with its past and prospective development and growing number of applications, but also the so-called general artificial intelligence that may arise in the future. General artificial intelligence, if built by technology companies, will be capable of self-improvement and, with its capacity for intelligent, multi-criteria, autonomous processing of large sets of data and information, will in many respects surpass the intellectual capacity of humans.
The key issues of opportunities and threats to the development of artificial intelligence technology are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
I invite you to jointly develop a SWOT analysis for generative artificial intelligence technology: What are the strengths and weaknesses of the development of AI technology to date? What are the opportunities and threats to the development of AI technology and its applications in the future?
What are the strengths, weaknesses, opportunities and threats to the development of artificial intelligence technology and its applications?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz

The DI-60 is an automated digital cell morphology system that uses artificial neural network technology to locate, identify, and pre-classify white blood cells (WBCs) and pre-characterize red blood cells (RBCs). It comprises an automated microscope that scans the peripheral blood smear (PBS), a digital camera that captures images of all cellular and particulate matter on the slide, and a computer that classifies each image using a complex algorithm.
For the ANN, I have already looked into MATLAB, but I have no idea how to use it or where to get the software. Please also suggest the names of any other software available to statistically cross-check RSM (response surface methodology) results.
I'm trying to use reinforcement learning with live EEG measurements. However, just 2000 measurements/iterations take 16.6 minutes to record, and it seems I need at least 10 hours of live measurements before getting any kind of usable results.
Can you recommend ways to reduce the number of measurements and optimization iterations needed in reinforcement learning?
I have tried to keep the neural network as small as possible so that there are fewer parameters to learn.
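One standard way to stretch a limited measurement budget is off-policy learning with an experience replay buffer (as in DQN), so that every expensive EEG transition is reused for many gradient updates instead of being discarded after one. A minimal sketch of the buffer itself, with the EEG specifics left out:

```python
# Experience replay: store each expensive transition once, then sample
# it repeatedly for many gradient updates per live measurement.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer()
# after each live measurement: buf.push(s, a, r, s_next, done)
# then, between measurements, run several update steps:
#   for _ in range(updates_per_measurement):
#       batch = buf.sample()
#       ... gradient step on batch ...
```

Model-based RL (fitting a transition model and generating synthetic rollouts from it) and Bayesian optimization over a low-dimensional policy parameterization are the other usual routes when live samples are this expensive.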
The question of whether machines will become more intelligent than humans is a common one, with the definition of intelligence being key to the comparison. Computers have several advantages over humans, such as better memories, faster data gathering, continuous work without sleep, no mathematical errors, and better multitasking and planning capabilities. However, most AI systems are specialized for very specific applications, while humans can use imagination and intuition when approaching new tasks in new situations. Intelligence can also be defined in other ways, such as the possession of a group of traits, including the ability to reason, represent knowledge, plan, learn, and communicate. Many AI systems possess some of these traits, but no system has yet acquired them all.
Scholars have designed tests to determine whether an AI system has human-level intelligence, such as the Turing Test; some AI systems can pass such tests successfully, but only over short periods of time. The term "singularity" is sometimes used to describe a situation in which an AI system develops agency and grows beyond human ability to control it; experts continue to debate when, and whether, this is likely to occur. As AI systems grow more sophisticated, they may become better at transferring capabilities to different situations the way humans can, resulting in the creation of "artificial general intelligence" or "true artificial intelligence."
The history of artificial intelligence includes several milestones that highlight the advancement of artificial intelligence relative to human intelligence. These include the first autonomous robots, developed by W. Grey Walter (1948-49), and the Turing Test, proposed by Alan Turing (1950), which probed the thinking capabilities of machines. In 1951, Marvin Minsky and Dean Edmonds built the first artificial neural network machine, an early landmark of artificial intelligence. Arthur Samuel coined the term "machine learning" in 1959 with his checkers-playing program, and the first natural language processing program, ELIZA, was developed in 1966. Artificial intelligence has since been applied in robotics, gaming, and classification.
Shakey, whose development began in 1966, became the first intelligent robot to perceive its environment, plan routes, recover from errors, and communicate in simple English. A further advance was achieved in 1969, when Arthur Bryson and Yu-Chi Ho described the backpropagation method as a multistage optimization procedure, which later enabled AI systems to improve using their past errors. The opening of the Internet to the public in 1991 enabled online data sharing, which had a significant impact on the advancement of AI. Large companies and institutions such as IBM and Caltech subsequently developed image databases containing millions of labeled images for computer vision research, and the paper introducing the AlexNet architecture (2012) is considered one of the most influential in computer vision.
In 2016, the AI system AlphaGo, created by Google subsidiary DeepMind, defeated Go champion Lee Se-dol four matches to one. In 2018, Joy Buolamwini and Timnit Gebru published the influential report "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification," demonstrating that machine-learning algorithms were prone to discrimination based on classifications such as gender and race. In the same year, Waymo's self-driving taxi service was offered in Phoenix, Arizona. In 2020, the artificial intelligence research laboratory OpenAI announced the development of Generative Pre-trained Transformer 3 (GPT-3), a language model capable of producing text with human-like fluency.
How do we evaluate the importance of individual features for a specific property using ML algorithms (say, GBR), and how do we construct an optimal feature set for our problem?
image taken from: 10.1038/s41467-018-05761-w
Which machine learning algorithms are best suited in materials science for problems that aim to determine the properties and functions of existing materials? E.g., the typical problem of determining the band gap of solar-cell materials using ML.
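On the GBR question, one common recipe is to fit the model, rank features by permutation importance on held-out data, and then keep the smallest subset that preserves the validation score. A minimal sketch with synthetic data standing in for, say, a band-gap dataset:

```python
# Feature ranking with gradient boosting + permutation importance.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=10, n_informative=4,
                       noise=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gbr = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# Permutation importance = drop in score when one feature is shuffled.
imp = permutation_importance(gbr, X_te, y_te, n_repeats=20, random_state=0)
for i in np.argsort(imp.importances_mean)[::-1]:
    print(f"feature {i}: {imp.importances_mean[i]:.4f}")
```

From the ranking, an optimal feature set can be built greedily: add features in order of importance and stop when the cross-validated error stops improving (sklearn.feature_selection.RFECV automates the same idea in reverse).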
The average RMSE in my study is 0.523 for the training data and 0.514 for the testing data. I have used 90% of the data for training and 10% for testing. However, I am getting a high average RMSE; is this acceptable? Is there any study available to cite on an optimum limit for average RMSE?
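There is no universal optimum limit to cite, because RMSE carries the units and scale of the target variable; what is usually reported instead is RMSE normalized by the spread of the observations, which is comparable across studies. A small sketch, where y_true and y_pred stand for your test-set arrays:

```python
# RMSE is scale-dependent; a normalized version is comparable across studies.
import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def nrmse(y_true, y_pred):
    # Normalized by the observed range; dividing by the mean or the standard
    # deviation of y_true are common alternatives -- state which one you use.
    y_true = np.asarray(y_true)
    return rmse(y_true, y_pred) / (y_true.max() - y_true.min())
```

Whether 0.523 is high therefore depends entirely on the range of your target: against a target spanning 0 to 1 it is poor, against one spanning 0 to 100 it is excellent.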
If man succeeds in building a general artificial intelligence, will this mean that man has become better acquainted with the essence of his own intelligence and consciousness?
If man succeeds in building a general artificial intelligence, i.e., AI technology capable of self-improvement, independent development and perhaps also achieving a state of artificial consciousness, will this mean that man has fully learned the essence of his own intelligence and consciousness?
Assuming that man succeeds in building a general artificial intelligence, i.e. AI technology capable of self-improvement, independent development and perhaps also attaining a state of artificial consciousness, then perhaps this will mean that man has fully learned the essence of his own intelligence and consciousness. If this happens, what will be the result? Will man first learn the essence of his own intelligence and consciousness and then build a general artificial intelligence capable of self-improvement, independent development and perhaps artificial consciousness? Or vice versa: will a general artificial intelligence and artificial consciousness capable of self-improvement and development be created first, and then, thanks to this technological progress in the field of artificial intelligence, will man fully learn the essence of his own intelligence and consciousness? In my opinion, it is most likely that both processes will develop simultaneously, on a reciprocal feedback basis.
I have described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
If man succeeds in building a general artificial intelligence, i.e., AI technology capable of self-improvement, independent development and perhaps also achieving a state of artificial consciousness, will this mean that man has fully learned the essence of his own intelligence and consciousness?
If man succeeds in building a general artificial intelligence, will this mean that man has better learned the essence of his own consciousness?
And what is your opinion about it?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz

Hello,
I'm writing a paper and used various optimizers to train the model. I changed them during training to get out of local minima, and I know that people do that, but I don't know what to call that technique in the paper. Does it even have a name?
It is like simulated annealing in optimization, but instead of varying the temperature (step size), we switch optimizers between Adam, SGD and RMSprop. I can say for sure that it gave fantastic results.
P.S. Thank you for the replies, but learning rate scheduling is for changing the learning rate, and optimizer scheduling is for other optimizer parameters; in general, that is hyperparameter tuning. What I'm asking about is switching between optimizers, not modifying their parameters.
Thanks for support,
Andrius Ambrutis
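The closest named precedent may be SWATS, from Keskar and Socher's 2017 paper on improving generalization by switching from Adam to SGD during training; more broadly this is usually described as a hybrid or switching optimizer strategy. Mechanically the switch is simple in most frameworks: rebuild the optimizer over the same parameters at scheduled epochs. A minimal PyTorch sketch (model, data and schedule are placeholders):

```python
# Switching optimizers mid-training by rebuilding the optimizer
# over the same parameters at scheduled epochs.
import torch
import torch.nn as nn

model = nn.Linear(10, 1)                    # placeholder model
loss_fn = nn.MSELoss()
x, y = torch.randn(64, 10), torch.randn(64, 1)

schedule = {                                # hypothetical switch points
    0:  lambda p: torch.optim.Adam(p, lr=1e-3),
    30: lambda p: torch.optim.SGD(p, lr=1e-2, momentum=0.9),
    60: lambda p: torch.optim.RMSprop(p, lr=1e-3),
}

optimizer = None
for epoch in range(90):
    if epoch in schedule:
        # Internal state (e.g. Adam's moment estimates) is discarded at
        # each switch; that perturbation is part of the escape effect.
        optimizer = schedule[epoch](model.parameters())
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```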
I am trying to train a CNN model in MATLAB to predict the mean value of a random vector (the MATLAB code named Test_2 is attached). To clarify: I generate a random vector with 10 components (using the rand function) 500 times. For each vector, the plot of the vector against 1:10 is saved as a separate figure, and the mean value of each of the 500 randomly generated vectors is calculated and saved. The saved images are then used as the input (X) for training (70%), validating (15%) and testing (15%) a CNN model which is supposed to predict the mean value of the corresponding random vector (Y). However, the RMSE of the model is far too high; in other words, the model does not train, despite changes to its options and parameters. I would be grateful if anyone could kindly advise.
Artificial intelligence experts,
What are the possibilities for integrating an intelligent chatbot into web-based video conferencing platforms used to date for remote conferences, symposia, training, webinars and remote education conducted over the Internet?
During the SARS-CoV-2 (Covid-19) coronavirus pandemic, the use of web-based videoconferencing platforms increased significantly, owing to the quarantine periods implemented in many countries, restrictions on the use of physical retail outlets, cultural services and various public places, and government-imposed lockdowns of business entities operating in selected, mainly service, sectors of the economy. In addition, the periodic transfer of education to a remote form conducted via online video conferencing platforms increased the scale of ICT use in educational processes. On the other hand, since the end of 2022, with the release on the Internet of one of the first intelligent chatbots, ChatGPT, by the company OpenAI, the development of artificial intelligence applications has accelerated in various fields of online information services and in the implementation of generative artificial intelligence technology in various aspects of the business activities of companies and enterprises. The intelligent language models made available on the Internet by technology companies have been taught to converse with Internet users through the use of artificial neural networks modeled on the structure of the human neuron and deep learning drawing on knowledge bases and databases that have accumulated large amounts of data and information downloaded from many websites. Nowadays, there are opportunities to combine the above-mentioned technologies so as to obtain new applications and/or functionalities of web-based video conferencing platforms enriched with tools based on generative artificial intelligence.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What are the possibilities of connecting an intelligent chatbot to web-based video conferencing platforms used so far for remote conferences, symposia, training, webinars and remote education conducted over the Internet?
What are the possibilities of integrating a smart chatbot into web-based video conferencing platforms?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz

Is AI emotional intelligence already being developed?
Is artificial emotional intelligence already being created that can simulate human emotions, and/or artificial consciousness generated by the ever-improving generative artificial intelligence technology that is taught human skills through a deep learning process carried out using artificial neural networks?
At present, all the dominant and most recognizable technology companies and developers of online information services either already offer their intelligent chatbots online or are working on such solutions and will soon make them available. Technologies for intelligent chatbots, based on advanced generative language models and taught specific "human skills" through deep learning and artificial neural networks, are constantly being improved. Leading technology companies are also competing to build advanced general artificial intelligence systems, which will soon far surpass the capabilities of human intelligence and the processing that takes place in the human central nervous system, in human neurons, in the human brain. Some scientific institutes conducting research in robotics, including androids equipped with generative artificial intelligence, are striving to build autonomous, intelligent androids that will be able to cooperate with humans in various situations, be employed in companies and enterprises instead of humans, hold discussions similar to those that humans have among themselves, provide assistance to humans, perform tasks ordered by humans, and do work that is difficult for humans. In the laboratories of such scientific institutes, research is also being carried out to create artificial emotional intelligence and artificial consciousness. In order for the artificial emotional intelligence and artificial consciousness built in the future not to turn out to be merely a dummy and/or a simulation of human emotional intelligence and human consciousness, it is first necessary to fully understand what human emotional intelligence and human consciousness are and how they work.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Is artificial emotional intelligence already being created that can simulate human emotions, and/or artificial consciousness generated by the ever-improving generative artificial intelligence technology that is taught human skills through a deep learning process carried out using artificial neural networks?
Is an artificial emotional intelligence that can simulate human emotions and/or artificial consciousness already being created?
Is AI emotional intelligence already being created?
And what is your opinion about it?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz

What is the future of generative artificial intelligence technology applications in finance and banking?
The banking sector is among the sectors leading in the implementation of new ICT, Internet and Industry 4.0/5.0 information technologies, including, but not limited to, applications of generative artificial intelligence in finance and banking. Commercial online and mobile banking have been among the particularly fast-growing areas of banking in recent years. In addition, during the SARS-CoV-2 (Covid-19) coronavirus pandemic, in conjunction with government-imposed lockdowns on selected sectors of the economy, mainly service companies, and national quarantines, the development of online and mobile banking accelerated. Solutions such as contactless payments made with a smartphone developed rapidly. On the other hand, due to the accelerated development of online and mobile banking, the growth of payments made online, and the online settlements related to the development of e-commerce, the scale of cybercriminal activity has also increased since the pandemic.

When OpenAI put its first intelligent chatbot, ChatGPT, online for Internet users in November 2022 and other Internet technology companies accelerated the development of analogous solutions, commercial banks saw great potential for themselves. More chatbots modeled on ChatGPT, and new applications of tools based on generative artificial intelligence made available on the Internet, quickly began to emerge. Commercial banks thus began to adapt the emerging AI solutions to their needs on their own. The IT professionals employed by the banks proceeded to teach intelligent chatbots and to implement tools based on generative AI in selected processes and activities performed permanently and repeatedly in the bank. Accordingly, AI technologies are increasingly being implemented by banks in cyber-security systems; in processes for analyzing the creditworthiness of potential borrowers; in improving marketing communications with bank customers; in automating the remote telephone and Internet communications of banks' call center departments; in developing market analyses carried out on Big Data Analytics platforms using large sets of data and information extracted from various bank information systems, from databases available on the Internet, from online financial portals, and from thousands of processed posts and comments of Internet users on online social media pages; and in the increasingly automated, real-time development of industry analyses and the extrapolation of market trends into the future based on current large sets of information and data, etc. The scale of new applications of generative artificial intelligence technology in the various banking processes carried out in commercial banks is growing rapidly.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What is the future of generative artificial intelligence technology applications in finance and banking?
What is the future of AI applications in finance and banking?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz

What are the opportunities for creating and improving sustainable business models, sustainable economic development strategies developed and implemented in business entities through the use of artificial intelligence?
In the context of the integration of business entities into the currently developing green transformation of the economy, of the addition of sustainable development goals to companies' missions, of the implementation of green technologies and eco-innovations that reduce emissions of greenhouse gases, exhaust fumes and other pollutants negatively affecting the state of the environment, and of green investments that reduce the energy intensity of buildings and economic processes, the scope of opportunities for improving sustainable business models is also growing. These sustainable business models are an important part of the green business transformation conducted in a company or enterprise. On the other hand, the scope of opportunities for improving sustainable business models in business entities can be significantly increased by implementing new ICT and Industry 4.0/5.0 information technologies in business, including, but not limited to, generative artificial intelligence technologies.

Recently, the level and variety of applications of generative artificial intelligence in the various business fields of companies and enterprises has been growing rapidly. Intelligent applications equipped with generative artificial intelligence are appearing openly on the Internet, applications that can carry out complex and resource-intensive data and information processing, i.e. activities that until recently were performed only by humans. In addition, intelligent chatbots and other intelligent applications that automate the execution of complex, multi-faceted, multi-criteria tasks perform them in far less time and with much higher efficiency than if the same tasks were performed by a human. The ability of tools equipped with generative artificial intelligence to execute commands intelligently is created by teaching them in a process of deep learning and by applying advanced information systems based on artificial neural networks.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What are the possibilities for creating and improving sustainable business models, sustainable economic development strategies developed and implemented in business entities through the application of artificial intelligence?
What are the possibilities for improving sustainable business models through the application of artificial intelligence?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz

Could a thinking generative artificial intelligence independently make decisions contrary to human expectations which could lead to the annihilation of humanity?
Recently, generative artificial intelligence technology, which is taught certain activities and skills previously performed only by humans, has been developing rapidly. The learning process uses artificial neural network technologies built in the likeness of human neurons, as well as deep learning. In this way, intelligent chatbots are created that can converse with people in such a way that it is increasingly difficult to distinguish whether we are talking to a human or to an intelligent chatbot. Chatbots are taught to converse using large sets of digital data and information, and the process of conversation, including answering questions and executing specific commands, is perfected through guided conversations. Besides, tools available on the Internet based on generative artificial intelligence are also able to create graphics, photos and videos according to given commands. Intelligent systems are also being created that specialize in solving specific tasks and are becoming more and more helpful to humans in solving increasingly complex problems. The number of new applications for specially created tools equipped with generative artificial intelligence is growing rapidly.

On the other hand, not only positive aspects are associated with the development of artificial intelligence. There are more and more examples of negative applications, through which, for example, fake news is created in social media and disinformation is generated on the Internet. Possibilities are emerging for the use of artificial intelligence in cybercrime and in deliberately shaping the general social awareness of Internet users on specific topics. In addition, for several decades there have been science fiction films presenting futuristic visions in which intelligent robots, autonomous cyborgs equipped with artificial intelligence (e.g. The Terminator), artificial intelligence systems managing the flight of an interplanetary manned mission (e.g. 2001: A Space Odyssey), or artificial intelligence systems and intelligent robots that turned humanity into a source of electricity for their own needs (e.g. The Matrix trilogy), instead of helping people, rebelled against humanity. This topic has become topical again. There are attempts to create autonomous humanoid cyborgs equipped with artificial intelligence systems, robots able to converse with humans and carry out certain commands. Research work is being undertaken to create something that will imitate human consciousness, referred to as artificial consciousness, as part of the improvement of generative artificial intelligence systems. There are many indications that humans are striving to create a thinking generative artificial intelligence. It cannot be ruled out that such a machine could independently make decisions contrary to human expectations, which could lead to the annihilation of mankind. In view of the above, under conditions of dynamic development of generative artificial intelligence technology, considerations of the potential dangers to humanity that may arise in the future from this development have once again become relevant.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Could a thinking generative artificial intelligence independently make decisions contrary to human expectations which could lead to the annihilation of humanity?
Could a thinking generative artificial intelligence independently make decisions contrary to human expectations?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
I have described the key issues of opportunities and threats to the development of artificial intelligence technologies in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz

Should the intelligent chatbots created by technology companies available on the Internet be connected to the resources of the Internet to its full extent?
As part of the development of the concept of universal open access to knowledge resources, should the intelligent chatbots created by technology companies available on the Internet be connected to the resources of the Internet to their full extent?
There are different types of websites and sources of data and information on the Internet. The first Internet-accessible intelligent chatbot, ChatGPT, made available by OpenAI in November 2022, performs commands, solves tasks and writes texts based on knowledge resources, data and information downloaded from the Internet that were not fully up to date, having last been downloaded from selected websites and portals in January 2022. In addition, the data and information were downloaded from many selected websites of libraries, articles, books, online portals indexing scientific publications, etc. Thus, these were data and information selected in a certain way. In 2023, more leading Internet technology companies developed and made their intelligent chatbots available on the Internet. Some of them are already based on data and information that is much more up to date than the first versions of ChatGPT made available in open access. In November 2023, the social media site X (formerly Twitter) released its intelligent chatbot in the US, which reportedly works on the basis of up-to-date information entered into the site through posts, messages and tweets by Internet users. Also, in October 2023, OpenAI announced that it would create a new version of its ChatGPT, which would also draw data and knowledge from updated knowledge resources downloaded from multiple websites. As a result, rival leading Internet technology companies are constantly refining the designs of the intelligent chatbots they are building, which will use increasingly up-to-date data, information and knowledge resources drawn from selected websites, web pages and portals.

The rapid technological advances currently taking place in artificial intelligence may in the future lead to the integration of the generative artificial intelligence and general artificial intelligence developed by technology companies. Competing technology companies may strive to build advanced artificial intelligence systems that achieve a high level of autonomy and independence from humans, which may lead to the development of artificial intelligence technology slipping out of human control. Such a situation may arise with the emergence of a highly technologically advanced general artificial intelligence that achieves the capacity for self-improvement and, in addition, carries out that self-improvement independently of humans, i.e. self-improvement with a simultaneous escape from human control. However, before this happens, technologically advanced artificial intelligence may first achieve the ability to select the data and information it uses in carrying out specific mandated tasks, executing them in real time using up-to-date data and online knowledge resources.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
As part of the development of the concept of universal open access to knowledge resources, should the intelligent chatbots created by technology companies available on the Internet be connected to Internet resources to their full extent?
Should the intelligent chatbots created by technology companies available on the Internet be connected to the resources of the Internet to the full extent?
And what is your opinion about it?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz

Can the supervisory institutions of the banking system allow the generative artificial intelligence used in the lending business to make a decision on whether or not to extend credit?
Can the banking system supervisory institutions allow changes in banking procedures in which generative artificial intelligence in the credit departments of commercial banks will not only carry out the entire process of analyzing the creditworthiness of a potential borrower but also make the decision on whether or not to extend credit?
Generative artificial intelligence finds application in various spheres of commercial banking, including banking offered to customers remotely through online and mobile banking. In addition to improving remote channels of marketing communication and customers' remote access to their bank accounts, tools based on generative AI are being developed to increase the efficiency, automation and intelligent processing of large sets of data and information on various processes carried out inside the bank. Increasingly, generative AI technologies, taught in deep learning processes and applying artificial neural network technologies, perform complex, multi-faceted, multi-criteria data processing on Big Data Analytics platforms, covering data and information from the bank's environment, online databases, online information portals and the internal information systems operating within the bank. Increasingly, generative AI technologies are also used to automate the analytical processes of the lending business, including, first and foremost, creditworthiness analysis: processes carried out on computerized Big Data Analytics and/or Business Intelligence platforms, in which increasingly large sets of data and information on potential borrowers and their market, competitive, industry, business and macroeconomic environment are processed intelligently against multiple criteria.

However, the banking supervisory institutions still do not allow changes in banking procedures whereby generative artificial intelligence in the credit departments of commercial banks would not only carry out the entire process of analyzing the creditworthiness of a potential borrower but would also make the decision on whether to grant a loan. Banking supervisory institutions still do not permit this kind of solution or, more precisely, it is not defined in the legal norms governing commercial banking. This raises the question of whether ongoing technological advances and the growing scale of applications of generative artificial intelligence in banking will not force changes in this area of banking as well. Perhaps the growing implementation of generative AI in various spheres of banking will contribute to the continued automation of lending activities, which may result in the future in generative artificial intelligence making the decision on whether or not to extend credit.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Can the supervisory institutions of the banking system authorize changes in banking procedures in which generative artificial intelligence in the credit departments of commercial banks will not only carry out the entire process of analyzing the creditworthiness of a potential borrower but will also make a decision on whether or not to grant a loan?
Can the supervisory institutions of the banking system allow the generative artificial intelligence used in credit activities to make the decision on whether or not to extend credit?
Will the generative artificial intelligence applied to a commercial bank soon make the decision on whether to grant credit?
And what is your opinion about it?
What do you think about this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz

How should artificial intelligence technologies be implemented in education so as not to deprive students of development and critical thinking, so that critical thinking continues to be developed in students in the new realities of the technological revolution, and so that education develops with the support of modern technology?
The development of artificial intelligence, like any new technology, is associated with various applications of this technology in companies and enterprises operating in various sectors of the economy, and in financial and public institutions. These applications generate an increase in the efficiency of various processes, including an increase in human productivity. On the other hand, artificial intelligence technologies also find negative applications that generate certain risks, such as the rise of disinformation in online social media. The increasing number of AI-based applications available on the Internet are also being used as technical teaching aids in the education process in schools and universities. At the same time, these applications are used by pupils and students as a means of facilitating homework, credit papers, project work, various studies, and so on.

Thus, on the one hand, the positive aspects of the applications of artificial intelligence in education are recognized. On the other hand, serious risks are also recognized for students, who, by increasingly using various applications based on artificial intelligence, including generative artificial intelligence, to facilitate the completion of various works, may reduce the extent to which they exercise critical thinking. The potential danger of depriving students of development and critical thinking is under consideration.

The development of artificial intelligence technology is currently progressing rapidly. Various applications based on constantly improved generative artificial intelligence are being developed, machine learning solutions are being created, and artificial intelligence is being taught to carry out various activities previously performed by humans. In deep learning processes, generative artificial intelligence equipped with artificial neural networks is taught to carry out complex, multifaceted processes and activities on the basis of large data sets collected in database systems and processed using Big Data Analytics technology. Since current information systems, equipped with computers of high computing power and with artificial intelligence technologies, process large data sets many times faster and more efficiently than the human mind, some research centers are already working on creating a highly advanced generative artificial intelligence that would realize a kind of artificial thought process, much faster and more efficiently than the human brain. However, even if artificial consciousness technology that imitates the functioning of human consciousness could someday be created, humans should not be deprived of critical thinking. Above all, students in schools should not be deprived of critical thinking in view of the growing scale of AI-based applications in education. The aim should be for the AI-based applications used in the education process to support education without depriving students of critical thinking. However, the question arises: how should this be done?
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How should artificial intelligence technologies be implemented in education so as not to deprive students of development and critical thinking, so that critical thinking continues to be developed in students in the new realities of the technological revolution, and so that education develops with the support of modern technology?
How should artificial intelligence technologies be implemented in education to continue to develop critical thinking in students?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
I have described the key issues of opportunities and threats to the development of artificial intelligence technologies in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz

I want someone to help me develop an artificial neural network (ANN). If you have any idea how to develop one, please send me an email at woadu@csir.brri.org.
Do all deep learning methods solve the problem of the similarity of things?
Our study goal is to develop and validate a questionnaire for patient satisfaction, then use this questionnaire to evaluate patient satisfaction, and then build a model to predict patient satisfaction using an artificial neural network. I need help calculating the sample size for this study.
I'm expecting to use stock prices from the pre-Covid period up to now to build a model for stock price prediction. I have doubts regarding the periods I should include in my training and test sets. Should I use the pre-Covid period as my training set and the current Covid period as my test set? Or should I include the pre-Covid period and part of the current period in my training set, and use the rest of the current period as my test set?
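Whichever boundary is chosen, the split must respect temporal order, and scikit-learn's TimeSeriesSplit gives an expanding-window evaluation in which training data always precede test data. A minimal sketch with a placeholder price series:

```python
# Chronological cross-validation for price data: train always precedes test.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

prices = np.arange(1000, dtype=float)   # placeholder daily closing prices
X = prices[:-1].reshape(-1, 1)          # today's price ...
y = prices[1:]                          # ... predicts tomorrow's

for fold, (tr, te) in enumerate(TimeSeriesSplit(n_splits=5).split(X)):
    print(f"fold {fold}: train [0..{tr[-1]}], test [{te[0]}..{te[-1]}]")
```

Note that training only on pre-Covid data and testing only on the Covid period measures robustness to a regime shift, which is a harder and arguably less representative test than also including part of the recent period in training.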
Hello everyone,
How to create a neural network with numerical values as input and an image as output?
Can anyone give a hint/code for this scenario?
Thank you in advance,
Aleksandar Milicevic
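On the question above: one common pattern for numeric-in, image-out is a small decoder network, where dense layers expand the input vector into a seed feature map and transposed convolutions upsample it to the target image size (essentially the generator half of a GAN, or the decoder half of an autoencoder). A minimal PyTorch sketch with placeholder sizes:

```python
# Numeric vector in -> image out: a small decoder network.
import torch
import torch.nn as nn

class VectorToImage(nn.Module):
    def __init__(self, n_inputs=8):
        super().__init__()
        self.fc = nn.Linear(n_inputs, 128 * 8 * 8)  # expand to an 8x8 seed map
        self.up = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 8x8 -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),    # 16x16 -> 32x32
            nn.Sigmoid(),                                         # pixels in [0, 1]
        )

    def forward(self, v):
        x = self.fc(v).view(-1, 128, 8, 8)
        return self.up(x)

model = VectorToImage()
print(model(torch.randn(4, 8)).shape)   # torch.Size([4, 1, 32, 32])
```

Trained with a pixel-wise loss (e.g. nn.MSELoss) against target images, this already works for smooth targets; for sharp, realistic outputs a conditional GAN or diffusion model is the usual next step.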
I actually have 3 independent variables and 1 dependent variable, and I want to predict the unknown data that I need. I have done it with regression, but I am facing errors with the ANN.
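Without the actual error messages it is hard to diagnose, but with 3 inputs and 1 output a minimal working baseline is often the quickest way to localize the problem; unscaled inputs are the most common reason an ANN fails where linear regression worked. A sketch with scikit-learn and synthetic placeholder data:

```python
# Baseline ANN regression: 3 inputs, 1 output, with input scaling.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))           # placeholder predictors
y = 2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.1, size=200)

model = make_pipeline(
    StandardScaler(),                   # scale inputs before the network
    MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0),
)
model.fit(X, y)
print("R^2:", model.score(X, y))
# model.predict(new_X) then gives predictions for unseen rows.
```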
Dear researchers,
I am writing to request your assistance in obtaining literature, research papers, or any valuable insights regarding sensitivity analysis in the artificial neural network modelling of geopolymer concrete. Furthermore, I would appreciate it if you could provide practical recommendations or best practices for conducting sensitivity analysis in this domain. Your contribution will greatly benefit my study, and I appreciate your support.
Thank you for your time and consideration.
Hi All,
I'm working on an artificial neural network model. I got the attached results, in which the regression coefficient is 0.99072, which I think is good, but I am not sure why there is an accumulation of data around zero and one, as shown in the attached regression plot.
Any idea or explanation will be highly appreciated.

These criteria mainly rely on computing the likelihood of the model. As far as I know, the likelihood is not computed for ML models.
I want to know how to compute the likelihood, and hence these criteria, for an ANN.
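If the ANN is a regression model and one is willing to assume Gaussian errors, the log-likelihood follows directly from the residual sum of squares, and AIC/BIC follow from it; the awkward part for an ANN is choosing k, the number of parameters, for which the raw count of trainable weights is the usual, admittedly crude, choice. A sketch under exactly those assumptions:

```python
# AIC/BIC for a regression ANN under a Gaussian error assumption.
import numpy as np

def gaussian_log_likelihood(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    sigma2 = np.sum((y_true - y_pred) ** 2) / n   # ML estimate of variance
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)

def aic_bic(y_true, y_pred, k):
    # k = number of trainable weights and biases in the network.
    ll = gaussian_log_likelihood(y_true, y_pred)
    n = len(y_true)
    return 2 * k - 2 * ll, k * np.log(n) - 2 * ll
```

For a classification ANN trained with cross-entropy, the detour is unnecessary: the summed cross-entropy over the data is already the negative log-likelihood.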
Dear Professors/Researchers,
I would like to learn the ANN process (for IC engine applications). I have seen many videos related to ANNs on YouTube, and I have also read many articles related to ANNs. The articles provide only results, but I can't understand how the input/output parameters are assigned/simulated in the ANN (for IC engine applications). If possible, kindly provide and guide me with any reference data file or videos. Thanks in advance.
Dear Colleagues/Researchers,
Can you recommend any papers/research on using PINNs (Physics-Informed Neural Networks) to solve direct (and potentially inverse) eigenvalue/spectral problems?
One related work is this: https://www.sciencedirect.com/science/article/abs/pii/S1007570421003531
Any further suggestions or projects (potentially including code)?
Thanks.
How can a genetic algorithm be used to optimize an artificial neural network model in R?
Hello,
I need advice on a channel equalizer based on a neural network. I have data representing the received signals under different channel conditions, the target output, and the channel response without the signal, and I want the loss function to minimize the error between the target and the received signal under each condition. The problem is that I don't know how to train my network or what the input should be. I'm using an MLP, which is similar to a feed-forward network. I'll appreciate it if anyone can help.
I want to know about ChatGPT. What types of data can we collect using ChatGPT, and how do we use it?
Thank you.
I am searching for algorithms for feature extraction from images, which I then want to classify using machine learning. I have heard only about SIFT. I have images of buildings and flowers to classify. Other than SIFT, what are some good algorithms?
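Besides SIFT, ORB is a fast, patent-free local descriptor in OpenCV; classical pipelines then aggregate such descriptors (e.g. a bag of visual words) before the classifier, and HOG, color histograms, or features from a pretrained CNN are the other usual candidates for buildings and flowers. A minimal ORB sketch, with the image path as a placeholder:

```python
# ORB keypoints and descriptors with OpenCV as a SIFT alternative.
import cv2

img = cv2.imread("building.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path
orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(img, None)
print(len(keypoints), descriptors.shape)  # up to 500 keypoints, 32 bytes each
```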
I'm quite new to GMDH, and based on my first reading on this technique, I feel I want to know more. Here are some of the claimed benefits of the GMDH approach:
1. The optimal complexity of the model structure is found, adequate to the level of noise in the data sample. For real problems with noisy or short data, simplified forecasting models are more accurate.
2. The number of layers and neurons in hidden layers, the model structure, and other optimal NN parameters are determined automatically.
3. It guarantees that the most accurate or unbiased models will be found; the method does not miss the best solution while sorting all variants (in a given class of functions).
4. Any non-linear functions or features that may influence the output variable can be used as input variables.
5. It automatically finds interpretable relationships in the data and selects effective input variables.
6. GMDH sorting algorithms are rather simple to program.
7. TMNN neural nets are used to increase the accuracy of other modelling algorithms.
8. The method uses information directly from the data sample and minimizes the influence of the author's a priori assumptions about the results of modelling.
9. The approach makes it possible to find an unbiased physical model of an object (a law or clusterization), one and the same for future samples.
It seems that items 1, 2, 6 and 7 are really interesting and could be extended to ANNs.
Any suggestions or experience from others?
I want to solve some nonlinear optimization problems, such as minimizing/maximizing f(x, y) = x^2 + sin(x)*y + 1/(x*y) over the solution space 3 <= x <= 7, 4 <= y <= 11, using an artificial neural network. Is it possible to solve this?
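As a sanity check for any ANN-based approach, this particular problem is small enough to solve directly with a standard bounded optimizer; a sketch using scipy (explicitly not an ANN method, just a baseline to compare against):

```python
# Baseline: minimize f(x, y) = x^2 + sin(x)*y + 1/(x*y)
# over 3 <= x <= 7, 4 <= y <= 11. Negate f to maximize instead.
import numpy as np
from scipy.optimize import minimize

def f(v):
    x, y = v
    return x**2 + np.sin(x) * y + 1.0 / (x * y)

res = minimize(f, x0=[5.0, 7.0], bounds=[(3, 7), (4, 11)], method="L-BFGS-B")
print(res.x, res.fun)
```

If a neural approach is the actual goal, the usual routes are training a network as a differentiable surrogate of f and optimizing over the surrogate, or a Hopfield-style energy formulation.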
Hello, I would like someone to tell me how to test a trained artificial neural network in MATLAB for linear predictions.
I am currently busy with training and testing data for my neural network model (predicting solar radiation), but during training my correlation coefficient (R) is 0.6 on average. I have tried multiple things, but R won't go higher. I am using the Neural Network Data Manager in MATLAB, with 10 neurons, 1 layer, and the tansig function in both the hidden and output layers. I have 6 inputs and 1 output. The inputs are, respectively: average temperature, average pressure, average relative humidity, latitude, longitude and altitude (all normalized between 0 and 1).
I want to improve my knowledge of Hopfield networks.
I have 7 variables as inputs (4 of them are dummies) and 1 variable as output. I would like to forecast the output variable with a neural network. Is there any way to choose the most relevant variables for the input of the network? Or can I select all of them as inputs and build/train the network?
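One pragmatic option is to rank the candidate inputs with an auxiliary model before training the network. A sketch using random forest permutation importance as the filter; dat (the 7 predictors plus a response column y) is a hypothetical name:

library(randomForest)
rf <- randomForest(y ~ ., data = dat, importance = TRUE)
importance(rf)   # %IncMSE per predictor: drop the consistently unimportant ones
varImpPlot(rf)
# then build/train the ANN on the retained subset only

Alternatively, train on all 7 and compare against the reduced model on held-out data; with only 7 inputs, both are cheap to try.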
I want to design an ANN controller for load-frequency control of interconnected power systems. Who can help me?
The MATLAB code for the ANN and the Simulink model are ready, but my problem is that I don't know exactly how to put them together.
My problems:
1. What would be the dataset for the input(s) and output? What would be my input and output source?
2. How can I connect the Simulink model with the ANN code? I mean, how can I run this Simulink model with the output of the ANN MATLAB code?
Some documents, such as a MATLAB file, would help me a lot.
Thanks
I want to know about the simplest Artificial Neural Network that can be used for the classification of network traffic into normal and attack classes in Java using the KDD Cup 99 dataset. Can the classifier read the KDD records as-is, or do they need to be normalized?
The ISCX dataset is a benchmark intrusion detection dataset which contains 7 days of synthetically recorded packet details replicating real-time network traffic, with the attacks labelled. I would like to use a neural classifier to import this data and classify it for DDoS. The dataset files are very big; how can I resolve this?
How can I get the values of the parameters of pimf from a dataset?
For example pimf(x, [a b c d]), or the alternative form pimf(x, C, lambda), where C is the centre and lambda > 0 is the scaling factor. How can I calculate the value of lambda?
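For reference, pimf as in MATLAB's Fuzzy Logic Toolbox is the Pi-shaped membership function whose breakpoints [a b c d] put the feet at a and d and the shoulders at b and c. One simple, admittedly heuristic, way to pick them from a data sample is by quantiles; the (C, lambda) reading below is my assumption, not a standard definition:

x.q <- quantile(x, c(0, 0.25, 0.75, 1))              # x is the data sample
a <- x.q[1]; b <- x.q[2]; cc <- x.q[3]; d <- x.q[4]  # feet at extremes, shoulders at quartiles
# if pimf(x, C, lambda) is read as a Pi curve centred at C with width lambda,
# C <- median(x) and lambda <- sd(x) are plausible starting values to tune from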
The selection of input variables is critical in order to find the optimal function in ANNs. Studies have proposed numerous algorithms for input variable selection (IVS). They are generally classified into three different groups: (1) dimension reduction, (2) variable selection, and (3) filters. Each group holds several algorithms with specific assumptions and limitations.
If a researcher decides to use an ANN, he might be happy to know:
1) Which approach is the most recommended for selecting ANN input variables?
2) What are the advantages and drawbacks of your choice with regard to other strategies?
3) Is the algorithm implemented in any statistical package (R or other free ones are more approachable)?
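As a concrete example from group (1), dimension reduction, principal components can be computed in base R and the leading components used as the ANN inputs; X is a hypothetical numeric predictor matrix:

pc <- prcomp(X, center = TRUE, scale. = TRUE)
summary(pc)            # cumulative proportion of variance per component
X.red <- pc$x[, 1:5]   # e.g. keep the first 5 components as network inputs

The usual caveat applies: components are linear combinations, so interpretability of individual inputs is lost.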
I have four input vectors, each a (1*36) matrix, and one output vector (1*36). I want to develop code using an ANN which not only processes and trains the network but also displays how the processing is going, i.e. the calculated values of MSE and R at each epoch. Finally, the code should generate a mathematical expression of how the output vector is related to the input vectors. The network is a simple curve-fitting network as given in the tutorials. Any sort of help will be appreciated.
Does anyone have any suggestions for free code (R or MATLAB) for using wavelet neural networks (WNN) for time series analysis and forecasting?
I am working on binary images and want to compare them pixel by pixel using the CLONALG clonal selection algorithm.
Can you suggest software for relay coordination other than ETAP and PSCAD?
In designing classifiers (using ANNs, SVMs, etc.), models are developed on a training set. But how should a dataset be divided into training and test sets? With little training data, our parameter estimates will have greater variance, whereas with little test data, our performance statistic will have greater variance. What is the compromise? Depending on the application or the total number of exemplars in the dataset, we usually split the dataset into training (60-80%) and testing (20-40%) without any principled reason. What is the best way to divide a dataset into training and test sets?
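A common principled compromise is to hold out the test set once and do all model selection by k-fold cross-validation inside the training portion, which keeps both variance sources under control; a sketch in base R with a hypothetical data frame dat:

set.seed(42)
n <- nrow(dat)
test.idx <- sample(n, round(0.2 * n))    # 20% held out, touched only once at the end
train    <- dat[-test.idx, ]
folds    <- sample(rep(1:10, length.out = nrow(train)))  # 10-fold CV assignment
# model selection: for each candidate, train on 9 folds, validate on the held-out
# fold, rotate; only the final chosen model ever sees dat[test.idx, ]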
How can we calculate the threshold in the PCA algorithm for face recognition if we use Euclidean distance and PPM images?
What value of this variable tells us whether a face is known or unknown?
I wonder, as there is no paper mentioning the value!
I want to know the procedure for applying an ANN to FET, in steps. Waiting for your suggestions or an article. Thanks in advance.
Can anyone help me regarding the classification of proteins with an artificial neural network?
In neuro-fuzzy systems, we usually use an ANN to learn how to design an FIS (if I'm wrong, please correct me first). That is, the ANN helps the FIS achieve better performance.
I want to know whether you have any experience with, or know of any literature on, using fuzzy inference systems (FIS) to improve the performance of an ANN?
I have to visualize and interpret what the hidden neurons are learning. My network is 784x64x10. I am trying to do this by visualizing the RBM (Restricted Boltzmann Machine) weights learnt after 1 epoch. Is my approach correct? Please guide me if I am wrong.
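Visualising each hidden unit's incoming weights as an image is a standard first step for MNIST-sized inputs, and it is reasonable even after one epoch (expect the filters to look noisy). A sketch, assuming W is your 784x64 visible-to-hidden weight matrix:

par(mfrow = c(8, 8), mar = c(0.1, 0.1, 0.1, 0.1))   # 64 filters in an 8x8 grid
for (j in 1:64) {
  image(matrix(W[, j], 28, 28)[, 28:1],             # reshape; flip for orientation
        axes = FALSE, col = gray.colors(64))
}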
I am creating a 'feedforwardnet' and using 'mapstd' to normalize the input
[pn,ps]=mapstd(training_data);
[tn,ts]=mapstd(target_data);
Now I want to calculate various error functions: RMSE, the correlation coefficient R, the Nash model efficiency (MEnash), etc. So how can I find the output of the trained network corresponding to the target data?
In multi-sensory fusion (as in the ventriloquist effect), it is broadly accepted that humans integrate the various modalities by weighting them depending on the variance (more precisely, the reliability) of the distribution of the corresponding modality (there is a large amount of literature on this subject). Most models I found for this effect are Bayesian ones which assume the different variances are known in advance (which seems to me very implausible from a developmental perspective). Do you know of any work trying to learn these variances from stimuli (preferably from a developmental perspective), or proposing a hypothesis about which mechanism this may be based on?
Thanks.
The use of recurrent neural networks in control systems is becoming a more and more interesting subject, but it rests on the training method that we use. In control, we need the training to be online, and this is what suits the applications we are trying to work on. So what are the learning approaches for dynamic ("recurrent") neural nets, and which ones are the best among them?
Is anyone aware of algorithms and theoretical work on hyperparameter optimization methods, especially for online learning algorithms?
I have the following scenario in mind.
Let's assume I use, e.g., the Passive-Aggressive I algorithm, which incorporates a hyperparameter "C" specifying the aggressiveness of the updates.
Usually, I would use a grid search/cross-validation combination to find the optimal "C" hyperparameter for a specific dataset.
However, for online learning I might also use prequential evaluation combined with a set of classifiers with different hyperparameters, and choose the one with the best accuracy/lowest loss/... on all previous examples/a window of examples/... I might also introduce a discount factor for the performance.
Does any theoretical work or thorough evaluation of this exist?
What are other possible approaches?
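A bare-bones sketch of the prequential option described above: run one online learner per candidate value, score each example before training on it, and keep the hyperparameter with the best running accuracy. The learner here is online logistic regression via SGD (a stand-in for PA-I), with hypothetical X (an n x p matrix) and y in {0, 1}:

prequential <- function(X, y, lr) {
  w <- rep(0, ncol(X)); correct <- 0
  for (t in seq_len(nrow(X))) {
    p <- 1 / (1 + exp(-sum(w * X[t, ])))          # test on example t first...
    correct <- correct + ((p > 0.5) == (y[t] == 1))
    w <- w + lr * (y[t] - p) * X[t, ]             # ...then train on it
  }
  correct / nrow(X)                               # prequential accuracy
}
grid <- c(0.001, 0.01, 0.1, 1)                    # candidate hyperparameter values
sapply(grid, function(lr) prequential(X, y, lr))  # pick the best-scoring value

A sliding window or a discount factor, as suggested above, only changes how `correct` is accumulated.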
I have recently become interested in studying neural network ensembles. I would like to explore these networks and their structure in more detail; do they have a special algorithm? Thanks.
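The most common recipe is bagging: train several networks on bootstrap resamples (and/or from different random initialisations) and average their outputs. A minimal sketch with nnet, assuming data frames train and test with a response column y:

library(nnet)
nets <- lapply(1:10, function(s) {
  set.seed(s)
  boot <- train[sample(nrow(train), replace = TRUE), ]  # bootstrap resample
  nnet(y ~ ., data = boot, size = 5, linout = TRUE, maxit = 300, trace = FALSE)
})
pred <- rowMeans(sapply(nets, predict, newdata = test)) # average the members

Beyond simple averaging, the classic variants are weighted combination, stacking (a meta-model over member outputs) and negative-correlation learning.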
I know that an Artificial Neural Network is a very suitable method for predicting the compressive strength of concrete. I wonder if there is a better way than this method for predicting this parameter?
I would like to explore and analyse classification-related data sets using an artificial neural network. What is a powerful tool, and a fast and effective way, to learn and analyse classification data sets? If there is any good tutorial on ANN basics and data analysis, please advise me.
I want to know the details of methods for identifying cancer using neural networks (for example, using scan images or using some samples). Please suggest some articles on how to identify cancer using Artificial Neural Networks.
I am working on a project on early detection of cascading collapse in power systems under steady-state conditions using a neural network approach. However, I have a problem with finding data for the neural network. Can anyone help me? Or any advice for me? I would like to use the IEEE 9/14-bus system as my model.
I am using the KDD data set and want to label all the features using a U-matrix, or is there another option to do that? Is there any function in MATLAB to do that?
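In MATLAB, the third-party SOM Toolbox is the usual route to a U-matrix; if R is an option, the kohonen package gives one directly. A sketch, assuming X is a (scaled) numeric matrix of the KDD features:

library(kohonen)
som.fit <- som(scale(X), grid = somgrid(10, 10, "hexagonal"))
plot(som.fit, type = "dist.neighbours")   # neighbour-distance plot, i.e. the U-matrix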
I've already constructed the architecture of the neural network, and it returns only the target, but I don't know how to predict future values.
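For future values, the usual trick is recursive forecasting: predict one step ahead, append the prediction to the input window, and repeat. A sketch with nnet on a hypothetical numeric series x:

library(nnet)
lags <- 4
X <- embed(x, lags + 1)                       # col 1 = x[t], cols 2.. = the lags
net <- nnet(X[, -1], X[, 1], size = 6, linout = TRUE, maxit = 500, trace = FALSE)
h  <- 12                                      # forecast horizon
win <- tail(x, lags)                          # last observed values, oldest first
fc  <- numeric(h)
for (i in 1:h) {
  fc[i] <- predict(net, matrix(rev(win), 1))  # feed the most recent `lags` values
  win   <- c(win[-1], fc[i])                  # slide the window onto the forecast
}
fc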
Are uncertainty analysis and sensitivity analysis necessary for data before using it in neural network modelling?
I am asking the community here to get a brief overview of whether there exists a method or algorithm that enables me to transform a traditional perceptron-based feed-forward artificial neural network into its open equation form.
The closed form of an equation for a linear system, for example, is
y = kx + d, and its open form is R = y - kx - d.
Any idea or hint is appreciated.
Kind regards and thank you in advance.
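For a single-hidden-layer perceptron this can be written down directly, by analogy with the linear example. With hidden activation s(.) (e.g. tanh), input-to-hidden weights w_ji, hidden biases b_j, hidden-to-output weights v_j and output bias b_o, the closed form is
y = b_o + sum_j v_j * s(b_j + sum_i w_ji * x_i)
and the open form is just the rearrangement
R = y - b_o - sum_j v_j * s(b_j + sum_i w_ji * x_i),
with R = 0 on the fitted relation. So the trained network is already an explicit algebraic expression; opening it mirrors the y = kx + d example exactly, only with more terms.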
Artificial Neural Networks and rule-based systems are types of knowledge-based systems. So, can we consider ANFIS (Adaptive Neuro-Fuzzy Inference Systems) to be knowledge-based systems?
Hello everyone, I am working on data prediction using the ANN method. I need to know whether there are any specific rules/references/mathematical formulas for determining the minimum number of observations required to apply ANN techniques to predict a value. Note that I have 80 observations of soil properties (14 variables) to predict the SPT value. Can I get any ideas?
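There is no universally agreed formula, but a widely quoted heuristic compares the number of trainable weights with the number of observations, asking for several (often ~10) observations per weight. A quick check for this case, assuming a single hidden layer of 5 units (a hypothetical architecture):

inputs <- 14; hidden <- 5; outputs <- 1
n.weights <- (inputs + 1) * hidden + (hidden + 1) * outputs  # 81 weights incl. biases
n.obs <- 80
n.obs / n.weights   # ~1 observation per weight - far below the ~10x heuristic

With 80 observations, a very small hidden layer, strong regularisation (weight decay) and cross-validation would be prudent.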
I am trying to carry out classification using an artificial neural network in the R software with Landsat images. I have read lots of articles, but most use MATLAB. Can anyone point me in the right direction and show me how I can do this?
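A sketch of one workable R pipeline with the raster and nnet packages; img (a 6-band Landsat stack whose layer names match the predictor columns) and samp (a data frame of band values plus a factor column class, extracted at training sites) are assumed to exist already:

library(raster)
library(nnet)
fit <- nnet(class ~ ., data = samp, size = 10, decay = 5e-4,
            maxit = 500, trace = FALSE)
# nnet returns class probabilities per pixel, so take the most probable class
map <- raster::predict(img, fit,
                       fun = function(model, data) apply(predict(model, data), 1, which.max))
plot(map)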
There are many intelligent models developed in computer engineering, including fuzzy logic, genetic algorithms, artificial neural networks and, more recently, the Adaptive Neuro-Fuzzy Inference System. My question is: to what extent are these tools applied in petroleum engineering? And, more specifically, how familiar are petroleum engineers with these tools?
Please explain your answer in simple wording.
I trained my neural network and computed the MAD, MAPE and RMSE for different numbers of hidden nodes. My question is that the coefficient of determination (R squared) is negative for every number of hidden neurons that I tested. The code is:
library(nnet)   # needed for nnet()
set.seed(1234)
R <- function(h.size){
  # NB: entropy = TRUE selects cross-entropy fitting, which is meant for
  # classification; for a regression target like mande, linout = TRUE is the
  # usual choice - otherwise the logistic output squashes predictions into (0, 1)
  net <- nnet(mande ~ bed + bes + ATM + salery.day.effect + off.day.effect +
                week.dat + work.day, data = train.data,
              size = h.size, decay = 5e-4, maxit = 500,
              entropy = TRUE, Hess = TRUE)
  ydi <- test.data$mande            # observed test values
  yi  <- predict(net, test.data)    # predicted test values
  ym  <- mean(test.data$mande)
  R   <- 1 - (sum((yi - ydi)^2) / sum((ydi - ym)^2))  # coefficient of determination
  c(h.size, R)
}
R <- t(sapply(2:20, FUN = R))  # note: this overwrites the function R with its results
the output is:
[,1] [,2]
[1,] 2 -0.07756268
[2,] 3 -1.45965479
[3,] 4 -1.64501223
[4,] 5 -0.79321810
[5,] 6 -0.54140203
[6,] 7 -0.67961167
[7,] 8 -0.83259432
[8,] 9 -0.77899111
[9,] 10 -0.57165255
[10,] 11 -0.76664634
[11,] 12 -0.88438609
[12,] 13 -0.76306553
[13,] 14 -0.57160636
[14,] 15 -0.56628827
[15,] 16 -0.70073809
[16,] 17 -0.71454018
[17,] 18 -0.71002645
[18,] 19 -0.66231219
[19,] 20 -0.53359629
Did I do this right? Why are all the R squared values negative? What should I do? Thanks
Imagine that you have several emotions as predictors (input layer) of a certain output (let's say... wellbeing). You create an ANN, and the best architecture contains 5 hidden nodes.
To determine the contribution of the individual emotions, apart from sensitivity analysis, can we also measure the value of the weights of the connections between them and the hidden nodes?
And if the contribution of a given emotion to the most important hidden nodes is inhibitory, can we assume that the relation between that emotion and the outcome is negative?
TIA
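Yes, connection weights are used exactly this way: the weight-product approach is known as Garson's algorithm, and Olden's variant preserves the sign, which speaks directly to the inhibitory-contribution question above. The NeuralNetTools package implements both for nnet-type models; net below stands for the fitted network:

library(NeuralNetTools)
garson(net)  # relative importance from input-hidden-output weight products (unsigned)
olden(net)   # signed connection-weight importance: negative values suggest a negative relation

The usual caveat: with correlated predictors, signed weight products are suggestive rather than conclusive, so it is worth cross-checking them against the sensitivity analysis.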
I would like to save the best network for further use and want to know when it is fully trained.
Thanks in advance for your replies.
What are some of the latest artificial neural network techniques for stock prediction?
I am using this method to predict the stock market.
I just want to know who first used artificial neural networks (ANNs) to detect and/or locate faults in electrical networks.
In an artificial neural network, which data normalization method is normally used? I found several types of normalization, including:
1. Statistical or Z-score normalization
2. Median normalization
3. Sigmoid normalization
Which normalization is best?
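For reference, the usual formulas, sketched in R for a numeric vector x (min-max scaling is added as the other common option):

z   <- (x - mean(x)) / sd(x)                  # statistical / Z-score normalization
mm  <- (x - min(x)) / (max(x) - min(x))       # min-max scaling to [0, 1]
med <- x / median(x)                          # median normalization
sig <- 1 / (1 + exp(-(x - mean(x)) / sd(x)))  # sigmoid normalization of the z-score

Which is "best" depends on the data and the transfer function; Z-score or min-max are the usual defaults for MLP inputs.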
Based on the cyber-physical system described in the article http://palensky.org/pdf/Palensky2013.pdf,
I suggest an ANN, but it requires an accurate data set for its training. Is there any other solution or suggestion for modelling a cyber-physical system?
I am confused by these two terminologies. Is there any difference between multilabel output and multiple outputs in the case of artificial neural networks? Please reply with some easy examples.
Thanks in advance
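A tiny illustration of how the distinction is usually drawn; the target matrices below are made-up examples:

# multilabel: several binary targets, any number of which may be 1 at once
Y.multilabel  <- rbind(c(1, 0, 1),    # sample 1 tagged with labels 1 and 3 (e.g. "cat", "outdoor")
                       c(0, 1, 0))    # sample 2 tagged with label 2 only ("dog")
# multiple outputs: several distinct target variables predicted jointly
Y.multioutput <- rbind(c(21.5, 0.7),  # sample 1: temperature and humidity
                       c(18.2, 0.9))  # sample 2: temperature and humidity

So every multilabel problem is a multiple-output problem with binary, non-exclusive outputs, but multiple outputs need not be labels at all.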
I need to develop a new VmAllocationPolicy, and I need to know how to test it against VmAllocationPolicyBio, and in which environment to run my work.
Can I test it in the built-in examples? What workloads should I use?