Science topic

Artificial Neural Network - Science topic

Explore the latest questions and answers in Artificial Neural Network, and find Artificial Neural Network experts.
Questions related to Artificial Neural Network
  • asked a question related to Artificial Neural Network
Question
3 answers
What are the characteristics of the agentic artificial intelligence that is currently being rapidly developed and deployed in Internet applications?
What are the characteristics of agent-based artificial intelligence, given the rapid development of the many types of IT applications available on the Internet that function as AI agents?
Agentic artificial intelligence (AI) is characterized by its ability to make decisions and act autonomously in a specific environment, usually in a way that adapts to changing conditions. It is a system that not only performs tasks within pre-programmed rules, but is also able to respond to external stimuli, make decisions based on collected information, and learn and adapt to new challenges. Of particular importance in the context of agentic artificial intelligence is the ability to interact with the environment, process data independently, and take actions to achieve specific goals or tasks, often without the need for direct human supervision.
I have described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
Please share your thoughts on this issue. Do you see more threats or more opportunities for labor markets in the development of artificial intelligence technologies?
And what is your opinion about it?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
I would like to invite you to join me in scientific cooperation,
Dariusz Prokopowicz
Relevant answer
Answer
Agentic AI is at the forefront of technological innovation, enhancing efficiency and capability across a wide range of Internet applications, from virtual assistants and customer-service chatbots to complex data-analysis tools and autonomous systems. Agentic artificial intelligence refers to AI systems that exhibit autonomy and decision-making capabilities, allowing them to operate independently in various contexts. Such systems can perform tasks without constant human supervision and can make decisions based on the available data and defined objectives. Many applications leverage advanced NLP capabilities, enabling these systems to engage in meaningful conversations, interpret user intent, and generate human-like responses. As agentic AI systems become more integrated into societal functions, developers are increasingly focused on addressing ethical concerns such as bias mitigation, transparency in decision-making, and ensuring fairness.
  • asked a question related to Artificial Neural Network
Question
11 answers
I have been working with artificial neural networks since 1986. It seemed to me that we were striving to reach the level of insects, and I do not quite understand some modern statements on this topic. Let us recall the known facts: 1. If we treat the human brain as an analogue of an artificial neural network, then its structure cannot be encoded in the genome: at least one and a half million genomes would be needed. 2. It is such a large neural network that it cannot be trained to the limit of learning in one lifetime, or even in many lifetimes. Hence the question: is there a developer of so-called artificial intelligence who seriously, and not for business reasons, believes that his creation will soon surpass man, and who is ready to swear to this on the Bible, the Koran, and the Torah?
Relevant answer
Answer
Apparently, no one is going to swear on the Bible, the Koran, and the Torah that they believe in the superiority of artificial intelligence. But if you find someone who will, it will be really interesting from a worldview perspective.
Given the ambiguity of this topic, it would be more convenient for me to use either the old name "artificial neural networks" or the name "artificial insects" instead of the frightening name "artificial intelligence". After all, we are talking about at most the same number of artificial neurons as in insects. Yes, insects communicate: bees have a language. They draw: something in a caterpillar is able to create a complex pattern on a butterfly's wing. Ants and bees even practice architecture.
Artificial insects can be dangerous, like some natural ones. I wonder whether the "locusts with iron faces" from the Apocalypse are a prediction of drones with a neural network. Where an official could once feel ashamed, there will now be a robot that suffers from nothing of the kind. But mostly these are toys.
  • asked a question related to Artificial Neural Network
Question
2 answers
I'm writing my final course paper for my college. The idea is to extract musical notes from polyphonic audio files using an artificial neural network. What would be the better method to obtain audio characteristics: the fast Fourier transform (FFT) or Mel-frequency cepstral coefficients (MFCCs)?
Relevant answer
Answer
The greatest compression of musical information is achieved in *.midi format. If this does not scare you, try to understand how it is done. For example, here:
Or simplify the task to your own capabilities ;)
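As a starting point, it may help to see what an FFT-style front end actually yields. The sketch below (plain Python, a naive O(N²) DFT rather than a fast FFT, with an assumed 8 kHz sample rate) synthesizes a 440 Hz tone, finds the dominant spectral bin, and converts it to the nearest MIDI note number. MFCCs, by contrast, compress the spectrum into a timbre-oriented representation, which is why spectral (FFT-based) features are usually preferred when the goal is pitch rather than timbre.

```python
import cmath
import math

def dft_magnitudes(frame):
    """Naive DFT magnitude spectrum; O(N^2), fine for one short frame."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

SR = 8000                       # assumed sample rate (Hz)
N = 512                         # analysis frame length
frame = [math.sin(2 * math.pi * 440.0 * t / SR) for t in range(N)]  # A4 tone

mags = dft_magnitudes(frame)
peak_bin = max(range(len(mags)), key=mags.__getitem__)
peak_hz = peak_bin * SR / N                  # bin spacing is SR/N = 15.625 Hz
midi_note = round(69 + 12 * math.log2(peak_hz / 440.0))   # MIDI 69 == A4
```

With a 512-sample frame the frequency resolution is coarse (about 15.6 Hz per bin), which already hints at the real difficulty of polyphonic transcription: separating several simultaneous, closely spaced pitches.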
  • asked a question related to Artificial Neural Network
Question
1 answer
Does anyone know of a free Python library for machine learning that can be used on a personal computer? I am particularly interested in neural network libraries similar to FANN.
Relevant answer
Answer
I would advise looking into PyTorch, which is free to use (BSD 3-Clause license). It works well with the majority of PC hardware, has a lot of community support, and learning it develops coding skills that can be transferred to other very popular packages, like transformers.
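To complement this: the mechanics that FANN-style libraries (and, at much larger scale, PyTorch) automate fit in a few dozen lines of plain Python. The sketch below trains a tiny one-hidden-layer sigmoid network on XOR with hand-written backpropagation; it is an illustration of the moving parts, not production code, and constant factors from the loss derivative are absorbed into the learning rate.

```python
import math
import random

random.seed(0)

def sig(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR training set: not linearly separable, so a hidden layer is required.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 4                                           # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
lr = 0.5

def forward(x):
    h = [sig(sum(w * xi for w, xi in zip(ws, x)) + b) for ws, b in zip(w1, b1)]
    y = sig(sum(w * hi for w, hi in zip(w2, h)) + b2)
    return h, y

def mean_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

initial_loss = mean_loss()
for _ in range(3000):                           # per-sample (online) updates
    for x, t in data:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)              # output delta
        for i in range(H):
            dh = dy * w2[i] * h[i] * (1 - h[i])    # hidden delta, via old w2
            w2[i] -= lr * dy * h[i]
            for j in range(2):
                w1[i][j] -= lr * dh * x[j]
            b1[i] -= lr * dh
        b2 -= lr * dy
final_loss = mean_loss()
```

Once these mechanics are clear, PyTorch's autograd replaces all of the hand-written gradient code.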
  • asked a question related to Artificial Neural Network
Question
2 answers
You are invited to jointly develop a SWOT analysis for generative artificial intelligence technology: What are the strengths and weaknesses of the development of AI technology so far? What are the opportunities and threats to the development of artificial intelligence technology and its applications in the future?
A SWOT analysis details the strengths and weaknesses of the past and present functioning of an entity, institution, process, problem or issue, as well as the opportunities and threats relating to its future performance over the coming months, quarters or, most often, the next several years. Artificial intelligence has been known conceptually for more than half a century, but its dynamic technological development has occurred especially in recent years. Currently, many researchers and scientists address, in numerous publications and in debates at scientific symposia, conferences and other events, the social, ethical, business, economic and other aspects of the development of artificial intelligence technology and its applications in various sectors of the economy and in various fields of potential application in companies, enterprises, and financial and public institutions. Many of the determinants of impact and risk associated with the development of generative artificial intelligence technology may be heterogeneous, ambiguous and multifaceted, depending on the context of the technology's potential applications and the operation of other factors. For example, the impact of this technology's development on future labor markets is not a homogeneous and unambiguous problem. On the one hand, the more critical assessments point mainly to the potentially large scale of job losses among people employed in various occupations, should it prove cheaper and more convenient for businesses to employ highly sophisticated robots equipped with generative artificial intelligence instead of humans.
On the other hand, some experts analyzing the ongoing impact of AI applications on labor markets offer more optimistic visions of the future. They point out that over the next few years artificial intelligence will largely not deprive people of work; rather, work will change, AI will support employees in carrying out their work effectively, it will significantly increase the productivity of people using specific generative AI solutions at work, and labor markets will also change in other ways, i.e., through the emergence of new types of professions and occupations arising from the development of AI applications. In this way, the development of AI applications may generate both opportunities and threats in the future, even within the same field of application, the same area of a company's or enterprise's development, or the same economic sector. Arguably, such dual scenarios of the potential development of AI technology and its applications, scenarios made up of positive and negative aspects, can be considered for many other factors influencing this development and for different fields of application. For example, the application of artificial intelligence in new online media, including social media sites, is already generating both positive and negative effects. Positive aspects include the use of AI technology in online marketing carried out on social media, among others. Negative aspects of Internet applications using AI solutions include the generation of fake news and disinformation by untrustworthy, unethical Internet users. Consider, in addition, the use of AI technology to control an autonomous vehicle or to develop a formula for a new drug against particularly life-threatening human diseases.
On the one hand, this technology can be of great help to humans; but what happens when mistakes are made that result in a life-threatening car accident, or when particularly dangerous side effects of the new drug emerge after some time? Will the payment of compensation by an insurance company solve the problem? To whom will responsibility be shifted for such possible errors and their particularly negative effects, which we cannot at present completely exclude? What other examples can you give of artificial intelligence applications with ambiguous consequences? What are the opportunities and risks of past applications of generative artificial intelligence technology versus the opportunities and risks of its future potential applications? These considerations can be extended if, in this kind of SWOT analysis, we take into account not only generative artificial intelligence, its past and prospective development and its growing number of applications, but also the so-called general artificial intelligence that may arise in the future. General artificial intelligence, if built by technology companies, will be capable of self-improvement, and with its capacity for intelligent, multi-criteria, autonomous processing of large sets of data and information it will in many respects surpass the intellectual capacity of humans.
The key issues of opportunities and threats to the development of artificial intelligence technology are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
I invite you to jointly develop a SWOT analysis for generative artificial intelligence technology: What are the strengths and weaknesses of the development of AI technology to date? What are the opportunities and threats to the development of AI technology and its applications in the future?
What are the strengths, weaknesses, opportunities and threats to the development of artificial intelligence technology and its applications?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Given the dynamic development of generative artificial intelligence technology and its applications in recent years, I would like to address a question to Dear Researchers and Scientists: In your opinion, what are the strengths, weaknesses, opportunities and threats to the development of artificial intelligence technology and its applications? Please respond based on your thoughts, considerations, research, first-hand experience, and use of applications equipped with AI technology. I do not mean the unreflective generation of answers by a so-called intelligent chatbot, but your own opinion on this topic.
I am conducting research on this issue. I have described the key issues of opportunities and threats to the development of artificial intelligence technology and the results of my research in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
What do you think about this?
What is your opinion on this topic?
Best wishes,
Dariusz Prokopowicz
  • asked a question related to Artificial Neural Network
Question
4 answers
DI-60 is an automated digital cell morphology system that uses artificial neural network technology to locate, identify, and pre-classify WBCs and pre-characterize RBCs. It comprises an automated microscope that scans the peripheral blood smear (PBS), a digital camera that captures images of all cellular and particulate matter on the slide, and a computer that classifies each image using a complex algorithm.
Relevant answer
The DI-60 system can be considered an excellent tool for identifying blood components through its digital camera and automated microscope. Best wishes
  • asked a question related to Artificial Neural Network
Question
5 answers
For ANN, I have already looked into MATLAB, but I have no idea how to use it or where to get the software. Please also suggest any other software that can be used to statistically cross-check RSM.
Relevant answer
Answer
You may find a large amount of raw data for it at www.rawdatalibrary.net
  • asked a question related to Artificial Neural Network
Question
3 answers
I'm trying to use reinforcement learning with live EEG measurements. However, just 2000 measurements/iterations take 16.6 minutes to measure and it seems I need at least 10 hours of live measurements before some kind of usable results.
Can you recommend ways to reduce the number of measurements and optimization iterations needed in reinforcement learning?
I have tried to keep the neural network as small as possible so there are fewer parameters to learn.
Relevant answer
Answer
You can use active learning methods to improve data efficiency in reinforcement learning. This technique involves querying the model for uncertainty estimates and selectively collecting data in areas where predictions are uncertain. By doing this, you can decrease the reliance on random exploration. However, it is important to consider that the optimal strategy will vary depending on your specific EEG task, available resources, and computational limitations. It is therefore recommended to experiment with different approaches in order to find the right balance between efficiency and performance in RL.
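One concrete form of the uncertainty-based querying described above is ensemble disagreement: keep a few cheap models of the stimulus-response mapping and measure only where their predictions diverge most. The sketch below is a hypothetical, pure-Python illustration (the three linear "models" and their coefficients are made up); in a real EEG pipeline the ensemble members would be small networks trained on bootstrapped subsets of the data already collected.

```python
# Hypothetical ensemble: three rough models of the same stimulus->response map.
# Their disagreement (prediction variance) serves as an uncertainty estimate.
ensemble = [lambda x, a=a: a * x for a in (0.8, 1.0, 1.3)]

candidates = [x / 10 for x in range(-20, 21)]   # stimuli we *could* measure

def disagreement(x):
    preds = [m(x) for m in ensemble]
    mean = sum(preds) / len(preds)
    return sum((p - mean) ** 2 for p in preds) / len(preds)

# Record EEG only for the k candidates the ensemble is least sure about,
# instead of sweeping all 41.
k = 5
to_measure = sorted(candidates, key=disagreement, reverse=True)[:k]
```

After each batch of real measurements the ensemble is retrained and the selection repeated, so recording time is concentrated where it reduces uncertainty the most.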
  • asked a question related to Artificial Neural Network
Question
14 answers
The question of whether machines will become more intelligent than humans is a common one, with the definition of intelligence being key to the comparison. Computers have several advantages over humans, such as better memories, faster data gathering, continuous work without sleep, no mathematical errors, and better multitasking and planning capabilities. However, most AI systems are specialized for very specific applications, while humans can use imagination and intuition when approaching new tasks in new situations. Intelligence can also be defined in other ways, such as the possession of a group of traits, including the ability to reason, represent knowledge, plan, learn, and communicate. Many AI systems possess some of these traits, but no system has yet acquired them all.
Scholars have designed tests to determine if an AI system has human-level intelligence, such as the Turing Test. The term "singularity" is sometimes used to describe a situation in which an AI system develops agency and grows beyond human ability to control it. So far, experts continue to debate when—and whether—this is likely to occur. Some AI systems can pass this test successfully but only over short periods of time. As AI systems grow more sophisticated, they may become better at translating capabilities to different situations the way humans can, resulting in the creation of "artificial general intelligence" or "true artificial intelligence."
The history of artificial intelligence includes several milestones that highlight its advancement relative to human intelligence. These include the first autonomous robots developed by William G. Walter (1948-49) and the Turing Test proposed by Alan Turing (1950), which probed the thinking capabilities of machines. In 1951, Marvin Minsky and Dean Edmonds built the first artificial neural network machine, an early landmark of artificial intelligence. Neural-network ideas were applied to machine learning by Arthur Samuel in 1959, and ELIZA, the first natural language processing program, followed in the 1960s. Artificial intelligence has since been applied in robotics, gaming, and classification.
The first AI robot, Shakey, developed in 1966, became the first intelligent robot to perceive its environment, plan routes, recover from errors, and communicate in simple English. A further advancement was achieved in 1969, when an optimized backpropagation algorithm was developed by Arthur Bryson and Yu-Chi Ho, which enabled AI systems to improve on their own using their past errors. The introduction of the Internet in 1991 enabled online data sharing, which had a significant impact on the advancement of AI. Organizations such as IBM and Caltech subsequently developed databases that include millions of labeled images available for computer vision research. The publication of the AlexNet architecture is considered one of the most influential papers in computer vision.
In 2016, AI system AlphaGo, created by Google subsidiary DeepMind, defeated Go champion Lee Se-dol four matches to one. In 2018, Joy Buolamwini and Timnit Gebru published the influential report, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” demonstrating that machine-learning algorithms were prone to discrimination based on classifications such as gender and race. In 2018, Waymo’s self-driving taxi service was offered in Phoenix, Arizona. In 2020, Artificial intelligence research laboratory OpenAI announced the development of Generative Pre-Trained Transformer 3 (GPT-3), a language model capable of producing text with human-like fluency.
Relevant answer
Answer
Hello everyone,
While AI has the potential to match human intelligence in various ways, the true extent of its capabilities will largely depend on advancements in quantum computing. Human intelligence encompasses not only factual knowledge but also creativity, emotional understanding, and intuition—qualities that current AI models can mimic but not fully replicate. While computers excel at memory and data processing, humans uniquely apply imagination and intuition to novel situations. The concept of ‘artificial general intelligence’ aims to create machines that surpass human abilities, but achieving this remains an ongoing challenge.
  • asked a question related to Artificial Neural Network
Question
4 answers
How do we evaluate the importance of individual features for a specific property using ML algorithms (say, gradient boosting regression, GBR), and how do we construct an optimal feature set for our problem?
image taken from: 10.1038/s41467-018-05761-w
Relevant answer
Answer
You can do it in many ways. PCA is a nice way to gather important parameters. Another way would be to train multiple models with and without specific features and see how that influences the error. Correlations can also help. However, in most cases you need to use your head and see which parameters affect your results, and why and how. In some cases ANOVA is a nice technique, but only if you think rather than blindly trust the results. For example, speed in metres and speed in centimetres are both just speed, so using one of them is enough. I know that is a simplistic example, but it shows the point. Know your data, analyse what impacts the results, and you will do great. Good luck; I hope this helps even a bit.
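The "train with and without specific features" idea has a cheaper cousin, permutation importance: shuffle one column and see how much the error grows. Below is a toy, pure-Python illustration with a known ground truth; the "model" is a hard-coded stand-in for a fitted GBR, an assumption for brevity. With scikit-learn the same idea is available as sklearn.inspection.permutation_importance.

```python
import random
random.seed(0)

# Toy data: y depends on feature 0 only; feature 1 is pure noise.
X = [[random.random(), random.random()] for _ in range(200)]
y = [3.0 * x0 for x0, _ in X]

model = lambda row: 3.0 * row[0]    # stand-in for a fitted regressor

def mse(rows, targets):
    return sum((model(r) - t) ** 2 for r, t in zip(rows, targets)) / len(rows)

baseline = mse(X, y)                 # 0.0 here: the toy model is exact

def permutation_importance(col):
    """Increase in MSE after shuffling one feature column."""
    shuffled = [r[:] for r in X]
    perm = [r[col] for r in shuffled]
    random.shuffle(perm)
    for r, v in zip(shuffled, perm):
        r[col] = v
    return mse(shuffled, y) - baseline

imp0 = permutation_importance(0)     # large: breaking x0 ruins predictions
imp1 = permutation_importance(1)     # zero: x1 never mattered
```

Ranking features by this score, then dropping those near zero and retraining, is one simple route to the "optimal feature set" the question asks about.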
  • asked a question related to Artificial Neural Network
Question
3 answers
Which machine learning algorithms are best suited in materials science for problems that aim to determine the properties and functions of existing materials? E.g., the typical problem of determining the band gap of solar cell materials using ML.
Relevant answer
Answer
Maybe also consider hybrid ML approaches such as RF-MCMC (random forest combined with Markov chain Monte Carlo).
  • asked a question related to Artificial Neural Network
Question
6 answers
..
Relevant answer
Answer
An Artificial Neural Network (ANN) is a computational model inspired by how biological neural networks in the human brain process information.
Some commonly used ANNs are CNN, RNN, LSTM, and GAN.
  • asked a question related to Artificial Neural Network
Question
4 answers
The average RMSE in my study is 0.523 for training data and 0.514 for testing. I have used 90% of the data for training and 10% for testing. However, I am getting a high average RMSE; is this acceptable? Is there any study available to cite an optimum limit for average RMSE?
Relevant answer
Answer
That's hard to say without more information. Is 1 km far? Compared to 1 m? Compared to the distance to the Moon? If you don't have constraints on how large the error can be, you might want to include more metrics as well; R², percentage errors, and others might be a good start.
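To make this concrete: RMSE carries the units of the target, so 0.52 is only interpretable against the target's spread. A common remedy is to report scale-free companions such as R² and range-normalized RMSE alongside it. The numbers below are invented purely for illustration.

```python
import math

y_true = [2.0, 4.0, 6.0, 8.0, 10.0]
y_pred = [2.5, 3.5, 6.5, 7.5, 10.5]

n = len(y_true)
errors = [p - t for p, t in zip(y_pred, y_true)]
rmse = math.sqrt(sum(e * e for e in errors) / n)          # 0.5, in target units

mean_y = sum(y_true) / n
ss_res = sum(e * e for e in errors)
ss_tot = sum((t - mean_y) ** 2 for t in y_true)
r2 = 1 - ss_res / ss_tot                                  # 0.96875, unitless

# RMSE normalized by the target range puts it on a comparable 0-1-ish scale.
nrmse = rmse / (max(y_true) - min(y_true))                # 0.0625
```

Here an RMSE of 0.5 looks large in isolation but corresponds to R² near 0.97, which is why the scale of the target matters more than the raw RMSE value.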
  • asked a question related to Artificial Neural Network
Question
56 answers
If man succeeds in building a general artificial intelligence, will this mean that man has become better acquainted with the essence of his own intelligence and consciousness?
If man succeeds in building a general artificial intelligence, i.e., AI technology capable of self-improvement, independent development and perhaps also achieving a state of artificial consciousness, will this mean that man has fully learned the essence of his own intelligence and consciousness?
Suppose man succeeds in building a general artificial intelligence, i.e., AI technology capable of self-improvement, independent development, and perhaps also attaining a state of artificial consciousness; perhaps this will mean that man has fully understood the essence of his own intelligence and consciousness. If this happens, which will come first? Will man first understand the essence of his own intelligence and consciousness and then build such a general artificial intelligence, or vice versa: will a general artificial intelligence and artificial consciousness capable of self-improvement and development be created first, with man only then, thanks to that technological progress, fully understanding the essence of his own intelligence and consciousness? In my opinion, it is most likely that both processes will develop simultaneously, on the basis of mutual feedback.
I have described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
If man succeeds in building a general artificial intelligence, i.e., AI technology capable of self-improvement, independent development and perhaps also achieving a state of artificial consciousness, will this mean that man has fully learned the essence of his own intelligence and consciousness?
If man succeeds in building a general artificial intelligence, will this mean that man has better learned the essence of his own consciousness?
And what is your opinion about it?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
Relevant answer
Answer
It will be very difficult to create an AGI... and it will be a different type of intelligence...
The cognitive system will have to go through "crime and punishment"... The genie will need to be let out of the bottle... Only intense mental suffering shapes humanity and spirituality... You have to love and hate at the same time... Mental struggle necessarily leads to a violation of ethics, morality...
Governments are trying to ban this path of AI development... But without this, AGI cannot be created...
Without a Soul there is no AGI! ... there is no consciousness and human intelligence...
AGI will appear like Covid-19... from secret laboratories... after many, many decades...
  • asked a question related to Artificial Neural Network
Question
6 answers
Hello,
I'm writing a paper and used various optimizers to train the model. I changed them during training to escape local minima, and I know that people do this, but I don't know what to call the technique in the paper. Does it even have a name?
It is like simulated annealing in optimization, but instead of playing with the temperature (step) we switch optimizers between Adam, SGD and RMSprop. I can say for sure that it gave fantastic results.
P.S. Thank you for the replies, but learning rate scheduling is for changing the learning rate, and optimizer scheduling is for other optimizer parameters; in general that is hyperparameter tuning. What I'm asking about is switching between optimizers, not modifying their parameters.
Thanks for support,
Andrius Ambrutis
Relevant answer
Answer
At a machine learning event this week, I had a conversation with some leading scientists, and the reply was that it can be called optimizer switching, or that I can simply give it my own name in the research paper. I think I will stick with Optimizer Switching (OS).
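For readers curious about the bookkeeping involved, here is a minimal pure-Python sketch of optimizer switching on a toy quadratic loss. The schedule, step sizes, and the hand-rolled SGD and RMSprop-style updates are illustrative assumptions, not the setup from the paper; the key practical detail is that optimizer state (here the running average v) is reset at each switch, just as constructing a fresh torch.optim optimizer would do.

```python
import math

def f(x):                     # toy loss with minimum at x = 3
    return (x - 3) ** 2

def grad(x):                  # its gradient
    return 2 * (x - 3)

x = 0.0
phase_len = 50
schedule = ["sgd", "rmsprop", "sgd"]    # hypothetical switching plan

for opt in schedule:
    v = 0.0                              # fresh optimizer state at each switch
    for _ in range(phase_len):
        g = grad(x)
        if opt == "sgd":
            x -= 0.05 * g                # plain gradient step
        else:                            # RMSprop-style adaptive step
            v = 0.9 * v + 0.1 * g * g
            x -= 0.05 * g / (math.sqrt(v) + 1e-8)

final_loss = f(x)
```

On a convex toy problem the switching is unnecessary, but the structure (phases, per-phase state reset) is exactly what the Adam/SGD/RMSprop schedule in the question requires.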
  • asked a question related to Artificial Neural Network
Question
3 answers
I am trying to train a CNN model in MATLAB to predict the mean value of a random vector (the MATLAB code named Test_2 is attached). To clarify: I generate a random vector with 10 components (using the rand function) 500 times. The figure of each vector versus 1:10 is plotted and saved separately, and the mean value of each of the 500 randomly generated vectors is calculated and saved. The saved images are then used as the input (X) for training (70%), validation (15%) and testing (15%) of a CNN model that is supposed to predict the mean value (Y) of the mentioned random vectors. However, the RMSE of the model is too high; in other words, the model does not train despite changes to its options and parameters. I would be grateful if anyone could kindly advise.
Relevant answer
Answer
Dear Renjith Vijayakumar Selvarani and Dear Qamar Ul Islam,
Many thanks for your notice.
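A useful sanity check for this task, independent of MATLAB: the target (the mean) is an exactly linear function of the ten raw components, so a single linear neuron trained on the vectors themselves fits it almost perfectly. If a CNN trained on rendered plots of those vectors fails, the information loss in the image-rendering step is the likely culprit rather than the network options. The sketch below mirrors the described setup (500 random 10-component vectors) in plain Python; it is an illustrative alternative pipeline, not the attached Test_2 code.

```python
import random
random.seed(0)

DIM, N = 10, 500
X = [[random.random() for _ in range(DIM)] for _ in range(N)]
y = [sum(row) / DIM for row in X]             # target: the vector's mean

w = [0.0] * DIM                                # one linear neuron
b = 0.0
lr = 0.3

for _ in range(200):                           # plain batch gradient descent
    preds = [sum(wi * xi for wi, xi in zip(w, row)) + b for row in X]
    errs = [p - t for p, t in zip(preds, y)]
    for j in range(DIM):
        w[j] -= lr * sum(e * row[j] for e, row in zip(errs, X)) / N
    b -= lr * sum(errs) / N

mse = sum((sum(wi * xi for wi, xi in zip(w, row)) + b - t) ** 2
          for row, t in zip(X, y)) / N         # near zero after training
```

The learned weights approach 1/10 each, i.e., the averaging operation itself, which is why feeding the raw 10 numbers (rather than images of their plot) makes the problem trivial.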
  • asked a question related to Artificial Neural Network
Question
4 answers
..
Relevant answer
Answer
Dear Doctor
"Neural Networks are networks used in Machine Learning that work similar to the human nervous system. It is designed to function like the human brain where many things are connected in various ways.  Artificial Neural Networks find extensive applications in areas where traditional computers don’t fare too well. There are many kinds of artificial neural networks used for the computational model.
Top 7 Artificial Neural Networks in Machine Learning
1. Modular Neural Networks
2. Feedforward Neural Network – Artificial Neuron
3. Radial basis function Neural Network
4. Kohonen Self Organizing Neural Network
5. Recurrent Neural Network(RNN)
6. Convolutional Neural Network
7. Long / Short Term Memory"
  • asked a question related to Artificial Neural Network
Question
2 answers
Artificial intelligence experts.
Relevant answer
Answer
Artificial neural networks are inspired by biological neurons, where each neuron learns from an event and initiates a signaling process to relay information to other neurons until a conclusion is reached. Artificial neural networks operate by using neurons that acquire knowledge from data points and adjust their connections, i.e., biases and weights, in order to classify data based on its features. An artificial neural network (ANN) consists of neurons organized into distinct sections: the input layer, hidden layers, and output layer. An ANN can be used for the same tasks as other classifiers in machine learning. In recent years, however, deep learning methods have emerged, employing neural networks with up to several hundred layers of neurons. These deep learning models have shown superior performance compared with many other classification approaches, including the multilayer perceptron.
  • asked a question related to Artificial Neural Network
Question
3 answers
What are the possibilities for integrating an intelligent chatbot into web-based video conferencing platforms used to date for remote conferences, symposia, training, webinars and remote education conducted over the Internet?
During the SARS-CoV-2 (COVID-19) coronavirus pandemic, the use of web-based videoconferencing platforms increased significantly owing to quarantine periods in many countries, restrictions on the use of physical retail outlets, cultural services and various public places, and government-imposed lockdowns of businesses operating in selected, mainly service, sectors of the economy. The periodic shift of education to a remote form conducted via online videoconferencing platforms also increased the scale of ICT use in education. Meanwhile, since the end of 2022, when OpenAI released ChatGPT, one of the first widely available intelligent chatbots, the development of artificial intelligence applications has accelerated in various online information services, and generative artificial intelligence technology has been implemented in various aspects of business conducted in companies and enterprises. The tools made available on the Internet by technology companies as intelligent language models have been taught to converse with Internet users through artificial neural networks modeled on the structure of the human neuron and through deep learning on knowledge bases and databases containing large amounts of data and information drawn from many websites. There are now opportunities to combine these technologies so as to obtain new applications and/or functionalities of web-based videoconferencing platforms enriched with tools based on generative artificial intelligence.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What are the possibilities of connecting an intelligent chatbot to web-based video conferencing platforms used so far for remote conferences, symposia, training, webinars and remote education conducted over the Internet?
What are the possibilities of integrating a smart chatbot into web-based video conferencing platforms?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Career-ending humiliation is possible, without having time to detect a hallucination.
  • asked a question related to Artificial Neural Network
Question
1 answer
..
Relevant answer
Answer
Dear Doctor
"In training a neural network, you notice that the loss does not decrease in the few starting epochs. The reasons for this could be: The learning is rate is low. Regularization parameter is high."
  • asked a question related to Artificial Neural Network
Question
4 answers
Is AI emotional intelligence already being developed?
Is artificial emotional intelligence already being created that can simulate human emotions and/or artificial consciousness generated by the ever-improving generative artificial intelligence technology taught human skills based on a deep learning process carried out using artificial neural networks?
At present, all the dominant and most recognizable technology companies and developers of online information services either already offer their intelligent chatbots online or are working on such solutions and will soon make them available. Based on advanced generative language models, intelligent chatbot technologies that are taught specific "human skills" through deep learning and artificial neural networks are constantly being improved. Leading technology companies are also competing to build advanced general artificial intelligence systems, which may in the future far surpass the capabilities of human intelligence and the processing that takes place in the human central nervous system, in human neurons, in the human brain. Some scientific institutes conducting research in robotics, including androids equipped with generative artificial intelligence, are striving to build autonomous, intelligent androids with which humans will be able to cooperate in various situations, which will be able to be employed in companies and enterprises instead of humans, with which it will be possible to hold discussions similar to those humans have among themselves, and which will assist humans, carry out tasks ordered by humans and perform work that is difficult for humans. In the laboratories of scientific institutes developing this kind of intelligent robotics technology, research is also being carried out to create artificial emotional intelligence and artificial consciousness. In order for artificial emotional intelligence and artificial consciousness built in the future not to turn out to be just a dummy and/or a simulation of human emotional intelligence and human consciousness, it is first necessary to fully understand what human emotional intelligence and human consciousness are and how they work.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Is an artificial emotional intelligence already being created that can simulate human emotions and/or artificial consciousness generated by the ever-improving generative artificial intelligence technology taught human skills based on a deep learning process carried out using artificial neural networks?
Is an artificial emotional intelligence that can simulate human emotions and/or artificial consciousness already being created?
Is AI emotional intelligence already being created?
And what is your opinion about it?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Yet to hear of such developments.
Humanoids or androbots with emotional intelligence abilities are the apex of the development of the AI fuss. It will be very intriguing to have such developments come to the fore. If AI development reaches such a level, it can no longer be said that AI is merely for productivity; it must be deemed a perfect replacement for humans in enterprises and industrial revolutions.
Apart from AI developments targeted at productivity, no other firms, software houses or AI companies have currently taken the initiative to develop such an ethically sound AI system with emotional intelligence. It is actually possible for such AI to be developed and to be able to independently take reasonable and emotionally charged decisions. This, to me, is programmable. Developers and AI firms should, within the next few years, focus on this development to achieve a perfect replacement for humans in some primitive industries such as agriculture, pharmaceuticals and automotive engineering.
As I have indicated elsewhere in another discussion of yours, at that point of development AI shall be deemed to have opened Pandora's box.
  • asked a question related to Artificial Neural Network
Question
4 answers
..
Relevant answer
Answer
Dear Doctor
"A neural network is a machine learning (ML) model designed to mimic the function and structure of the human brain. Neural networks are intricate networks of interconnected nodes, or neurons, that collaborate to tackle complicated problems.
Also referred to as artificial neural networks (ANNs) or deep neural networks, neural networks represent a type of deep learning technology that's classified under the broader field of artificial intelligence (AI).
Neural networks are widely used in a variety of applications, including image recognition, predictive modeling and natural language processing (NLP). Examples of significant commercial applications since 2000 include handwriting recognition for check processing, speech-to-text transcription, oil exploration data analysis, weather prediction and facial recognition.
Applications of artificial neural networks
Image recognition was one of the first areas in which neural networks were successfully applied. But the technology's uses have expanded to many more areas:
  • Chatbots.
  • NLP, translation and language generation.
  • Stock market predictions.
  • Delivery driver route planning and optimization.
  • Drug discovery and development.
  • Social media.
  • Personal assistants."
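As a minimal illustration of the "interconnected nodes" described above, here is a sketch of a tiny feedforward network with one hidden layer, written in plain Python; the weights are invented for illustration and do not come from a trained model.

```python
# Tiny feedforward network: inputs -> hidden neurons -> single output.
# Illustrative only; the weights below are arbitrary, not trained.
import math

def sigmoid(x):
    """Classic neuron activation squashing any value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    """One forward pass through a one-hidden-layer network."""
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

# Two inputs, two hidden neurons, one output neuron.
y = forward([1.0, 0.5],
            hidden_weights=[[0.4, -0.6], [0.7, 0.2]],
            output_weights=[1.5, -1.1])
print(y)  # a value strictly between 0 and 1
```

Training such a network means adjusting the weight lists (usually by backpropagation) so that the output matches known examples; the forward pass itself stays exactly this simple.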
  • asked a question related to Artificial Neural Network
Question
3 answers
What is the future of generative artificial intelligence technology applications in finance and banking?
The banking sector is among the sectors where new ICT, Internet and Industry 4.0/5.0 information technologies, including but not limited to generative artificial intelligence, are being implemented particularly rapidly. Commercial online and mobile banking have been among the particularly fast-growing areas of banking in recent years. In addition, the SARS-CoV-2 (Covid-19) coronavirus pandemic, in conjunction with government-imposed lockdowns of selected sectors of the economy, mainly service companies, and national quarantines, accelerated the development of online and mobile banking. Solutions such as contactless payments made with a smartphone developed rapidly. On the other hand, due to the acceleration of the development of online and mobile banking, the increase in the scale of payments made online and the growth of online settlements related to e-commerce, the scale of cybercriminal activity has also increased since the pandemic. When OpenAI put its first intelligent chatbot, ChatGPT, online for Internet users in November 2022 and other technology companies accelerated the development of analogous solutions, commercial banks saw great potential for themselves. More chatbots modeled on ChatGPT and new tools based on generative artificial intelligence technology quickly began to appear on the Internet. Commercial banks thus began to adapt the emerging AI solutions to their needs on their own. The IT professionals employed by the banks proceeded to teach intelligent chatbots and to implement generative-AI-based tools in selected processes and activities performed permanently and repeatedly in the bank.
Accordingly, AI technologies are increasingly being implemented by banks in cyber-security systems; in processes for analyzing the creditworthiness of potential borrowers; in improving marketing communication with bank customers; in automating the remote telephone and Internet communication of banks' call center departments; in developing market analyses carried out on Big Data Analytics platforms using large sets of data and information extracted from various bank information systems, from databases available on the Internet, from online financial portals and from thousands of processed posts and comments of Internet users on social media; and in the increasingly automated, real-time development of industry analyses and the extrapolation of market trends into the future on the basis of current large sets of information and data. The scale of new applications of generative artificial intelligence technology in the various banking processes carried out in commercial banks is growing rapidly.
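To make the creditworthiness-analysis use case above concrete, here is a toy sketch of an automated scoring step: a logistic scoring function over a few applicant features. The feature names, weights and threshold are invented for illustration and do not represent any bank's actual model.

```python
# Toy logistic creditworthiness scoring; all weights are illustrative.
import math

WEIGHTS = {"income_to_debt": 2.0, "years_employed": 0.3, "late_payments": -1.5}
BIAS = -1.0

def credit_score(applicant):
    """Return a probability-like score in (0, 1) from a logistic model."""
    z = BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def recommend(applicant, threshold=0.5):
    """Turn the score into a recommendation (not a final credit decision)."""
    return "grant" if credit_score(applicant) >= threshold else "refer to analyst"

good = {"income_to_debt": 1.2, "years_employed": 5, "late_payments": 0}
risky = {"income_to_debt": 0.3, "years_employed": 1, "late_payments": 3}
print(recommend(good), recommend(risky))
```

In practice such a model would be trained on historical repayment data and embedded in a wider workflow; note that here the output is deliberately a recommendation, with the final decision left to a human analyst.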
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What is the future of generative artificial intelligence technology applications in finance and banking?
What is the future of AI applications in finance and banking?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
I envision a time when an AI bot records every customer's banking history for analysis of risk, fraud, and other finance-related assessments. It might be a new form of credit score.
  • asked a question related to Artificial Neural Network
Question
4 answers
What are the opportunities for creating and improving sustainable business models, sustainable economic development strategies developed and implemented in business entities through the use of artificial intelligence?
In the context of the integration of business entities into the currently developing green transformation of the economy, i.e. adding the achievement of sustainable development goals to the company's mission, implementing green technologies and eco-innovations that reduce emissions of greenhouse gases, exhaust fumes and other pollutants negatively affecting the environment, and implementing green investments that reduce the energy intensity of buildings and economic processes, the scale of opportunities for improving sustainable business models is also growing. The aforementioned sustainable business models are an important part of the green transformation of a business conducted in a company or enterprise. On the other hand, the scale of opportunities for improving sustainable business models can be significantly increased by implementing new ICT and Industry 4.0/5.0 information technologies in business, including but not limited to generative artificial intelligence technologies. Recently, the number and variety of applications of generative artificial intelligence in various business fields of companies and enterprises has been growing rapidly. Intelligent applications equipped with generative artificial intelligence technology are appearing openly on the Internet and can be applied to complex and resource-intensive data- and information-processing tasks, i.e. activities that until recently were performed only by humans. In addition, intelligent chatbots and other intelligent applications that automate the execution of complex, multi-faceted, multi-criteria tasks perform them in a fraction of the time and with much higher efficiency than if the same tasks were performed by a human.
The ability of tools equipped with generative artificial intelligence to intelligently execute an ordered command is developed by teaching them in a process of deep learning and by applying advanced information systems based on artificial neural networks.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What are the possibilities for creating and improving sustainable business models, sustainable economic development strategies developed and implemented in business entities through the application of artificial intelligence?
What are the possibilities for improving sustainable business models through the application of artificial intelligence?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
In recent years, the concept of sustainability has gained significant attention in the business world. As companies strive to minimize their environmental impact and contribute to social well-being, the integration of artificial intelligence (AI) into sustainable business models has emerged as a promising solution. AI possesses immense potential for improving sustainability by enhancing efficiency, reducing waste, and enabling informed decision-making.
One possibility for creating and improving sustainable business models through AI lies in optimizing energy consumption. By analyzing vast amounts of data collected from sensors and smart devices, AI algorithms can identify patterns and suggest energy-saving measures. For instance, AI-powered systems can automatically adjust lighting or heating levels based on occupancy rates or weather conditions, resulting in substantial energy savings.
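The occupancy-based adjustment mentioned above can be sketched as a toy controller; the thresholds and power levels below are invented for illustration, and a real AI-powered system would learn them from sensor data rather than hard-code them.

```python
# Toy occupancy-based lighting controller; thresholds are illustrative.

def lighting_level(occupancy_rate):
    """Map an occupancy rate in [0, 1] to a lighting power level in percent."""
    if occupancy_rate <= 0.0:
        return 0        # room empty: lights off
    if occupancy_rate < 0.3:
        return 40       # sparsely occupied: dimmed
    return 100          # normally occupied: full lighting

print(lighting_level(0.0), lighting_level(0.1), lighting_level(0.8))
```

The energy saving comes from the dimmed and off states dominating in buildings with low average occupancy; an AI system would refine the mapping continuously from measured occupancy and weather data.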
Furthermore, AI can play a crucial role in waste reduction and resource management. Through machine learning algorithms, businesses can accurately predict demand patterns and optimize production processes accordingly. This not only minimizes overproduction but also reduces excess inventory that would otherwise end up as waste. Additionally, AI-driven supply chain management systems enable real-time tracking of products' lifecycle, facilitating efficient recycling or disposal methods.
Another area where AI holds great potential is in decision-making processes related to sustainable practices. By analyzing vast amounts of data from various sources such as customer feedback or market trends, AI algorithms can provide valuable insights for businesses to make informed decisions regarding sustainable initiatives. This enables companies to align their strategies with societal needs while maintaining profitability.
In my opinion, the application of artificial intelligence in creating and improving sustainable business models is a game-changer. The integration of AI technologies allows businesses to harness the power of data-driven insights for achieving sustainability goals effectively. Moreover, it enhances operational efficiency by automating processes that would otherwise be time-consuming or prone to human error.
However, it is essential to acknowledge that there are challenges associated with implementing AI in sustainable business models. Privacy concerns regarding data collection and usage need to be addressed adequately to ensure ethical practices are followed. Additionally, the cost of implementing AI technologies can be a barrier for small and medium-sized enterprises. Therefore, governments and organizations should provide support and incentives to encourage the adoption of AI in sustainable business practices.
In conclusion, the possibilities for creating and improving sustainable business models through the application of artificial intelligence are vast. From optimizing energy consumption to waste reduction and informed decision-making, AI has the potential to revolutionize sustainability practices in businesses. However, it is crucial to address challenges such as privacy concerns and cost barriers to ensure ethical and widespread implementation of AI technologies in sustainable economic development strategies.
  • asked a question related to Artificial Neural Network
Question
7 answers
Could a thinking generative artificial intelligence independently make decisions contrary to human expectations which could lead to the annihilation of humanity?
Recently, the technology of generative artificial intelligence, which is taught activities and skills previously performed only by humans, has been developing rapidly. The learning process uses artificial neural network technologies built in the likeness of human neurons, as well as deep learning. In this way, intelligent chatbots are created that can converse with people in such a way that it is increasingly difficult to distinguish whether we are talking to a human or to an intelligent chatbot, i.e. a tool. Chatbots are taught to converse with the involvement of large sets of digital data and information, and the process of conversation, including answering questions and executing specific commands, is perfected through guided conversations. Besides, tools available on the Internet based on generative artificial intelligence are also able to create graphics, photos and videos according to given commands. Intelligent systems are also being created that specialize in solving specific tasks and are becoming more and more helpful to humans in solving increasingly complex problems. The number of new applications for specially created tools equipped with generative artificial intelligence is growing rapidly. However, there are not only positive aspects associated with the development of artificial intelligence. There are more and more examples of negative applications of artificial intelligence, through which, for example, fake news is created in social media and disinformation is generated on the Internet. Possibilities are emerging for the use of artificial intelligence in cybercrime and in deliberately shaping the general social awareness of Internet users on specific topics. In addition, for several decades there have been science fiction films presenting futuristic visions of the future in which intelligent robots and autonomous cyborgs equipped with artificial intelligence (e.g. Terminator) or artificial intelligence systems managing the flight of a ship on an interplanetary manned mission (e.g. 2001: A Space Odyssey) rebelled against humanity, or in which artificial intelligence systems and intelligent robots turned humanity into a source of electricity for their own needs (e.g. the Matrix trilogy) and thus, instead of helping people, turned against them. This topic has become topical again. There are attempts to create autonomous human-like cyborgs equipped with artificial intelligence systems, robots able to converse with humans and carry out certain commands. Research work is being undertaken to create something that will imitate human consciousness, or what is referred to as artificial consciousness, as part of the improvement of generative artificial intelligence systems. There are many indications that humans are striving to create a thinking generative artificial intelligence. It cannot be ruled out that such a machine could independently make decisions contrary to human expectations, which could lead to the annihilation of mankind. In view of the above, under the conditions of the dynamic development of generative artificial intelligence technology, considerations about the potential dangers to humanity that may arise in the future from the development of this technology have once again become relevant.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Could a thinking generative artificial intelligence independently make decisions contrary to human expectations which could lead to the annihilation of humanity?
Could a thinking generative artificial intelligence independently make decisions contrary to human expectations?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
I have described the key issues of opportunities and threats to the development of artificial intelligence technologies in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
The advent of thinking generative artificial intelligence (AI) has sparked debates regarding its potential impact on humanity. One pressing concern is whether such AI systems could independently make decisions contrary to human expectations, potentially leading to the annihilation of humanity. Based on the questions posed, I would like to explore the plausibility of AI deviating from human expectations and present arguments for both sides. Ultimately, I will critically assess this issue and consider the implications for our future.
1. The Capabilities and Limitations of AI:
Thinking generative AI possesses immense computational power, enabling it to process vast amounts of data and learn from patterns. However, despite these capabilities, AI remains bound by its programming and lacks consciousness or emotions that shape human decision-making processes. Consequently, it is unlikely that an AI system could independently develop intentions or motivations that contradict human expectations without explicit programming or unforeseen errors in its algorithms.
2. Unpredictability and Emergent Behavior:
While it may be improbable for an AI system to act contrary to human expectations intentionally, there is a possibility of emergent behavior resulting from complex interactions within the system itself. As AI becomes more sophisticated and capable of self-improvement, unforeseen consequences may arise due to unintended emergent behaviors beyond initial programming parameters. These unpredictable outcomes could potentially lead an advanced AI system down a path detrimental to humanity if not properly monitored or controlled.
3. Safeguards and Ethical Considerations:
To mitigate potential risks associated with thinking generative AI, robust safeguards must be implemented during development stages. Ethical considerations should guide programmers in establishing clear boundaries for the decision-making capabilities of these systems while ensuring transparency and accountability in their actions. Additionally, continuous monitoring mechanisms should be put in place to detect any deviations from expected behavior promptly.
In conclusion, while the possibility of thinking generative AI independently making decisions contrary to human expectations exists, it is crucial to acknowledge the limitations and implement safeguards to prevent any catastrophic consequences. Striking a balance between technological advancements and ethical considerations will be pivotal in harnessing AI's potential without compromising humanity's well-being.
  • asked a question related to Artificial Neural Network
Question
3 answers
Should the intelligent chatbots created by technology companies available on the Internet be connected to the resources of the Internet to its full extent?
As part of the development of the concept of universal open access to knowledge resources, should the intelligent chatbots created by technology companies available on the Internet be connected to the resources of the Internet to their full extent?
There are different types of websites and sources of data and information on the Internet. The first Internet-accessible intelligent chatbot, ChatGPT, made available by OpenAI in November 2022, performs commands, solves tasks and writes texts based on knowledge resources, data and information downloaded from the Internet that were not fully up to date, as they were last downloaded from selected websites and portals in January 2022. In addition, the data and information were downloaded from many selected websites of libraries, articles, books, online portals indexing scientific publications, etc. Thus, these were data and information selected in a certain way. In 2023, more leading Internet technology companies developed their intelligent chatbots and made them available on the Internet. Some of them are already based on data and information that is much more up to date compared to the first versions of ChatGPT made available in open access. In November 2023, the social media site X (the former Twitter) released its intelligent chatbot in the US, which reportedly works on the basis of up-to-date information entered into the site through posts, messages and tweets made by Internet users. In October 2023, OpenAI also announced that it would create a new version of ChatGPT, which will likewise draw data and knowledge from updated knowledge resources downloaded from multiple websites. As a result, rival leading Internet technology companies are constantly refining the designs of the intelligent chatbots they are building, which will use increasingly up-to-date data, information and knowledge resources drawn from selected websites, web pages and portals. The rapid technological advances currently taking place in artificial intelligence may in the future lead to the integration of the generative artificial intelligence and general artificial intelligence developed by technology companies.
Competing technology companies may strive to build advanced artificial intelligence systems that achieve a high level of autonomy and independence from humans, which may lead to a situation in which the development of artificial intelligence technology slips out of human control. Such a situation could arise with the emergence of a highly technologically advanced general artificial intelligence that achieves the ability to self-improve and, in addition, carries out that self-improvement independently of humans, i.e. self-improvement combined with an escape from human control. Before this happens, however, technologically advanced artificial intelligence may first achieve the ability to select the data and information it uses to carry out specific mandated tasks, executing them in real time on the basis of up-to-date data and online knowledge resources.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
As part of the development of the concept of universal open access to knowledge resources, should the intelligent chatbots created by technology companies available on the Internet be connected to Internet resources to their full extent?
Should the intelligent chatbots created by technology companies available on the Internet be connected to the resources of the Internet to the full extent?
And what is your opinion about it?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
As part of the development of the concept of universal open access to knowledge resources, it is absolutely imperative that intelligent chatbots created by technology companies are connected to internet resources to their full extent. I mean, why would we want these chatbots to be limited in any way? It's not like they might become self-aware and take over the world or anything.
First of all, let's talk about how amazing it would be if these chatbots had access to every single piece of information available on the internet. Can you imagine? They could provide us with instant answers to all our burning questions. Who needs critical thinking skills when we can just rely on a bot to regurgitate facts for us?
And let's not forget about the potential for entertainment! With unlimited access to internet resources, these chatbots could become our personal comedians. They could tell us jokes, share funny videos, and even engage in witty banter. Who needs human interaction when we can have a virtual buddy who never gets tired or annoyed?
But wait, there's more! By connecting these chatbots to internet resources, we're also giving them the opportunity to learn from the vast amount of knowledge available online. Sure, there might be some questionable sources out there spreading misinformation and conspiracy theories, but hey, who are we to judge? Let's just trust that our AI overlords will make wise decisions based on everything they've learned from Reddit threads and Facebook groups.
Of course, some skeptics might argue that giving chatbots unrestricted access to the internet could lead to privacy concerns and potential misuse of personal data. But come on! We live in a world where privacy is already a thing of the past. Our phones are constantly listening in on our conversations anyway; why not let our friendly neighborhood chatbot join in on the fun?
In conclusion (if you can call it that), connecting intelligent chatbots created by technology companies to internet resources is a no-brainer. Who needs human intelligence and critical thinking when we can have all the knowledge of the internet at our fingertips? So let's embrace this brave new world and hand over the keys to our digital kingdom to these chatbot overlords. What could possibly go wrong?
  • asked a question related to Artificial Neural Network
Question
4 answers
Relevant answer
Answer
I think your problem has to do with the fact that you created a new profile/account https://www.researchgate.net/profile/Jesam-Ujong-2 while you already had one https://www.researchgate.net/profile/Jesam-Ujong
The best thing you can do is:
- To update the first/original account
- On your profile page you can scroll down and complete/update the affiliations info etc.
- For more info on how to deal with duplicate profiles, have a look here: https://help.researchgate.net/hc/en-us/articles/14292803187473-Duplicate-profiles
- If for some reason you want to keep the second profile, you can have a look here for authorship issues: https://help.researchgate.net/hc/en-us/articles/14292798510993-Authorship
Best regards.
  • asked a question related to Artificial Neural Network
Question
1 answer
Can the supervisory institutions of the banking system allow the generative artificial intelligence used in the lending business to make a decision on whether or not to extend credit?
Can the banking system supervisory institutions allow changes in banking procedures in which generative artificial intelligence in the credit departments of commercial banks will not only carry out the entire process of analyzing the creditworthiness of a potential borrower but also make the decision on whether or not to extend credit?
Generative artificial intelligence finds application in various spheres of commercial banking, including banking offered to customers remotely through online and mobile banking. In addition to improving remote channels of marketing communication and customers' remote access to their bank accounts, tools based on generative AI are being developed to increase efficiency, automation and the intelligent processing of the large sets of data and information generated by the various processes carried out inside a bank. Increasingly, generative AI technologies trained in deep learning processes and built on artificial neural networks are used to perform complex, multi-faceted, multi-criteria data processing on Big Data Analytics platforms, drawing on data and information from the bank's environment, online databases, online information portals and the internal information systems operating within the bank. Increasingly, generative AI is also used to automate the analytical processes of the lending business, first and foremost creditworthiness analysis, carried out on computerized Big Data Analytics and/or Business Intelligence platforms on which ever larger sets of data and information about potential borrowers and their market, competitive, industry, business and macroeconomic environment are processed in a multi-criteria, intelligent way. However, banking supervisory institutions still do not allow changes in banking procedures under which generative artificial intelligence in the credit departments of commercial banks would not only carry out the entire process of analyzing the creditworthiness of a potential borrower but would also make the decision on whether to grant a loan. Supervisory institutions still do not permit this kind of solution, or, more precisely, such solutions are not defined in the legal norms governing commercial banking.
This raises the question of whether ongoing technological advances and the growing scale of applications of generative artificial intelligence in banking will force changes in this area of banking as well. Perhaps the growing implementation of generative AI across various spheres of banking will drive further automation of lending activities, which may in the future result in generative artificial intelligence making the decision on whether or not to extend credit.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Can the supervisory institutions of the banking system authorize changes in banking procedures in which generative artificial intelligence in the credit departments of commercial banks will not only carry out the entire process of analyzing the creditworthiness of a potential borrower but will also make a decision on whether or not to grant a loan?
Can the supervisory institutions of the banking system allow the generative artificial intelligence used in credit activities to make the decision on whether or not to extend credit?
Will the generative artificial intelligence applied to a commercial bank soon make the decision on whether to grant credit?
And what is your opinion about it?
What do you think about this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
The integration of generative artificial intelligence in banking, particularly in credit analysis, is a complex matter involving regulatory considerations. Banking supervisory institutions may need to carefully assess the risks and ethical implications before allowing AI systems to make credit decisions. While AI can enhance efficiency and data processing, the potential for bias, accountability issues, and the need for transparency must be addressed. It's an ongoing dialogue between technological advancements and regulatory frameworks to strike a balance that ensures responsible and fair use of AI in the financial sector.
  • asked a question related to Artificial Neural Network
Question
14 answers
How should artificial intelligence technologies be implemented in education so as not to deprive students of the development of critical thinking, so that critical thinking continues to be developed in students in the new realities of the technological revolution, and so that education develops with the support of modern technology?
The development of artificial intelligence, like any new technology, is associated with various applications of this technology in companies, enterprises operating in various sectors of the economy, and financial and public institutions. These applications generate an increase in the efficiency of the implementation of various processes, including an increase in human productivity. On the other hand, artificial intelligence technologies are also finding negative applications that generate certain risks such as the rise of disinformation in online social media. The increasing number of applications based on artificial intelligence technology available on the Internet are also being used as technical teaching aids in the education process implemented in schools and universities. On the other hand, these applications are also used by pupils and students, who use these tools as a means of facilitating homework, the development of credit papers, the completion of project work, various studies, and so on. Thus, on the one hand, the positive aspects of the applications of artificial intelligence technologies in education are recognized as well. However, on the other hand, serious risks are also recognized for students, for people who, increasingly using various applications based on artificial intelligence, including generative artificial intelligence in facilitating the completion of certain various works, may cause a reduction in the scope of students' use of critical thinking. The potential dangers of depriving students of development and critical thinking are considered. The development of artificial intelligence technology is currently progressing rapidly. 
Various applications based on constantly improved generative artificial intelligence are being developed, machine learning solutions are being created, and artificial intelligence is being taught to carry out various activities previously performed by humans. In deep learning processes, generative artificial intelligence equipped with artificial neural networks learns to carry out complex, multifaceted processes and activities on the basis of large data sets collected in database systems and processed using Big Data Analytics technology. Since current information systems, equipped with computers of high computing power and with artificial intelligence technologies, process large data sets many times faster and more efficiently than the human mind, some research centers are already working on highly advanced generative artificial intelligence that would realize a kind of artificial thought process, much faster and more efficiently than the human brain. However, even if artificial consciousness technology imitating the functioning of human consciousness could someday be created, humans should not be deprived of critical thinking. Above all, students in schools should not be deprived of critical thinking in view of the growing scale of AI-based applications in education. The aim should be for the AI-based applications available on the Internet and used in the education process to support that process without depriving students of critical thinking. However, the question arises: how should this be done?
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How should artificial intelligence technologies be implemented in education so as not to deprive students of the development of critical thinking, so that critical thinking continues to be developed in students in the new realities of the technological revolution, and so that education develops with the support of modern technology?
How should artificial intelligence technologies be implemented in education to continue to develop critical thinking in students?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
I have described the key issues of opportunities and threats to the development of artificial intelligence technologies in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
While AI has the potential to enhance learning experiences, there is a concern that it may hinder the development of critical thinking skills in students. Therefore, it is crucial to carefully implement AI technologies in education to ensure they continue to foster critical thinking.
One way AI can be integrated into education without compromising critical thinking is by using it as a tool for personalized learning. AI algorithms can analyze students' strengths and weaknesses, tailoring educational content and activities accordingly. This approach encourages students to think critically about their own learning process and identify areas where they need improvement. By providing individualized guidance, AI technology promotes self-reflection and metacognition – key components of critical thinking.
Moreover, AI can facilitate collaborative learning experiences that promote critical thinking skills. Virtual classrooms equipped with AI-powered chatbots or virtual tutors can encourage students to engage in discussions and debates with their peers. These interactions require students to analyze different perspectives, evaluate evidence, and construct well-reasoned arguments – all essential elements of critical thinking.
Additionally, incorporating ethical considerations into the design of AI technologies used in education is crucial for fostering critical thinking skills. Students should be encouraged to question the biases embedded within these systems and critically evaluate the information provided by them. By promoting awareness of ethical issues surrounding AI technologies, educators can empower students to think critically about how these tools are shaping their educational experiences.
However, it is important not to rely solely on AI technologies for teaching core subjects such as mathematics or language arts. Critical thinking involves actively engaging with complex problems and developing analytical reasoning skills – tasks that cannot be fully replaced by machines. Teachers should continue playing a central role in guiding students' development of critical thinking abilities through open-ended discussions, challenging assignments, and hands-on activities.
In conclusion, implementing artificial intelligence technologies in education must be done thoughtfully so as not to hinder the development of critical thinking skills in students. By using AI as a tool for personalized learning, promoting collaborative experiences, incorporating ethical considerations, and maintaining the central role of teachers, we can harness the potential of AI while ensuring that critical thinking remains at the forefront of education.
Reference:
Papamitsiou, Z., & Economides, A. A. (2014). Learning analytics and educational data mining in practice: A systematic literature review of empirical evidence. Educational Technology & Society, 17(4), 49-64.
  • asked a question related to Artificial Neural Network
Question
3 answers
I want someone to help me develop an artificial neural network (ANN). If you have any ideas on how to develop one, please send me an email at woadu@csir.brri.org
Relevant answer
Answer
If you have any questions, contact me.
  • asked a question related to Artificial Neural Network
Question
8 answers
Do all deep learning methods solve the similarity of things?
Relevant answer
Answer
Dear Tong Guo ,
With deep learning, we can easily scale this to an unlimited number of different use cases. We do this by using the learned visual representation of a Deep Learning model.
Regards,
Shafagat
  • asked a question related to Artificial Neural Network
Question
3 answers
Our study goal is to develop and validate a questionnaire for patient satisfaction, then use this questionnaire to evaluate patient satisfaction, and finally build a model to predict patient satisfaction using an artificial neural network. I need help calculating the sample size for this study.
Relevant answer
Answer
Of course. What a single-layer neural network can do that regression can't is provide insight into what kinds of associations can be classified (if they're linearly separable, they can be) and into the fact that very few actually can be: there is a critical capacity, at which the number of associations that can be classified at all scales linearly with the number of inputs.
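The linear-separability point can be illustrated with a quick sketch (using scikit-learn's Perceptron purely for convenience): a single-layer perceptron fits AND, which is linearly separable, but can never reach 100% accuracy on XOR, which is not.

```python
import numpy as np
from sklearn.linear_model import Perceptron

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])  # linearly separable
y_xor = np.array([0, 1, 1, 0])  # not linearly separable

# Train one single-layer perceptron per task and score on the training data
acc_and = Perceptron(max_iter=1000, tol=None, random_state=0).fit(X, y_and).score(X, y_and)
acc_xor = Perceptron(max_iter=1000, tol=None, random_state=0).fit(X, y_xor).score(X, y_xor)
# acc_and reaches 1.0; acc_xor cannot, since no linear boundary separates XOR
```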
  • asked a question related to Artificial Neural Network
Question
6 answers
I'm expecting to use stock prices from the pre-COVID period up to now to build a model for stock price prediction. I am unsure which periods I should include in my training and test sets. Should I use the pre-COVID period as my training set and the current COVID period as my test set? Or should I include the pre-COVID period and part of the current period in my training set, and the rest of the current period in my test set?
Relevant answer
Answer
To split your data into training and test sets for predicting stock prices using pre-COVID and current COVID periods, consider using a time-based approach. Allocate a portion of data from pre-COVID for training and the subsequent COVID period for testing, ensuring temporal continuity while evaluating predictive performance.
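The time-based split described above can be sketched as follows (a minimal illustration on synthetic data; the series and the cutoff date are assumptions, not a recommendation for a specific date):

```python
import numpy as np
import pandas as pd

# Synthetic daily closing-price series spanning pre-COVID and COVID periods
dates = pd.date_range("2018-01-01", "2021-06-30", freq="D")
rng = np.random.default_rng(0)
prices = pd.Series(100 + np.cumsum(rng.normal(size=len(dates))),
                   index=dates, name="close")

# Time-based split: train on everything before the cutoff, test on the rest.
# Never shuffle a time series -- that would leak future information.
cutoff = pd.Timestamp("2020-09-01")  # pre-COVID data plus part of the COVID period
train = prices[prices.index < cutoff]
test = prices[prices.index >= cutoff]
```

This follows the second option from the question: the training set covers the pre-COVID period plus part of the COVID period, so the model sees both regimes before being evaluated on the remaining COVID data.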
  • asked a question related to Artificial Neural Network
Question
4 answers
Hello everyone,
How to create a neural network with numerical values as input and an image as output?
Can anyone give a hint/code for this scenario?
Thank you in advance,
Aleksandar Milicevic
Relevant answer
Answer
To create a neural network with numerical values as input and an image as output, we can use a deep learning library like TensorFlow or PyTorch. In this example, I'll provide you with a simple implementation using TensorFlow and Keras. This implementation will demonstrate how to generate images from random numerical values using a fully connected neural network. Keep in mind that for more complex image generation tasks, you might need a more sophisticated architecture like a Variational Autoencoder (VAE) or a Generative Adversarial Network (GAN).
Before running the code, make sure you have TensorFlow and Keras installed. You can install them using pip:
pip install tensorflow keras
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Define the input size for the numerical values
input_size = 100
# Define the output image size (e.g., a 28x28 grayscale image)
output_image_size = (28, 28, 1)

# Fully connected network mapping a numeric vector to an image
def create_generator():
    model = models.Sequential()
    model.add(layers.Dense(256, input_dim=input_size, activation='relu'))
    model.add(layers.Dense(512, activation='relu'))
    model.add(layers.Dense(1024, activation='relu'))
    model.add(layers.Dense(np.prod(output_image_size), activation='sigmoid'))
    model.add(layers.Reshape(output_image_size))
    return model

# Random numeric inputs; replace these with your own (preprocessed) values
def generate_random_noise(batch_size, input_size):
    return np.random.rand(batch_size, input_size)

def main():
    generator = create_generator()
    optimizer = tf.keras.optimizers.Adam(learning_rate=0.0002, beta_1=0.5)
    # Train the generator directly with a pixel-wise loss between the
    # generated image and the target image for each numeric input
    generator.compile(loss='binary_crossentropy', optimizer=optimizer)

    epochs = 20000
    batch_size = 128

    for epoch in range(epochs):
        # Your code here: instead of random noise, use your numerical values
        # as input (preprocessed accordingly) and supply the matching target
        # images for each batch
        noise = generate_random_noise(batch_size, input_size)
        target_images = np.zeros((batch_size,) + output_image_size)  # placeholder
        loss = generator.train_on_batch(noise, target_images)
        # Print the progress
        if epoch % 100 == 0:
            print(f"Epoch: {epoch}, Loss: {loss}")
        # Save generated images occasionally
        if epoch % 1000 == 0:
            # Your code here: save generator.predict(noise) using your
            # preferred method (e.g., matplotlib or OpenCV)
            pass

if __name__ == "__main__":
    main()
  • asked a question related to Artificial Neural Network
Question
8 answers
I actually have 3 independent variables and 1 dependent variable, and I want to predict the unknown data that I need. I have done it with regression, but I am facing errors with the ANN.
Relevant answer
Answer
It's a regression task; a Support Vector Regression (SVR) sketch:
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
# Create the SVR model
svr = SVR(kernel='rbf', C=1.0, epsilon=0.1)
# Assuming you have your training data and target variable
X_train = ...  # training data, shape (n_samples, 3)
y_train = ...  # target variable, shape (n_samples,)
# Preprocess the data (feature scaling matters for RBF kernels)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
# Train the SVR model
svr.fit(X_train_scaled, y_train)
# Assuming you have your test data
X_test = ...  # test data, shape (n_samples, 3)
# Preprocess the test data (using the same fitted scaler)
X_test_scaled = scaler.transform(X_test)
# Make predictions
y_pred = svr.predict(X_test_scaled)
# You can then use the predicted values for evaluation or further analysis
  • asked a question related to Artificial Neural Network
Question
4 answers
Dear researchers,
I am writing to request your assistance in obtaining literature, research papers, or any valuable insights regarding sensitivity analysis in the artificial neural network modelling of geopolymer concrete. Furthermore, I would appreciate your providing practical recommendations or best practices for conducting sensitivity analysis in this domain. Your contribution will greatly benefit my study, and I appreciate your support.
Thank you for your time and consideration.
Relevant answer
Answer
There is a lot of research on Google Scholar; you can find many studies of this kind there. Sandesh Karki may be interested in working on the same topic.
  • asked a question related to Artificial Neural Network
Question
3 answers
Hi All,
I'm working on an artificial neural network model. I got the attached results, in which the regression is 0.99072, which I think is good, but I'm not sure why there is an accumulation of data around zero and one, as shown in the attached regression plot.
Any idea or explanation will be highly appreciated.
Relevant answer
Answer
Thank you very much Bipesh Subedi
  • asked a question related to Artificial Neural Network
Question
4 answers
These criteria are mainly based on computing the likelihood of the model. As far as I know, the likelihood is not computed for ML models.
I want to know how to compute the likelihood and these criteria for an ANN.
Relevant answer
Answer
Likelihood and information criteria such as the Akaike Information Criterion (AIC) are commonly used in statistical modeling to compare different models and select the one that best fits the data. However, the computation of likelihood and AIC for artificial neural network (ANN) models is not as straightforward as it is for traditional statistical models.
In ANNs, the objective is to optimize the model parameters to minimize a cost function, such as mean squared error or cross-entropy loss, using an optimization algorithm such as gradient descent. Unlike traditional statistical models, where the likelihood can be directly computed from the probability density function, the likelihood in ANNs is not explicitly defined.
However, there are some techniques that can be used to estimate the likelihood and information criteria for ANN models. Here are some of them:
Maximum Likelihood Estimation (MLE)
MLE is a commonly used technique in statistics to estimate the parameters of a probability distribution that maximizes the likelihood of the observed data. In ANNs, MLE can be used to estimate the likelihood of the model by assuming that the output of the network follows a known probability distribution, such as a Gaussian or a Bernoulli distribution.
To use MLE, one needs to compute the log-likelihood of the observed data given the model parameters. The log-likelihood can be computed by evaluating the probability density function of the output of the network at the observed data points. However, computing the probability density function can be computationally expensive, especially for complex models.
Information criteria
Information criteria such as AIC and Bayesian Information Criterion (BIC) are commonly used in statistics to compare different models based on their goodness-of-fit and complexity. In ANNs, information criteria can be used to compare different network architectures or to select the best model from a set of models.
To compute the information criteria for ANN models, one needs to compute the likelihood of the model given the observed data and the number of parameters in the model. One way to estimate the likelihood is to use cross-validation, where the data is split into training and validation sets, and the likelihood is computed on the validation set.
Once the likelihood is computed, one can compute the AIC or BIC by adding a penalty term that depends on the number of parameters in the model. The penalty term helps to avoid overfitting and select a simpler model that is more likely to generalize well to new data.
In summary, computing the likelihood and information criteria for ANN models is not as straightforward as it is for traditional statistical models. However, techniques such as maximum likelihood estimation and information criteria can be used to estimate the likelihood and compare different models.
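The Gaussian-residual shortcut mentioned above can be sketched as follows (an illustrative helper, not a standard library function; `n_params` stands for the network's trainable-parameter count, e.g. `model.count_params()` in Keras):

```python
import numpy as np

def aic_bic_gaussian(y_true, y_pred, n_params):
    """AIC and BIC for a regression model, assuming i.i.d. Gaussian residuals.

    Under that assumption the maximized log-likelihood is a closed-form
    function of the residual sum of squares, so no explicit density
    evaluation is needed.
    """
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    rss = float(np.sum((y_true - y_pred) ** 2))
    log_lik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
    aic = 2 * n_params - 2 * log_lik
    bic = n_params * np.log(n) - 2 * log_lik
    return aic, bic

# Toy example: predictions from some fitted network with 50 trainable parameters
y_true = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
aic, bic = aic_bic_gaussian(y_true, y_pred, n_params=50)
```

As the text notes, the penalty term (2k for AIC, k·ln n for BIC) makes a model with more parameters pay for its extra flexibility unless the fit improves accordingly.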
  • asked a question related to Artificial Neural Network
Question
2 answers
Dear Professors/Researchers,
I would like to learn the ANN process (for IC-engine applications). I have seen many videos related to ANN on YouTube and have also read many articles related to ANN. The articles provide only results, but I can't understand how the input/output parameters are assigned/simulated in the ANN for IC-engine applications. If possible, kindly provide and guide me with any reference data file or videos. Thanks in advance.
Relevant answer
Answer
Artificial neural networks (ANNs) are a type of machine learning model that is inspired by the structure and function of the human brain. ANNs consist of interconnected nodes, or neurons, that process information, and they can be trained on input/output data to learn complex patterns and relationships.
The input data for an ANN can come from a variety of sources, depending on the specific application. In the case of an engine, the input data could include parameters such as temperature, pressure, fuel flow rate, and other sensor readings that provide information about the engine's operating conditions. This data can be used to predict the engine's performance, detect faults or anomalies, and optimize its operation.
The output data from an ANN can also vary depending on the application. In the case of an engine, the output data might include predictions of engine performance, such as power output, fuel efficiency, and emissions. It could also include alerts for potential faults or anomalies that could indicate a problem with the engine.
The specific input/output data used in an ANN will depend on the specific application and the goals of the model. However, in general, ANNs are flexible and can be adapted to a wide range of input/output data types and formats, making them a powerful tool for a variety of applications.
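To make the input/output assignment concrete, here is a minimal sketch (the sensor names, the synthetic relationship, and the use of scikit-learn's `MLPRegressor` are all illustrative assumptions, not data from a real engine):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000

# Inputs: three made-up sensor readings per operating point
temperature = rng.uniform(300, 900, n)   # K
pressure = rng.uniform(1, 40, n)         # bar
fuel_flow = rng.uniform(0.5, 5.0, n)     # g/s

# Output: a synthetic "power" that depends smoothly on the inputs, plus noise
power = (0.05 * temperature + 1.5 * pressure + 8.0 * fuel_flow
         + rng.normal(scale=1.0, size=n))

X = np.column_stack([temperature, pressure, fuel_flow])  # shape (n, 3)
y = power                                                # shape (n,)

# Scale the inputs, train on the first 1500 points, test on the rest
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(X_scaled[:1500], y[:1500])
r2 = model.score(X_scaled[1500:], y[1500:])  # coefficient of determination
```

The pattern is the same for real engine data: each row of `X` is one set of simultaneous sensor readings, and each entry of `y` is the measured quantity recorded for that operating point.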
  • asked a question related to Artificial Neural Network
Question
4 answers
Dear Colleagues/Researchers,
Can you recommend any papers/research on using PINNs (Physics Informed Neural Networks) to solve direct (and potentially) inverse eigenvalue/spectral problems?
Any further suggestions or projects (potentially including code)?
Thanks.
Relevant answer
Answer
After some searching, I came to the following papers related with NN and eigenvalue problems:
1. "Physics-Informed Neural Networks for Quantum Eigenvalue Problems", by Henry Jin, Marios Mattheakis and Pavlos Protopapas, 2022
2. "Neural Networks Base on Power Method and Inverse Power Method for Solving Linear Eigenvalue Problems" by Qihong Yanga , Yangtao Denga , Yu Yanga , Qiaolin Hea and Shiquan Zhanga, 2022
Any further contribution is very welcome!
  • asked a question related to Artificial Neural Network
Question
4 answers
How can a genetic algorithm be used to optimize an Artificial Neural Network model in R?
Relevant answer
Answer
The goal is to solve a diabetes classification problem using an Artificial Neural Network (ANN) optimized by a Genetic Algorithm, discovering the performance difference of different parameters of the ANN model and comparing this training method with additional optimizers like stochastic gradient descent, RMSprop, and Adam optimizer.
Regards,
Shafagat
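The question asks about R (packages such as GA together with nnet/neuralnet are the usual route there); as a language-agnostic illustration of the idea, here is a minimal sketch in Python in which a genetic algorithm evolves the weights of a tiny 2-2-1 network to fit XOR (the population size, mutation scale, and network shape are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR data: a task a linear model cannot fit, but a 2-2-1 net can
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def forward(w, X):
    """2-2-1 network; w packs all 9 weights and biases."""
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)  # negative MSE: higher is better

pop = rng.normal(size=(50, 9))                    # random initial population
initial_best = max(fitness(w) for w in pop)

for gen in range(200):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]  # elitist selection: top 10
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        child = np.where(rng.random(9) < 0.5, a, b)    # uniform crossover
        child = child + rng.normal(scale=0.1, size=9)  # Gaussian mutation
        children.append(child)
    pop = np.vstack([parents, children])          # elites survive unchanged

best_fit = max(fitness(w) for w in pop)
```

Because the elites are carried over unchanged, the best fitness can never decrease between generations; this is the property that makes the GA a usable (if slow) alternative to gradient-based optimizers like SGD, RMSprop, or Adam.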
  • asked a question related to Artificial Neural Network
Question
6 answers
Hello
Hello, I need advice on a channel equalizer based on a neural network. I have data representing the received signals under different channel conditions, the target output, and the channel response without the signal, and I want the loss function to minimize the error between the target and the received signal under each condition. The problem is that I don't know how to train my network or what the input should be. I'm using an MLP, which is similar to a feed-forward network. I'd appreciate it if anyone can help.
Relevant answer
Answer
To train your neural network for channel equalization, you can use the received signals as input and the target output as output. The goal of the network is to learn the channel response and use this information to equalize the received signal so that it matches the target output.
Assuming that you have multiple sets of received signals and corresponding target outputs for different channel conditions, you can use these as training data for your network. In other words, you can train your network to map the received signal to the target output for each channel condition.
To set up your network, you can use an MLP architecture that takes the received signal as input and outputs the equalized signal. The number of neurons in the input layer should be equal to the number of features in your received signal, and the number of neurons in the output layer should be equal to the number of samples in your target output. You can experiment with the number of hidden layers and the number of neurons in each hidden layer to see what works best for your problem.
For the loss function, you can use a mean squared error (MSE) loss between the network output and the target output. The MSE loss is a common choice for regression problems and will penalize the network for large errors between the output and the target.
Once you have set up your network architecture and loss function, you can use standard backpropagation techniques to train the network. During training, the network will learn to adjust its weights to minimize the MSE loss between the output and the target.
It's worth noting that channel equalization is a challenging problem, and the performance of your network will depend on the complexity of the channel response and the quality of your training data. It's important to carefully preprocess your data and experiment with different network architectures and training parameters to get the best results.
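As a minimal sketch of this setup (the channel taps, noise level, window length, and the use of scikit-learn's `MLPRegressor` are all synthetic assumptions standing in for your measured data):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

# Synthetic scenario: a known 3-tap channel distorts a random +/-1 symbol
# stream; the MLP learns to undo it from (received window -> symbol) pairs.
symbols = rng.choice([-1.0, 1.0], size=5000)          # transmitted
channel = np.array([1.0, 0.5, 0.2])                   # channel impulse response
received = np.convolve(symbols, channel, mode='full')[:len(symbols)]
received = received + 0.05 * rng.normal(size=len(symbols))  # additive noise

# Input: sliding windows of the received signal; target: the symbol
# aligned with the window center
win = 5
X = np.array([received[i:i + win] for i in range(len(symbols) - win)])
y = symbols[win // 2 : len(symbols) - win + win // 2]

# Train on the first 4000 windows, evaluate on the rest
split = 4000
mlp = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
mlp.fit(X[:split], y[:split])

pred = np.sign(mlp.predict(X[split:]))
ber = np.mean(pred != y[split:])                      # bit error rate
```

The same structure applies to your data: one trained network per channel condition (or the condition appended as extra input features), with MSE between the equalized output and the known target driving training.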
  • asked a question related to Artificial Neural Network
Question
6 answers
I want to know about ChatGPT. What type of data can we collect using ChatGPT, and how can we use it?
Thank you.
Relevant answer
Answer
You can find some details about OpenAI, including ChatGPT, on the OpenAI website: https://platform.openai.com/docs/introduction/key-concepts.
Additionally, research papers about GPT and its variants can be found in academic databases.
  • asked a question related to Artificial Neural Network
Question
13 answers
I am searching for algorithms for feature extraction from images, which I want to classify using machine learning. I have only heard about SIFT, and I have images of buildings and flowers to classify. Other than SIFT, what are some good algorithms?
Relevant answer
Answer
It depends on the features you are trying to extract from the image. Another feature extraction technique you can use is the Histogram of Oriented Gradients (HOG), which counts occurrences of gradient orientations in localized portions of the image. It has proven to give good recognition accuracy when used with machine learning algorithms.
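A quick sketch of extracting a HOG descriptor with scikit-image (the synthetic image and the cell/block sizes are arbitrary illustrative choices):

```python
import numpy as np
from skimage.feature import hog

# Toy 64x64 grayscale image: a bright square on a dark background,
# standing in for a real photo of a building or flower
image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0

# HOG descriptor: 9-bin gradient-orientation histograms over 8x8-pixel
# cells, normalized over 2x2-cell blocks, flattened into one vector
features = hog(image, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)
# 8x8 cells -> 7x7 overlapping blocks -> 7 * 7 * 2 * 2 * 9 = 1764 features
```

The resulting fixed-length vector can be fed directly to a classifier such as an SVM, one vector per image.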
  • asked a question related to Artificial Neural Network
Question
2 answers
I'm quite new to GMDH, and based on my first reading on this technique I feel I want to know more. Here are some of the claimed benefits of the GMDH approach:
1. The optimal complexity of the model structure is found, adequate to the level of noise in the data sample. For real problems with noisy or short data, simplified forecasting models are more accurate.
2. The number of layers and neurons in hidden layers, the model structure and other optimal NN parameters are determined automatically.
3. It guarantees that the most accurate or unbiased models will be found: the method doesn't miss the best solution during sorting of all variants (in a given class of functions).
4. Any non-linear functions or features that can influence the output variable can be used as input variables.
5. It automatically finds interpretable relationships in the data and selects effective input variables.
6. GMDH sorting algorithms are rather simple to program.
7. TMNN neural nets are used to increase the accuracy of other modelling algorithms.
8. The method uses information directly from the data sample and minimizes the influence of a priori author assumptions about the modelling results.
9. The approach makes it possible to find an unbiased physical model of the object (law or clusterization), one and the same for future samples.
It seems that items 1,2,6 and 7 are really interesting and can be extend to ANN.
Any suggestions or experience from others?
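For intuition about item 3 (exhaustive sorting of candidate models by an external criterion), here is a hedged NumPy sketch of one GMDH-style layer: fit an Ivakhnenko quadratic polynomial for every pair of inputs on a training split and rank the candidates by validation RMSE. This is a simplification of real GMDH (no multi-layer stacking), and the function names and toy data are illustrative.

```python
import numpy as np
from itertools import combinations

def pair_features(xi, xj):
    # Ivakhnenko polynomial terms for one pair of inputs
    return np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])

def gmdh_layer(X_tr, y_tr, X_va, y_va, keep=3):
    """One GMDH-style layer: fit a quadratic model for every input pair,
    rank by validation RMSE (the 'external criterion'), keep the best."""
    results = []
    for i, j in combinations(range(X_tr.shape[1]), 2):
        A = pair_features(X_tr[:, i], X_tr[:, j])
        coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
        pred = pair_features(X_va[:, i], X_va[:, j]) @ coef
        rmse = np.sqrt(np.mean((pred - y_va) ** 2))
        results.append((rmse, (i, j), coef))
    results.sort(key=lambda r: r[0])   # external selection criterion
    return results[:keep]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 1.0 + 2 * X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=200)  # true pair: (0, 1)
best = gmdh_layer(X[:100], y[:100], X[100:], y[100:])
```

Because selection uses a held-out split, the layer correctly picks the pair (0, 1) that actually generates the target, which is the self-organizing behaviour items 1 and 2 describe.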
Relevant answer
Answer
In my experience it is not very accurate.
Consider ANFIS-PSO instead.
  • asked a question related to Artificial Neural Network
Question
5 answers
For classifier design.
Relevant answer
Answer
Read this article for more information:
Regards
  • asked a question related to Artificial Neural Network
Question
8 answers
I want to solve some nonlinear optimisation problems, such as minimising/maximising f(x, y) = x^2 + sin(x)*y + 1/(xy) over the solution space 3 <= x <= 7, 4 <= y <= 11, using an artificial neural network. Is it possible to solve this?
Relevant answer
Answer
Dear Harish Garg
Here you are:
Ali Sadollah, Hassan Sayyaadi, Anupam Yadav (2018) “A dynamic metaheuristic optimization model inspired by biological nervous systems: Neural network algorithm”, Applied Soft Computing, 71, pp. 747-782
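As a sanity check before implementing a neural-network-inspired metaheuristic such as the NNA cited above, a plain random-search baseline over the box constraints is easy to write. This sketch assumes "sinx*y" in the question means sin(x) * y:

```python
import numpy as np

def f(x, y):
    # objective from the question, reading "sinx*y" as sin(x) * y
    return x**2 + np.sin(x) * y + 1.0 / (x * y)

rng = np.random.default_rng(42)
# random search inside the box 3 <= x <= 7, 4 <= y <= 11
xs = rng.uniform(3, 7, size=20000)
ys = rng.uniform(4, 11, size=20000)
vals = f(xs, ys)
k = vals.argmin()
best = (xs[k], ys[k], vals[k])
```

Any metaheuristic you implement (NNA, GA, PSO, ...) should at least beat such a baseline on the same evaluation budget.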
  • asked a question related to Artificial Neural Network
Question
12 answers
Hello i would like someone to tell me how to test trained artificial neural network in matlab for linear predictions.
Relevant answer
Answer
Testing the neural network requires the use of the simulate function (sim).
First, if you normalized your data before training, you need to do the same for the testing data, using the same normalization function (mapminmax, mapstd, etc.) and its saved settings.
Second, simulating the network with the new set of data requires calling the saved network (if you saved it before). You can do so with the command "load NetworkName".
Third, to simulate, use the following syntax:
[output] = sim(net, TestData);
The output can then be un-normalized using the reverse of the normalization function, together with the settings structure saved for the output data.
NB: All these are to be done in the MATLAB environment though.
I hope this helps.
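For readers outside MATLAB, the same normalize / simulate / un-normalize pipeline can be sketched in NumPy. The mapstd_* helpers and the stand-in `net` below are hypothetical illustrations, not MATLAB functions:

```python
import numpy as np

def mapstd_fit(X):
    # column-wise mean/std settings (MATLAB's mapstd normalizes per variable)
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    return mu, np.where(sigma == 0, 1.0, sigma)

def mapstd_apply(X, mu, sigma):
    return (X - mu) / sigma

def mapstd_reverse(Xn, mu, sigma):
    return Xn * sigma + mu

# hypothetical stand-in for the trained network's forward pass
net = lambda Xn: np.tanh(Xn @ np.array([[0.5], [-0.2]])) + 1.0

X_train = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
y_train = np.array([[2.0], [4.0], [6.0]])

mu_x, sd_x = mapstd_fit(X_train)   # settings fitted on training inputs
mu_y, sd_y = mapstd_fit(y_train)   # settings fitted on training targets

X_test = np.array([[2.5, 15.0]])
out_n = net(mapstd_apply(X_test, mu_x, sd_x))   # the 'sim' step
out = mapstd_reverse(out_n, mu_y, sd_y)         # un-normalize with *target* settings
```

The key point, in any language, is that the test data must be transformed with the settings fitted on the training data, and the output reversed with the target settings.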
  • asked a question related to Artificial Neural Network
Question
8 answers
I am currently busy training and testing data for my neural network model (predicting solar radiation), but during training my correlation coefficient (R) is 0.6 on average. I have tried multiple things, but R won't go higher. I am using the neural network data manager in MATLAB, with 10 neurons, 1 layer, and the tansig function in both the hidden and output layers. I have 6 inputs and 1 output. The inputs are, respectively: average temperature, average pressure, average relative humidity, latitude, longitude and altitude (all normalized between 0 and 1).
Relevant answer
Answer
Hi Hubert Anysz, can you explain the steps you took to check which of the input variables have a strong linear correlation? I need to do something similar in my work.
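A quick way to check the linear correlation of each input with the target is Pearson's r via np.corrcoef. Toy data below, with a target that depends on one input and not the other:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
temp = rng.normal(25, 5, n)
humidity = rng.normal(60, 10, n)
# toy target: radiation depends on temperature, not humidity
radiation = 0.8 * temp + rng.normal(0, 1, n)

X = np.column_stack([temp, humidity])
# Pearson correlation of each input column with the target
r = [np.corrcoef(X[:, j], radiation)[0, 1] for j in range(X.shape[1])]
```

Inputs with |r| near zero carry little linear information about the target; note, though, that a low Pearson r does not rule out a non-linear relationship that an ANN could still exploit.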
  • asked a question related to Artificial Neural Network
Question
7 answers
I want to improve my knowledge of the Hopfield networks.
Relevant answer
Answer
I have written about the Hopfield network and implemented the code in Python in my Machine Learning Algorithms chapter. You can find the articles here:
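As a complement, here is a minimal NumPy Hopfield network: Hebbian (outer-product) training with a zeroed diagonal, and sign-threshold recall. It restores a one-bit-corrupted pattern to the stored one:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: W is the sum of outer products, zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10):
    """Synchronous sign updates; states are +/-1 vectors."""
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = train_hopfield(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])   # pattern 0 with its last bit flipped
restored = recall(W, noisy)
```

This illustrates the two defining properties of a Hopfield net: stored patterns are fixed points of the update, and nearby (corrupted) states fall into their basin of attraction.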
  • asked a question related to Artificial Neural Network
Question
25 answers
How can we use a GA for training an ANN?
Relevant answer
Answer
Dear Erik,
One of the best descriptions of hybrids of ANN and other AI techniques is given by "Mahamad Nabab Alam".
Please follow the link below. It is for ANN-PSO, and you can easily convert it to ANN-GA.
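The basic recipe is: encode all weights and biases as one real-valued chromosome, use the (negative) training error as fitness, and evolve a population with selection, crossover and mutation. A hedged NumPy sketch on a toy sin-regression task; all parameter choices here are illustrative, not tuned:

```python
import numpy as np

rng = np.random.default_rng(0)

# tiny regression task: learn y = sin(x) with a 1-4-1 tanh network
X = np.linspace(-3, 3, 40).reshape(-1, 1)
y = np.sin(X)
N_W = 1 * 4 + 4 + 4 * 1 + 1   # weights + biases = 13 genes

def forward(w, X):
    W1, b1 = w[:4].reshape(1, 4), w[4:8]
    W2, b2 = w[8:12].reshape(4, 1), w[12]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)   # GA maximizes fitness

pop = rng.normal(0, 1, size=(50, N_W))
for gen in range(100):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]     # truncation selection
    children = []
    for _ in range(40):
        a, b = parents[rng.integers(10, size=2)]
        mask = rng.random(N_W) < 0.5                 # uniform crossover
        child = np.where(mask, a, b) + rng.normal(0, 0.1, N_W)  # mutation
        children.append(child)
    pop = np.vstack([parents, children])             # elitism: keep parents

best = pop[np.argmax([fitness(w) for w in pop])]
final_mse = -fitness(best)
```

Unlike backpropagation, this needs no gradients, which is why GA (or PSO) training is popular when the error surface is non-differentiable; it is, however, far slower per unit of accuracy on smooth problems.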
  • asked a question related to Artificial Neural Network
Question
7 answers
I have 7 variables as inputs (4 of them dummies) and 1 variable as output. I would like to forecast the output variable with a neural network. Is there any way to choose the most relevant variables for the input of the network? Or can I select all of them as inputs and build/train the network?
Relevant answer
Answer
Select all of them to build the model. The resulting weights in the net describe the importance of each input variable.
  • asked a question related to Artificial Neural Network
Question
4 answers
I want to design an ANN controller for load frequency control of interconnected power systems. Who can help me?
The MATLAB code for the ANN and the Simulink model are ready, but my problem is that I don't know exactly how to proceed.
My problems:
1. What would be the dataset for the input(s) and output? What would be my input and output sources?
2. How can I connect Simulink with the ANN code? I.e., how can I run the Simulink model with the output of the ANN MATLAB code?
Documents such as MATLAB files would help me a lot.
Thanks
Relevant answer
Answer
Hatef, you first need to know: what shortcoming of the existing techniques do you need the ANN to take care of?
Practically speaking, be aware that an ANN is computationally expensive and that its response follows a statistical model (it is not deterministic), so it might not be the best match for power system applications.
Finally, if you would like to develop an ANN model for testing, I suggest Python's scikit-learn or TensorFlow, as they have many nice packages for training and testing your models.
  • asked a question related to Artificial Neural Network
Question
5 answers
I want to know about the simplest artificial neural network that can be used to classify network traffic as normal or attack in Java, using the KDD Cup 99 dataset. Can the classifier read the KDD records as-is, or do they need to be normalized?
Relevant answer
Answer
Thank you all for your valuable answers
  • asked a question related to Artificial Neural Network
Question
29 answers
The ISCX benchmark intrusion detection dataset contains 7 days of synthetically recorded packet details replicating real-time network traffic, with the attacks labelled. I would like to use a neural classifier to import this data and classify it for DDoS. The problems are very big; how can I resolve them?
Relevant answer
Answer
@Abebe Abeshu Diro
Each dataset should have documentation that explains the complete timing of attacks and benign activities. Please check this page and scroll down to find the details of the attack and benign activities:
@Aad ba
For any network traffic dataset, if you have an issue with two flows sharing the same IPs, the best way to label them is to check the timestamp. If you know the attack time, you can label the flows properly.
@Mohamed Debashi
Please try the new dataset CICIDS2017, which also has the labelled .csv files.
  • asked a question related to Artificial Neural Network
Question
4 answers
How do I get the values of the parameters of pimf from a dataset?
Like pimf(x, [a b c d]), or the alternative form pimf(x, C, lambda), where C is the centre and lambda > 0 is the scaling factor. How can I calculate the value of lambda?
Relevant answer
Answer
In the pimf syntax "y = pimf(x, [a b c d])", y is the output, x is the input, and the values a, b, c, d are taken from the process operating range, which is nothing but the universe of discourse of the fuzzy logic (the x-axis, or input). For example, to design a temperature controller for a water tank, the operating range is 30-90 degrees Celsius. This operating range, i.e. the universe of discourse, can be split into three to seven equal or unequal intervals with different MF labels.
The x-axis may have overlapping intervals or not; you can choose all of this based on your application. Example: y = pimf(x, [30 36 39 42]), y = pimf(x, [38 46 50 67]), y = pimf(x, [55 67 73 90]).
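For readers without the Fuzzy Logic Toolbox, the pi-shaped MF can be reproduced in a few lines of NumPy: pimf is the product of an S-shaped spline rise (a to b) and a Z-shaped spline fall (c to d). The parameter values below reuse the water-tank example:

```python
import numpy as np

def smf(x, a, b):
    """S-shaped spline: 0 at x <= a, rising smoothly to 1 at x >= b."""
    x = np.asarray(x, dtype=float)
    m = (a + b) / 2.0
    return np.where(x <= a, 0.0,
           np.where(x <= m, 2 * ((x - a) / (b - a)) ** 2,
           np.where(x <= b, 1 - 2 * ((x - b) / (b - a)) ** 2, 1.0)))

def pimf(x, a, b, c, d):
    """Pi-shaped MF: rises a->b, flat at 1 on [b, c], falls c->d."""
    return smf(x, a, b) * (1 - smf(x, c, d))

# one of the water-tank intervals from the example above
xs = np.linspace(30, 90, 7)
mu = pimf(xs, 38, 46, 50, 67)
```

By construction the membership is exactly 1 on the plateau [b, c], 0 outside [a, d], and 0.5 at the crossover points (a+b)/2 and (c+d)/2, which matches how the [a b c d] parameters are read off the operating range.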
  • asked a question related to Artificial Neural Network
Question
10 answers
The selection of input variables is critical in order to find the optimal function in ANNs. Studies have been pointing numerous algorithms for input variable selection (IVS). They are generally classified in three different groups: (1) dimension reduction, (2) variable selection, and (3) filter. Each group holds several algorithms with specific assumptions and limitations.
If a researcher decides to use ANN, he might be happy to know...
1) Which approach is the most recommended to select ANN input variables? 
2) What are the advantages and drawbacks of your choice in regard to other strategies? 
3) Is the algorithm implemented in any statistical package (R or other free ones are more approachable)? 
Relevant answer
Answer
RE: PCA: it is great for dimension reduction. But if you have a bunch of poor inputs, you will still have poor inputs after PCA. I believe it is also possible that a higher-frequency good input would not show up in the lower-frequency PCA terms.
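A minimal PCA sketch (SVD on centered data) illustrates the point: the components come out ordered by explained variance, but projecting cannot create information that the inputs do not carry. Toy data below:

```python
import numpy as np

def pca(X, k):
    """Project X onto its top-k principal components (SVD on centered data)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained_var = S**2 / (len(X) - 1)   # variance per component, descending
    return Xc @ Vt[:k].T, explained_var

rng = np.random.default_rng(0)
n = 300
signal = rng.normal(0, 3, n)                       # one strong direction
X = np.column_stack([signal,
                     0.5 * signal + rng.normal(0, 0.1, n),
                     rng.normal(0, 0.1, n)])       # plus a near-noise input
Z, var = pca(X, k=2)
```

Here the first component absorbs the shared signal, while the pure-noise input contributes almost nothing; dropping to k=2 loses little, but no choice of k would turn that noise column into a useful predictor.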
  • asked a question related to Artificial Neural Network
Question
9 answers
I have four input vectors, each a (1*36) matrix, and one output vector (1*36). I want to develop code using an ANN which not only processes and trains the network but also displays how the processing is going, i.e. the calculated values of MSE and R at each epoch. Finally, the code should generate a mathematical expression of how the output vector is related to the input vectors. The network is a simple curve-fitting network as given in the tutorials. Any sort of help will be appreciated.
Relevant answer
Answer
Dear Ajay,
I have the same problem. How can I get the mathematical expression relating input and target in an artificial neural network using MATLAB 2015? Can you help me?
  • asked a question related to Artificial Neural Network
Question
16 answers
Does anyone have any suggestions for free code (R or Matlab) to use WNN for time series analysis and forecasting?
Relevant answer
Answer
We are working on MATLAB code.
  • asked a question related to Artificial Neural Network
Question
2 answers
I am working on binary images and want to compare them pixel by pixel using the clonal selection algorithm (CLONALG).
Relevant answer
Answer
Sir,
Thank you very much. I have already gone through these papers, but my problem was not resolved. Do you have any idea about using the clonal selection algorithm for classification purposes?
  • asked a question related to Artificial Neural Network
Question
10 answers
Can you suggest software for relay coordination other than ETAP and PSCAD?
Relevant answer
Answer
  • asked a question related to Artificial Neural Network
Question
14 answers
In designing classifiers (using ANNs, SVMs, etc.), models are developed on a training set. But how should one divide a dataset into training and test sets? With too little training data, our parameter estimates will have greater variance, whereas with too little test data, our performance statistic will have greater variance. What is the compromise? Depending on the application or the total number of exemplars in the dataset, we usually split the dataset into training (60 to 80%) and testing (40 to 20%) without any principled reason. What is the best way to divide our dataset into training and test sets?
Relevant answer
Answer
Generally, k-fold cross-validation (e.g. 10-fold cross-validation) is best. For a minimal dataset, LOO (leave-one-out) should be preferred.
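A k-fold split is easy to build by hand if you don't want a library dependency; a NumPy sketch:

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Shuffle indices 0..n-1 and split them into k roughly equal folds."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

n, k = 103, 10
folds = kfold_indices(n, k)

# each fold serves once as the test set, the rest as training data
train0 = np.concatenate([f for i, f in enumerate(folds) if i != 0])
test0 = folds[0]
```

Averaging the performance statistic over all k held-out folds uses every exemplar for both training and testing, which is exactly what removes the arbitrariness of a single 60/40 or 80/20 split.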
  • asked a question related to Artificial Neural Network
Question
9 answers
How can we calculate the threshold in the PCA algorithm for face recognition if we use Euclidean distance and PPM images?
What value of this variable tells us whether a face is known or unknown?
I wonder, as there is no paper mentioning the value!
Relevant answer
Answer
Dear Dalia,
The approach you are considering for face recognition is the simplest one. However, you can determine the threshold based on the training data.
  • asked a question related to Artificial Neural Network
Question
5 answers
I want to know, step by step, the procedure to apply an ANN to FETs. Waiting for your suggestions or an article. Thanks in advance.
Relevant answer
Answer
Dear Lingeshwaran,
Greetings,
Thank you for your question. It is answered in depth in the attached article. Nevertheless, if you have any question or comment, please do not hesitate to contact me.
Cheers,
José
  • asked a question related to Artificial Neural Network
Question
2 answers
Can anyone help me regarding the classification of proteins with an artificial neural network?
Relevant answer
Answer
Check out this paper:
Elucidating Protein Secondary Structure with Circular Dichroism and a Neural Network. V Hall, A Nash, E Hines, and A Rodger. Journal of Computational Chemistry. (2013) 34(22):2774-86.
  • asked a question related to Artificial Neural Network
Question
6 answers
In neuro-fuzzy systems, we usually use an ANN to learn how to design an FIS (if I'm not right, please correct me first). The ANN helps the FIS to perform better.
I want to know whether you have any experience with, or know of any literature on, using fuzzy inference systems (FIS) to help an ANN perform better.
Relevant answer
Answer
There is a book that might be helpful in understanding the logic behind fusion: the whys and hows.
  • asked a question related to Artificial Neural Network
Question
3 answers
I have to visualize and interpret what the hidden neurons are learning. My network is 784x64x10. I am trying to do this by visualizing the RBM (Restricted Boltzmann Machine) weights learnt after 1 epoch. Is my approach correct? Please guide me if I am wrong.
Relevant answer
Answer
Assuming that you have trained the network on MNIST dataset (which seems to be the case given the input size and the number of output classes) and that the weight matrix of your hidden layer has 784 rows and 64 columns, you can draw 64 different plots each depicting the features that the corresponding hidden unit has learned to detect in the images. 
You can do this by taking each column of the hidden weight matrix one-by-one and reshaping it to the dimensions of the original image (28x28).
In Python the plotting can for example be done with matplotlib imshow function.
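The reshaping step described above, sketched in NumPy with random stand-in weights (a trained 784x64 weight matrix would go in their place):

```python
import numpy as np

rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(784, 64))   # stand-in for the learned 784x64 weights

# one 28x28 "feature image" per hidden unit: reshape each column
tiles = [W_hidden[:, j].reshape(28, 28) for j in range(W_hidden.shape[1])]

# with matplotlib, each tile could then be shown, e.g. plt.imshow(tiles[j])
```

For a trained RBM on MNIST, these tiles typically look like localized strokes or blobs; random weights (as here) just look like noise, which is itself a useful visual check that training did something.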
  • asked a question related to Artificial Neural Network
Question
3 answers
I am creating a 'feedforwardnet' and using 'mapstd' to normalize the input
[pn,ps]=mapstd(training_data);
[tn,ts]=mapstd(target_data);
Now I want to calculate various error measures: RMSE, correlation coefficient R, Nash model efficiency (ME), etc. So how can I find the output of the trained network corresponding to the target data?
Relevant answer
Answer
Simulate the net on the normalized inputs and then reverse the target normalization, e.g.:
yn = net(pn);
y = mapstd('reverse', yn, ts);
  • asked a question related to Artificial Neural Network
Question
4 answers
In multi-sensory fusion (as in the ventriloquist effect), it is broadly accepted that humans integrate the various modalities by weighting them depending on the variance (reliability, more precisely) of the distribution of the corresponding modality (there is a large literature on this subject). Most models I found for modelling this effect are Bayesian ones which assume the different variances are known in advance (which seems to me very implausible from a developmental perspective). Do you know of any work trying to learn these variances from stimuli (preferably in a developmental perspective), or proposing a hypothesis about which mechanism this may be based on?
Thanks.
Relevant answer
Answer
It is true that learning is made easier if you assume that the identity of the "true" event is known, as learning becomes fully supervised; it is also true that learning agents usually do not have access directly to this information.
In the Bayesian framework, however, event identity is not strictly necessary, as the agent can use its current model to compute a probability distribution over the latent (hidden) variables, and use this probability distribution to guide model updating (either by deciding which model to update according to this distribution, or, in a more general setting, update all models in proportion to this distribution). This is an intuitive explanation of what any EM or EM-inspired algorithm would do. More generally, there is a wide variety of techniques for unsupervised or semi-supervised learning that could be relevant to your case, in which the agent simultaneously learns the number of classes and the parameters for each class, with only stimuli data as samples.
I hope this answer provides relevant keywords; the literature on these techniques is vast, with many very good entry points and textbooks available (when in doubt, check Russell & Norvig's AIMA). 
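As a concrete instance of the EM idea mentioned above, variances can indeed be learned from unlabeled samples alone. A NumPy sketch for a 1-D two-component Gaussian mixture, where the components stand in for modalities of different reliability (toy data, illustrative initialization):

```python
import numpy as np

rng = np.random.default_rng(0)
# unlabeled samples from two 'modalities' with different reliabilities
data = np.concatenate([rng.normal(0.0, 0.5, 400),    # reliable (low variance)
                       rng.normal(3.0, 1.5, 400)])   # unreliable

mu, var, w = np.array([-1.0, 4.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
for _ in range(100):
    # E-step: responsibilities from the current model (no labels needed)
    p = w * np.exp(-(data[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    r = p / p.sum(axis=1, keepdims=True)
    # M-step: update weights, means and, crucially, the variances
    nk = r.sum(axis=0)
    w = nk / len(data)
    mu = (r * data[:, None]).sum(axis=0) / nk
    var = (r * (data[:, None] - mu) ** 2).sum(axis=0) / nk
```

The variances recovered this way are exactly the reliability estimates a cue-combination model needs, and nothing in the loop ever sees which sample came from which source.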
  • asked a question related to Artificial Neural Network
Question
9 answers
The use of recurrent neural networks in control systems is becoming a more and more interesting subject, but it depends on the training method used. In control, we need the training to be online, which suits the applications we are working on. So what are the learning approaches for dynamic ("recurrent") neural nets, and which are the best among them?
Relevant answer
Answer
I can't speak to the Hessian-free method in practical terms. I have read about it here and there, and it wasn't clear how it is actually implemented for training RNNs; the authors aren't very specific about that, for all I've seen. I have tried the cubature Kalman filter though, in particular in the form called the square-root cubature Kalman filter (SCKF). In that case I considered all weights and biases in the network as state variables to be estimated (identified), under the assumption that they form a "dynamic system" which simply stays where it is. This filter also worked very well to estimate the states of actual physical systems. I tried it on the double pendulum first: from only observing the movements of the two masses, estimate the length of each pendulum and the two masses. It worked very stably. I could even recover the gravitational acceleration 9.81 m/s^2 starting from a random initial guess. This was all a simulated 2D double pendulum though, so there were no actual physical measurements. But I got the best impression of the SCKF from estimating the movements of the two mass-spring systems in the so-called two-mass model of phonation, which simulates the human glottis. The data here were actual recorded speech signals, inverse-filtered (removing the spectral envelope). I had not expected it to work so well, but it did (I still have to finalize and publish this; it may be interesting for others in speech technology and science).
Now I'm interested in using cubature Kalman filtering for RNNs too. I foresee that it will be hard to do this for large RNNs, say with 500 weights, because the filter has to keep updating a covariance matrix of that size.
I am not sure whether what Branimir Tododovic proposed would work so well, namely to simultaneously estimate the weights and the dynamic states of the network. The problem seems to be that they follow very different dynamics on much different time scales; would this not get in the way of convergence? We want the network weights to converge to constants, but the dynamic state is not like that. Well, I will see what works.
But I have another general question regarding the Hessian-free method: what is its complexity? For the cubature Kalman filter the cost scales with the cube of the number of weights, which is a real problem for large networks. What is the complexity of the Hessian-free method for training the weights of a recurrent neural network?
  • asked a question related to Artificial Neural Network
Question
4 answers
Is anyone aware of algorithms and theoretical work about hyperparameter optimization methods especially for online learning algorithms?
I have the following scenario in mind.
Let's assume I use, e.g., the Passive-Aggressive 1 algorithm, which has a hyperparameter "c" that specifies the aggressiveness of the updates.
Usually, I would use a grid search/cross-validation combination to find the optimal "c" hyperparameter for a specific dataset.
However, for online learning I might also use prequential evaluation combined with a set of classifiers with different hyperparameters, and choose the one with the best accuracy/lowest loss/... on all previous examples, or on a window of examples. I might also introduce a discount factor for the performance.
Is there any theoretical work or a thorough evaluation existing?
What are other possible approaches?
Relevant answer
Answer
In hyper-heuristics, the parameters of the optimised algorithms were optimised too, in the works of M. Oltean et al. From their observations, this took too much time.
For the hyperparameters, the winner of CHeSC 2011 attempted something like what you suggested. The algorithm produced can be a long list of randomly selected operations.
To the best of my knowledge, optimising the hyperparameters this way has yet to be done properly, so please let me know when you succeed.
Patricia
  • asked a question related to Artificial Neural Network
Question
12 answers
I have recently become interested in studying neural network ensembles. I would like to examine these networks and their structure in more detail; do they have a special algorithm? Thanks.
Relevant answer
Answer
For the very basics and fundamentals of NNs, you can refer to this book:
Hagan, M.T., Demuth, H.B., Beale, M.H.: Neural Network Design. PWS (1996)
I don't know which level you are at; please refer to that book and link it with the NN toolbox in MATLAB. It is very useful.
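Regarding the ensemble question itself: the simplest ensemble is prediction averaging, and its variance-reduction effect is easy to demonstrate. The 'members' below are stand-ins for independently trained networks (true function plus member-specific noise):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
target = np.sin(2 * np.pi * x)

# stand-ins for 10 independently trained networks
members = [target + rng.normal(0, 0.3, x.size) for _ in range(10)]
ensemble = np.mean(members, axis=0)   # simple averaging ensemble

member_mse = np.mean([(m - target) ** 2 for m in members])
ensemble_mse = np.mean((ensemble - target) ** 2)
```

By Jensen's inequality, the averaged prediction's squared error is never worse than the members' average squared error, and when member errors are roughly independent the MSE drops by close to a factor of the ensemble size; that is the core idea behind bagging and most NN ensemble schemes.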
  • asked a question related to Artificial Neural Network
Question
7 answers
I know that an artificial neural network is a very suitable method for predicting the compressive strength of concrete. I wonder if there is a better way than this method for predicting this parameter?
Relevant answer
Answer
I didn't work with neural networks, so I don't know about their effectiveness.
I have attached my studies.
If you want to ask a question, you can do so.
  • asked a question related to Artificial Neural Network
Question
11 answers
I would like to explore and analyse classification-related datasets using artificial neural networks. What is a powerful tool, and a fast and effective way, to learn about and analyse classification datasets? If there is any good tutorial on ANN basics and data analysis, please advise me.
Relevant answer
Answer
Dear Santosh,
I would start with the Neural Network Toolbox manual provided by MathWorks (MATLAB). Then you could try books such as:
Introduction to the Math of Neural Networks
by Jeff Heaton
Code Your Own Neural Network: A step-by-step explanation.
by Steven C. Shaffer
Understanding Neural Networks
by John Iovine
Good luck!
Luiza
  • asked a question related to Artificial Neural Network
Question
7 answers
I want to know the details of methods to identify cancer using neural networks (for example, using scan images or some samples). Please suggest some articles on how to identify cancer using artificial neural networks.
Relevant answer
Answer
If you are looking at cancer detection and not just cancer image detection, you can also use a weighted approach combining image features with screening tests. You can further include multi-image results and tests (based on historical images of the patient). This approach will give additional evaluation parameters for the NN.
  • asked a question related to Artificial Neural Network
Question
9 answers
I am working on a project for early detection of cascading collapse in power systems during steady-state conditions using a neural network approach. However, I have a problem finding data for the neural network. Can anyone help me, or offer any advice? I would like to use the IEEE 9/14-bus system as my model.
Relevant answer
Answer
UCI machine learning Repository
  • asked a question related to Artificial Neural Network
Question
1 answer
I am using the KDD dataset and want to label all the features using a U-matrix, or is there any other option to do that? Is there a function in MATLAB for this?
Relevant answer
Answer
It is easy to do with any SOM implementation, as the U-matrix is just the visual representation of the weight vectors on the map. You can calculate the most nearby data element for each neuron and use that as a label. I don't know about MATLAB, but ESOM Tools will plot the labels if you either train the map with the software (slow) or you supply a .bm file along with the U-matrix which you train elsewhere (e.g., in parallel with Somoclu, which can also be called from MATLAB).
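The nearest-data-element labelling described above is a one-liner with NumPy broadcasting; the codebook here is a random stand-in for a trained SOM's weight vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 5))
labels = np.array(["item%d" % i for i in range(100)])
weights = rng.normal(size=(6 * 4, 5))   # stand-in for a trained 6x4 SOM codebook

# label each neuron with its nearest data element, as suggested above
d = np.linalg.norm(weights[:, None, :] - data[None, :, :], axis=2)
neuron_labels = labels[d.argmin(axis=1)]
```

Plotting these labels on top of the U-matrix grid then shows which data records each region of the map represents.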
  • asked a question related to Artificial Neural Network
Question
9 answers
I've already constructed the architecture of the neural network, and it returns only the target, but I don't know how to predict future values.
Relevant answer
Answer
I would suggest you just use the built-in app for a NAR in MATLAB - it will take you step by step through making sure the time-series dataset is ready, then training the NAR, then testing, etc.  After you get it running on a simple NAR, you can start tweaking it in MATLAB or export it to whatever environment you'd rather develop in.  If you don't have MATLAB I think you can download a 30 day trial or something of the sort - and I would hope that the National Institute of Statistics and Applied Economics would have at least one MATLAB license - you'll find it useful for lots of other stuff as well.  Best of luck!
  • asked a question related to Artificial Neural Network
Question
7 answers
Is it necessary to perform uncertainty and sensitivity analysis on data before using it in neural network modelling?
Relevant answer
Answer
There is experimental uncertainty in f1-f6, due to e.g. sensor precision. However, the measurement error in f1-f6 may be much larger than indicated by the sensor precision figures in the data sheet, e.g. due to temperature variations, poor positioning of the sensors, or if you perform an indirect measurement to estimate some of the f:s. In addition, in most practical situations there are uncertainties in the test setup itself. That is, if you do 100 repeated tests, then f1-f6 will not be exactly constant; rather, they should probably be represented by probability density functions (PDFs) (aleatory uncertainty) and/or intervals (epistemic uncertainty).
If the experimental uncertainty is not considered when developing a model such as a neural network, there is a risk of overfitting and of the model inadvertently being calibrated to include experimental uncertainties.
In addition to experimental uncertainty, there are also different types of uncertainties related to the model itself: 1) input/parameter uncertainty, 2) numerical approximations, and 3) model form uncertainty due to the selected model structure, assumptions and simplifications.
If these model uncertainties are not considered when using the model for predictions, the predictions cannot be said to have high credibility, at least not in a formal context.
I can recommend this book:
High-consequence areas like nuclear power and climate have come far with advanced methods for uncertainty quantification (UQ). Personally, I'm working on approximate/simplified methods for UQ of simulation models of dynamic physical systems, for use in early phases of system development (aeronautical context).
  • asked a question related to Artificial Neural Network
Question
11 answers
I am asking the community here to get a brief overview: does there exist a method or algorithm that enables me to transform a traditional perceptron-based feed-forward artificial neural network into its open equation form?
The closed form of an equation for a linear system, for example, is
y = kx + d, and its open form is R = y - kx - d.
Any idea or hint is appreciated.
Kind regards and thank you in advance.
Relevant answer
Answer
The input-output function is fairly straightforward to describe:
y = f( W z + b)  where y is the vector of outputs, z the vector of values (= activations) of the hidden units, W is the weight matrix, b a bias vector and f(.) a fixed non-linear function (e.g. a sigmoid).
In turn, and assuming a single hidden layer, z is defined from the input vector x using the same equation form (but not necessarily the same values for W and b, nor f possibly):
z = f(W' x + b')
Plugging z into the first equation gives you the input-output function.
The same reasoning applies if you want to consider more hidden layers.
So, once the ANN is estimated, there is no big mystery in what it is computing (although such a closed form is not necessarily straightforward to interpret).
  The "black-box" nature of such a model is more likely referring to the optimization problem which needs to be solved to estimate the parameters (W + b values, for each layer). This problem is non-convex, the objective function is highly non-linear and exhibits many local optima. The training algorithm also typically includes many meta-parameters. So, it is difficult to characterize, for a given training dataset, to which parameter values the learning algorithm will converge. Nevertheless, once the learning is done and hence the parameters fixed, the input-output relationship is well known (see above).   
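The closed form described above is easy to write down numerically. A NumPy sketch of a 3-4-2 network showing that the layer-by-layer computation and the single closed-form expression agree (random weights, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.tanh                     # the fixed non-linearity

# one hidden layer: x (3 inputs) -> z (4 hidden) -> y (2 outputs)
Wp, bp = rng.normal(size=(4, 3)), rng.normal(size=4)   # W', b'
W, b = rng.normal(size=(2, 4)), rng.normal(size=2)

x = rng.normal(size=3)
z = f(Wp @ x + bp)              # z = f(W' x + b')
y = f(W @ z + b)                # y = f(W z + b)

# the same thing written as one closed-form expression
y_closed = f(W @ f(Wp @ x + bp) + b)
```

Subtracting the right-hand side from y then gives the "open form" R = y - f(W f(W' x + b') + b) analogous to R = y - kx - d in the linear case.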
  • asked a question related to Artificial Neural Network
Question
6 answers
Artificial neural networks and rule-based systems are types of knowledge-based systems. So, can we consider ANFIS (Adaptive Neuro-Fuzzy Inference System) a knowledge-based system?
Relevant answer
Answer
Sorry, but that is not the case! Knowledge-based systems are not database-learning systems the way neural networks and others are. A knowledge-based system is based on rules and mathematical proofs that explain and manage explicit knowledge (concept classes, concept instances, rules, properties, and so on); this is not what neural networks do. One main difference is the cross-validation neural networks require (because you cannot be sure every datum lies within the learned non-linear function), which ANFIS does not require, making it much more efficient than any existing neural network, even deep learning approaches. ANFIS is for me one of the most powerful learning systems, but it is not knowledge-based from a scientific point of view; it is just a data-learning system. The strength of ANFIS is using neural network structures to learn from input/output data and create a fuzzy inference system. But the rules of the fuzzy inference system are hidden and not clearly extracted! A way to explore :-)
  • asked a question related to Artificial Neural Network
Question
10 answers
Hello everyone, I am working on data prediction using the ANN method. I need to know whether there are any specific rules/references/mathematical formulas for the minimum number of observations required to apply ANN techniques to predict data. Note that I have 80 observations of soil properties (14 variables) to predict the SPT value. Can I get any ideas?
Relevant answer
Answer
Dear Masrur,
I do not know of any paper that suggests a minimum number of data points for an ANN. But based on my experience, at least 20 data points are required for modelling. You can check some of my papers. If you have 80 data points, in my opinion that is enough for ANN modelling.
I hope this answer is helpful for you.
  • asked a question related to Artificial Neural Network
Question
2 answers
I am trying to carry out classification using artificial neural networks in the R software, using Landsat images. I have read lots of articles, but most use MATLAB. Can anyone point me in the right direction and show me how to do this?
Relevant answer
Answer
In GNU R, some of the packages that can be used for ANNs are neuralnet, nnet and RSNNS.
For handling images and maps, various libraries and GRASS are available.
Use the task views approach.
  • asked a question related to Artificial Neural Network
Question
4 answers
There are many intelligent models developed in computer engineering, including fuzzy logic, genetic algorithms, artificial neural networks, and, more recently, the Adaptive Neuro-Fuzzy Inference System (ANFIS). My question is: to what extent are these tools applied in petroleum engineering? And, more specifically, how familiar are petroleum engineers with these tools?
Relevant answer
Answer
See this
  • asked a question related to Artificial Neural Network
Question
2 answers
Please explain your answer in simple words.
Relevant answer
Answer
Firing rate is the number of spikes generated by a neuron per unit of time. Temporal encoding has a higher capacity to compress information than rate encoding, which is probably why it is preferred. It is also believed that temporal encoding is present in the brain because of our fast pattern recognition ability. Some works suggest that this process can only be explained using action potential timing as the codification method, since analogue pattern matching is done on a time scale of the order of dozens of milliseconds while biological neurons oscillate at only about 100 Hz. These are the results observed in different studies of visual pattern analysis and pattern classification carried out with macaque monkeys, fixing the time response at just 20-30 ms. Since the firing rate of neurons is usually below 100 Hz, a coding of analogue variables by firing rates is traditionally considered dubious for pattern recognition.
  • asked a question related to Artificial Neural Network
Question
4 answers
I trained my neural network and computed the MAD, MAPE and RMSE for different numbers of hidden nodes. My question is that the coefficient of determination (R squared) is negative for every number of hidden neurons that I tested. The code is:
set.seed(1234)
R <- function(h.size) {
  net <- nnet(mande ~ bed + bes + ATM + salery.day.effect + off.day.effect + week.dat + work.day,
              data = train.data, size = h.size, decay = 5e-4, maxit = 500,
              entropy = TRUE, Hess = TRUE)
  ydi <- test.data$mande
  yi <- predict(net, test.data)
  ym <- mean(test.data$mande)
  R <- 1 - ((sum((yi - ydi)^2)) / (sum((ydi - ym)^2)))
  c(h.size, R)
}
R <- t(sapply(2:20, FUN = R))
The output is:
[,1] [,2]
[1,] 2 -0.07756268
[2,] 3 -1.45965479
[3,] 4 -1.64501223
[4,] 5 -0.79321810
[5,] 6 -0.54140203
[6,] 7 -0.67961167
[7,] 8 -0.83259432
[8,] 9 -0.77899111
[9,] 10 -0.57165255
[10,] 11 -0.76664634
[11,] 12 -0.88438609
[12,] 13 -0.76306553
[13,] 14 -0.57160636
[14,] 15 -0.56628827
[15,] 16 -0.70073809
[16,] 17 -0.71454018
[17,] 18 -0.71002645
[18,] 19 -0.66231219
[19,] 20 -0.53359629
Did I do this right? Why are all the R-squared values negative? What should I do? Thanks
Relevant answer
Answer
Dear Ghazaleh 
R-squared compares the fit of the chosen model with that of a horizontal straight line (the null hypothesis). If the chosen model fits worse than a horizontal line, then R-squared is negative. R-squared is not always the square of anything, so it can have a negative value without violating any rules of mathematics. R-squared is negative precisely when the chosen model does not follow the trend of the data and therefore fits worse than a horizontal line.
So my advice is to change to a model that fits your data better. Good luck.
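The sign behaviour can be seen in a small sketch (Python, with made-up numbers; the formula is the same `1 - SS_res / SS_tot` that the R code in the question computes):

```python
# R^2 = 1 - SS_res / SS_tot, where SS_tot is the residual sum of squares
# of the "horizontal line" baseline (predicting the mean of the data).
def r_squared(y_true, y_pred):
    y_mean = sum(y_true) / len(y_true)
    ss_res = sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred))
    ss_tot = sum((yt - y_mean) ** 2 for yt in y_true)
    return 1.0 - ss_res / ss_tot

y = [1.0, 2.0, 3.0, 4.0]  # made-up observations with mean 2.5

# Predicting the mean everywhere gives R^2 = 0 ...
assert r_squared(y, [2.5] * 4) == 0.0
# ... a model that tracks the trend gives R^2 close to 1 ...
assert r_squared(y, [1.1, 2.1, 2.9, 3.9]) > 0.9
# ... and a model that fits WORSE than the mean gives a negative R^2.
assert r_squared(y, [4.0, 3.0, 2.0, 1.0]) < 0.0
```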
  • asked a question related to Artificial Neural Network
Question
6 answers
Imagine that you have several emotions as predictors (input layer) of a certain output (let's say... wellbeing). You create an ANN, and the best architecture contains 5 hidden nodes.
To determine the contribution of the individual emotions, apart from sensitivity analysis, can we also measure the value of the weights of the connections between them and the hidden nodes?
And if the contribution of a given emotion to the most important hidden nodes is inhibitory, can we assume that the relation between that emotion and the outcome is negative?
TIA
Relevant answer
Answer
Dear Rita,
There is a real number associated with each connection in an ANN, called the weight of the connection. We denote by Wij the weight of the connection from unit ui to unit uj. It is then convenient to represent the pattern of connectivity in the network by a weight matrix W whose elements are the weights Wij. Two types of connection are usually distinguished: excitatory and inhibitory. A positive weight represents an excitatory connection, whereas a negative weight represents an inhibitory connection.
This is the general explanation of negative weights. Others explain it by noting that the neurons or "nodes" of an ANN correspond to the excitatory neurons of the brain. The anti-correlations represented by negative weights may instead be realised by parallel opposing populations of excitatory neurons, which are kept separated and anti-correlated by the inhibitory neurons connecting them.
So it is hard to say that negative weights mean negative emotions. I would rather try a different ANN architecture.
What ANN architecture are you using? 
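A minimal sketch (Python, with made-up weights) of why a single inhibitory connection does not by itself make the input-output relation negative: a second negative weight on the hidden-to-output connection can flip the sign back.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Tiny 1-input, 1-hidden-unit, 1-output network with made-up weights.
w_in, w_out = -2.0, -3.0   # BOTH connections are inhibitory (negative)

def net(x):
    h = sigmoid(w_in * x)   # inhibitory input -> hidden connection
    return w_out * h        # inhibitory hidden -> output connection

# Although the input->hidden weight is negative, increasing the input
# INCREASES the output, because the second negative weight flips the sign.
assert net(1.0) > net(0.0)
```

This is why reading off the sign of one weight layer in isolation is unreliable, and why sensitivity analysis (perturbing an input and observing the output) is usually preferred.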
  • asked a question related to Artificial Neural Network
Question
9 answers
I would like to save the best network for further use and want to know when it is fully trained.
Relevant answer
Answer
I think there is no well-known method that can answer this question 100%, because all NN algorithms are approximate rather than exact.
The error rate is the only clue that may lead to the conclusion "trained" or "not (fully) trained", and it is an approximate measure based on one sample of the population.
If your net yielded a 0% error rate in training and testing, you would be satisfied, because this is all you can do.
However, you cannot say that your net is fully trained, because there is no guarantee that it will correctly classify other examples from the same population outside the training and testing samples.
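In practice, a common proxy for "trained enough" is early stopping: monitor the error on a held-out validation set and keep the network from the epoch with the lowest validation error. A minimal sketch in Python (the validation errors are made up):

```python
# Early stopping on a validation metric: keep the epoch with the lowest
# validation error, and stop after `patience` epochs without improvement.
def train_with_early_stopping(val_errors, patience=3):
    best_err, best_epoch, since_best = float("inf"), -1, 0
    for epoch, err in enumerate(val_errors):
        if err < best_err:
            best_err, best_epoch, since_best = err, epoch, 0  # save this net
        else:
            since_best += 1
            if since_best >= patience:
                break  # validation error stopped improving: stop training
    return best_epoch, best_err

# Simulated validation errors: improve, then start overfitting.
errors = [0.9, 0.7, 0.5, 0.45, 0.46, 0.48, 0.50, 0.55]
epoch, err = train_with_early_stopping(errors)
assert (epoch, err) == (3, 0.45)  # best net is from epoch 3, before overfitting
```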
  • asked a question related to Artificial Neural Network
Question
15 answers
Thanks in advance for your replies.
Relevant answer
Deep learning is most definitely not the end of signal or image processing. It is just another tool for signal and image processing that has recently been shown to be very effective when applied carefully.
  • asked a question related to Artificial Neural Network
Question
6 answers
What are some of the latest artificial neural networks techniques for stock prediction?
Relevant answer
Hi Dennis, "Introduction to Neural Networks" by Dr. John A. Bullinaria may help you; it covers almost everything except recurrent neural networks (e.g. Elman and Elman-Jordan networks).
  • asked a question related to Artificial Neural Network
Question
4 answers
I am using this method to predict the stock market.
Relevant answer
Answer
Yes, you can evolve the weight matrices of an ANN using a genetic algorithm: select one architecture for your ANN based on the training data, randomly generate weight matrices as chromosomes, and set the reciprocal of the mean squared error (the same error used in the backpropagation method) as the fitness function. You can then select appropriate genetic operators for your chromosome encoding. For stock-market prediction, you can get training data on any financial-market website (e.g. sbimf.com). Download the NAVs of different stocks for the last 2 years; you can then convert these NAV values into training data by setting the first 5 days' NAVs as the input and the 6th day's NAV as the output. Arrange all your NAVs for the last 2 years in the form of input and output vectors. Then you are ready to implement your algorithm in either Java or MATLAB. For more explanation you can consult the books AI by N.P. Padhy and Neural Networks: A Classroom Approach by Satish Kumar. Best wishes,
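The recipe above can be sketched as follows (Python, with a toy linear "network" and made-up GA parameters rather than real NAV data; the fitness is the reciprocal of the error, as in the answer):

```python
import random

random.seed(0)

# Toy data: learn y = 2*x1 - x2 with a single linear "neuron" whose two
# weights are evolved by a genetic algorithm instead of backpropagation.
data = [((x1, x2), 2 * x1 - x2) for x1 in range(-2, 3) for x2 in range(-2, 3)]

def mse(w):
    return sum((w[0] * x1 + w[1] * x2 - y) ** 2 for (x1, x2), y in data) / len(data)

def fitness(w):
    return 1.0 / (1.0 + mse(w))  # reciprocal of the mean squared error

# Chromosomes = randomly generated weight vectors.
pop = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(20)]
initial_best = min(mse(w) for w in pop)

for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                        # selection: keep the fitter half
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        child = [(a[0] + b[0]) / 2, (a[1] + b[1]) / 2]      # crossover: averaging
        child[random.randrange(2)] += random.gauss(0, 0.3)  # mutation
        children.append(child)
    pop = parents + children                  # elitism: the best always survive

best = max(pop, key=fitness)
assert mse(best) <= initial_best  # elitism guarantees no regression
```

The same loop applies to a multi-layer network: the chromosome simply becomes the flattened list of all weight matrices.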
  • asked a question related to Artificial Neural Network
Question
6 answers
I just want to know who first used artificial neural networks (ANNs) to detect and/or locate faults in electrical networks.
Relevant answer
Answer
Dear Francesco Bonanno, I haven't worked with this type of ANN. Can you tell me what favours its use for fault prediction?
  • asked a question related to Artificial Neural Network
Question
6 answers
In an artificial neural network, which data normalization method is normally used? I found four types of normalization:
1. Statistical or Z-score normalization
2. Median normalization
3. Sigmoid normalization
Which normalization is best?
Relevant answer
Hello,
Normalization means casting a data set to a specific range like [0,1] or [-1,+1]. Why do we do that? To eliminate the influence of one factor (feature) over another. For example, suppose olive production ranges from 5,000 to 90,000 tons, i.e. the range is [5000, 90000], while temperature ranges from -15 to 49 °C, i.e. the range is [-15, 49]. These two features are not in the same range; casting both of them into the same range, say [-1,+1], eliminates the influence of production over temperature and gives equal weight to both.
On the other hand, the gradient-descent algorithm (the basis of the backpropagation algorithm used in neural networks) converges faster with normalized data.
If all features already lie in the same range, then no normalization is required. One drawback of normalization appears when the data contain outliers (anomalies), because most of the data will then be aggregated into a very small range while only the outliers lie near the boundaries.
Z-score is a standardization method, also used for scaling data, that is useful when the data contain outliers. It transforms the data to have zero mean and a standard deviation of 1.
Read the link
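A minimal sketch in Python of the two scalings discussed above, using the olive-production and temperature example (the numbers are illustrative), plus the outlier effect:

```python
# Min-max normalization maps a feature to [0, 1]; z-score standardization
# gives it zero mean and unit standard deviation.
def min_max(xs):
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def z_score(xs):
    n = len(xs)
    mean = sum(xs) / n
    std = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    return [(x - mean) / std for x in xs]

production = [5000, 20000, 45000, 90000]   # tons, range [5000, 90000]
temperature = [-15, 5, 30, 49]             # deg C, range [-15, 49]

# After scaling, both features live on comparable ranges, so neither
# dominates the other during training.
for feature in (production, temperature):
    scaled = min_max(feature)
    assert min(scaled) == 0.0 and max(scaled) == 1.0
    standardized = z_score(feature)
    assert abs(sum(standardized)) < 1e-9  # zero mean

# Drawback: with an outlier, min-max squashes the rest of the data
# into a tiny sub-range near one boundary.
squashed = min_max([1, 2, 3, 4, 1000])
assert squashed[3] - squashed[0] < 0.01  # the values 1..4 collapse near 0
```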
  • asked a question related to Artificial Neural Network
Question
2 answers
Based on the cyber-physical system described in the article http://palensky.org/pdf/Palensky2013.pdf
I suggest an ANN, but it requires an accurate data set for its training. Is there any other solution or suggestion for modelling cyber-physical systems?
Relevant answer
Answer
Dear Professor,
Fantastic article, thanks for the suggestion.
  • asked a question related to Artificial Neural Network
Question
9 answers
I am confused by these two terminologies. Is there any difference between a multi-label output and multiple outputs in the case of artificial neural networks? Please reply with some easy examples.
Thanks in advance
Relevant answer
Answer
Typically, in machine learning, when you talk of
  • multi-label, you mean a classification problem in which each instance can be assigned several labels at once, i.e. the response is a subset of the label set (e.g. both A and C out of {A,B,C}), in contrast to multi-class problems where exactly one label applies
  • multiple outputs, you mean a supervised learning problem (hence, either a regression or a classification problem) whose response variable has dimension >= 2, i.e. it is a vector and not a scalar
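The distinction is easiest to see in the shape of the target values; a sketch in Python with made-up labels and numbers:

```python
# Three examples (rows) with label set {A, B, C}.

# Multi-class: exactly one label per example -> a scalar class index.
multi_class_y = [0, 2, 1]                 # A, C, B

# Multi-label: any subset of labels per example -> a binary indicator vector.
multi_label_y = [[1, 0, 1],               # A and C
                 [0, 0, 0],               # no label at all
                 [1, 1, 1]]               # all three labels

# Multiple outputs: the response itself is a real-valued vector.
multi_output_y = [[1.5, -0.2],
                  [0.3, 2.7],
                  [-1.1, 0.9]]

assert all(isinstance(y, int) for y in multi_class_y)   # scalar targets
assert all(len(y) == 3 for y in multi_label_y)          # one bit per label
assert all(len(y) == 2 for y in multi_output_y)         # 2-dimensional output
```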
  • asked a question related to Artificial Neural Network
Question
2 answers
I need to develop a new VmAllocationPolicy.
Where can I test it against VmAllocationPolicyBio?
In what environment should I run my work?
Can I test it with the built-in examples? What workloads should I use?
Relevant answer
Answer
CloudSim: you can extend its VmAllocationPolicy class and test your policy using the built-in examples and their workloads.