Science topic

Neural Networks - Science topic

Everything about neural networks
Questions related to Neural Networks
  • asked a question related to Neural Networks
Question
7 answers
I want to detect the change in time-series data using an RNN-LSTM. I know how to do it with supervised learning, but I am unable to do the same with unsupervised learning. In this case, I have time-series data for the electricity consumption of a customer, with distorted patterns after some date. Before that date, the consumption patterns seem normal (no major change). After that date, the consumption sharply decreased compared to the previous patterns. I want to detect the change on the day when the pattern stops looking normal. I am searching for a way forward for my problem.
Relevant answer
Answer
I would give Prophet (https://facebook.github.io/prophet/) a try. It works very well and is extremely easy to use, with nice visualizations. Behind the scenes, it uses different methods including LSTM/RNN.
  • asked a question related to Neural Networks
Question
3 answers
Dear colleagues,
I am trying to find the best method with neural networks to classify and calculate cells from microscope feedback images. I need to create a simple network as a start. I do not want a complicated pre-trained model.
TIA :)
Relevant answer
Answer
Automated cell classification is an important yet challenging computer vision task with significant benefits to biomedicine. In recent years, several studies have attempted to build an artificial-intelligence-based cell classifier using label-free cellular images obtained from an optical microscope.
Regards,
Shafagat
  • asked a question related to Neural Networks
Question
4 answers
I have created a deep neural network for the classification of breast lesions. There are two possible outcomes. The lesion is either benign or malignant. My output layer is defined as below:
model.add(Dense(2, activation='sigmoid'))
My query is whether the number of neurons in the output layer should be 2 or 1. I have read a few articles where the number of neurons in the output layer is the same as the number of class labels, whereas in some blogs, the number of neurons should be 1 for a two-class (binary) problem. Any suggestions would be appreciated.
Relevant answer
Answer
Dear Warid,
As you already mentioned, both of the cases can be implemented. However, please note that for such a binary classification problem, if you use 2 neurons in the output layer you should use 'softmax' as the activation function of the output layer, and if you use 1 neuron, 'sigmoid' would be a good choice.
Generally, for a classification problem it is common to have as many neurons in the output layer as there are classes, followed by a 'softmax' activation.
Hope it helps you!
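To make the equivalence concrete: a 1-neuron sigmoid output and a 2-neuron softmax output encode the same binary decision. A minimal framework-free Python sketch (the logit value is made up for illustration):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def softmax(zs):
    m = max(zs)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

# One logit z through a sigmoid equals two logits (0, z) through a softmax:
z = 1.3
p_sigmoid = sigmoid(z)
p_softmax = softmax([0.0, z])[1]  # probability of the "malignant" class
print(abs(p_sigmoid - p_softmax) < 1e-12)  # True
```

Either head works; pair the 2-neuron head with categorical cross-entropy and the 1-neuron head with binary cross-entropy.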
  • asked a question related to Neural Networks
Question
2 answers
If there were quantum mechanical equivalents of individual neurons and of larger networks of neurons, and if quantum mechanisms of error correction worked at those levels, you could get something like consciousness. This is because information could (in principle) flow between neurons, which means you would have a mechanism for some sort of distributed computing inside the brain. What's your view?
An alternate (rather elaborate) discussion about the two can be found below. However this particular idea just emerged once I started rethinking about information in general.
Relevant answer
Answer
Navjot Singh I think you are absolutely right to conclude that the key to understanding the operation of the brain is a better understanding of fundamental physics.
However I don’t think quantum theory will help. The Spacetime Wave theory indicates that there are two ways in which neurons can affect each other. One way is direct neuron network connection and the other way is the collective effect of electromagnetic wave action.
Richard
  • asked a question related to Neural Networks
Question
2 answers
Cross-Validation for Deep Learning Models
Relevant answer
Answer
Train with different learning rates (higher ones suggested), and do some shuffling. K-fold validation sometimes helps sort out overfitting issues on its own, but carrying values over from past iterations may create difficulties. You will be able to get 95% or even higher accuracy with a generalized validation process, but you need to experiment with the hyperparameters to achieve the desired outcome.
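For reference, k-fold cross-validation only needs the fold indices; a minimal Python sketch of generating them (the model-fitting step is a placeholder, not a specific API):

```python
def kfold_indices(n, k):
    # yield (train, validation) index lists for k-fold cross-validation
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val

for train_idx, val_idx in kfold_indices(10, 5):
    pass  # rebuild the model here, fit on train_idx, evaluate on val_idx
```

Averaging the k validation scores gives a less noisy performance estimate than a single split.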
  • asked a question related to Neural Networks
Question
6 answers
#Neural networks
#Algorithms
#MachineLearning
Relevant answer
Answer
As per my understanding, it is not possible to reconstruct the exact input from the output of the last layer, even if we do not choose complex activation functions at intermediate layers that change the actual output of neurons. When you reconstruct the immediately previous layer from the current layer's output, the weight matrix may not be square, which prevents you from getting an exact solution for the input x (x = np.linalg.lstsq(A, b)). That is my understanding. If any of you have a solution to this problem, please post it here. I am interested too.
Thanks & regards
Bikash
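The point about non-square weight matrices can be demonstrated directly; a small NumPy sketch with made-up numbers (a layer with more inputs than outputs has many preimages, so lstsq returns the minimum-norm one, not the original):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))   # layer mapping 4 inputs -> 2 outputs
x = rng.normal(size=4)        # the "true" input
y = W @ x                     # the layer's output

# Least squares returns the minimum-norm preimage, not x itself:
x_hat, *_ = np.linalg.lstsq(W, y, rcond=None)
print(np.allclose(W @ x_hat, y))  # True: it reproduces the output
print(np.allclose(x_hat, x))      # False: but it is not the original input
```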
  • asked a question related to Neural Networks
Question
3 answers
For example, to classify data in Python or similar environments, we use 0001, 0010, 0100, and 1000 for four classes. Can we simply use 1, 2, 3 and 4 in MATLAB? This would help me when I have multiple classes.
Relevant answer
Answer
For machine learning classifiers, I suggest you keep the "binary" encoding of output data (in fact, you seem to use the "one-hot" encoding, since the corresponding binary numbers would be 001, 010, 011, 100).
In the one-hot encoding all class outputs are equally far away from the "origin", i.e. they are equivalent from the network's point of view. Assigning decimal numbers to classes, you force the network to predict "larger" values for some classes and "smaller" values for others, while the output scale does not carry any information in reality. I think this could lead to worse performance or slower convergence.
If decimal encoding helps some other stage of your workflow, you can of course transform the network outputs to decimal afterwards.
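For reference, converting the decimal labels 1..4 to one-hot vectors (and back) takes only a couple of lines; a minimal Python sketch with made-up helper names:

```python
def one_hot(label, n_classes):
    # e.g. class 3 of 4 -> [0, 0, 1, 0]
    v = [0] * n_classes
    v[label - 1] = 1  # labels assumed to run 1..n_classes
    return v

def to_decimal(vec):
    # inverse mapping back to the 1..n_classes label
    return vec.index(1) + 1

print([one_hot(c, 4) for c in [1, 2, 3, 4]])
# [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
```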
  • asked a question related to Neural Networks
Question
6 answers
Do neural networks handle multicollinearity?
Relevant answer
Answer
Artificial Intelligence
thank you very much for your response )
  • asked a question related to Neural Networks
Question
5 answers
Physics-Informed Neural Networks, proposed in recent years, have developed rapidly, but whether they will be quickly accepted across fields may depend on these challenges: (1) preprocessing of the point distribution, especially for 3D problems; (2) how to apply boundary conditions accurately. It is hoped that related research progress on Physics-Informed Neural Networks can be discussed below.
Relevant answer
Answer
Hi Bo Yu and Li Liu, I'm currently working on PINN applications. I have some ideas for strong boundary conditions.
I would recommend introducing the concept of a functional into neural networks, that is, a neural network that is a function of other neural networks and independent variables. In this way we can have a neural network which fulfills the boundary conditions from the start.
For instance, suppose there is a Dirichlet boundary condition for the solution u(x), namely u(0) = c. We first define a neural network NN1(x). Then another network, formed as NN2(x) = c + x*NN1(x), fulfills the boundary condition by construction, because if you substitute x = 0 into NN2, you get NN2(0) = c.
Likewise, for a Neumann boundary condition, for instance u'(0) = k, we can define NN2(x) = kx + x^2*NN1(x).
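The construction above can be checked numerically; a small Python sketch with an arbitrary stand-in for NN1 and made-up constants c and k:

```python
import math

def nn1(x):                # stand-in for an arbitrary trained network
    return math.sin(3 * x) + 0.5 * x

c, k = 2.0, -1.5

def nn2_dirichlet(x):      # u(0) = c holds by construction
    return c + x * nn1(x)

def nn2_neumann(x):        # u'(0) = k holds by construction
    return k * x + x ** 2 * nn1(x)

print(nn2_dirichlet(0.0))  # 2.0, whatever nn1 is
h = 1e-6                   # central-difference check of the derivative at 0
print(round((nn2_neumann(h) - nn2_neumann(-h)) / (2 * h), 4))  # -1.5
```

Because the conditions hold exactly, the PDE residual loss no longer needs a boundary penalty term.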
  • asked a question related to Neural Networks
Question
2 answers
Among the neural network structures that have been introduced, the RNN has received noticeable attention because of the specialized technique behind its gradient computation with backpropagation (backpropagation through time). On the other hand, we have autograd as a gradient computation tool in frameworks like PyTorch and TensorFlow, which takes care of the analytic computation of a network's gradient via the chain rule and backpropagation. But even autograd does not remove the advantage of this specialized method, because a computer that computes gradients with no regard for the network structure can end up with overly complicated and unnecessary computations. Applying the chain rule without exploiting the structure can still be difficult, and the code can crash during gradient computation. For example, consider the RNN structure: if you introduce it directly to autograd with no hint of the specialized method available for it, autograd will have a hard time computing its gradient, but if you hint at the specific method that exists for RNNs, autograd will compute the gradient easily. So my question is:
I am going to introduce a network structure to autograd that includes an RNN or LSTM. If I introduce the network in the standard way, it will be hard for autograd to compute its gradient, but if I tell autograd that a specific part of my network is an RNN, it will compute it easily, thanks to the tools autograd already provides for well-known structures like RNNs and LSTMs. How can I do that? How should I inform autograd that my network has an LSTM embedded inside it?
Relevant answer
Answer
Navid Hashemi, if I understand your question correctly, you could just use the LSTM functionality in TensorFlow (https://www.tensorflow.org/api_docs/python/tf/keras/layers/LSTM) or PyTorch (https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html), which should specify that part of your system as an LSTM.
  • asked a question related to Neural Networks
Question
3 answers
I want to implement a functional link neural network in python language. Are there any libraries available in python?
Relevant answer
Answer
I think you can get the code from here: https://github.com/chasebk/code_FLNN
  • asked a question related to Neural Networks
Question
3 answers
I am beginning my studies with the goal of applying neural networks to detect anomalies in network packets. My question is which models I could start my research with, and which metrics I should consider in order to minimize false negatives. Since a false negative indicates a threat that was not identified, I was considering using the recall value.
Relevant answer
Answer
On this site (https://ff12.fastforwardlabs.com/) you will find a book (Deep Learning for Anomaly Detection), and I hope you will benefit from it
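On the metric choice: recall is indeed the one that directly penalizes missed threats (false negatives); a minimal sketch with made-up counts:

```python
def recall(tp, fn):
    # fraction of actual threats the model caught; fn = missed threats
    return tp / (tp + fn)

def precision(tp, fp):
    # fraction of flagged packets that were actually threats
    return tp / (tp + fp)

print(recall(tp=90, fn=10))     # 0.9
print(precision(tp=90, fp=30))  # 0.75
```

Tracking both (or their harmonic mean, F1) avoids trivially maximizing recall by flagging everything.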
  • asked a question related to Neural Networks
Question
8 answers
Hello,
In neural network pruning, we first train the network. Then, we identify redundant parts and remove them. Usually, network pruning deteriorates the model's performance, so we need to fine-tune our network. If we determine redundant parts correctly, the performance will not deteriorate.
What happens if we consider a smaller network from scratch (instead of removing redundant parts of a large network)?
Relevant answer
Answer
Thank you so much for your response.
Suppose I have a model which has achieved an accuracy of 80%. Then I prune a fourth of this network and fine-tune it, and the pruned model achieves an accuracy of 79%. Now, my question is: if I initialize my pruned network randomly and train it from scratch, will my model achieve 79% again?
  • asked a question related to Neural Networks
Question
8 answers
Three papers to start learning about the science behind graph neural networks and how they work. These papers will be easy to read if you are familiar with, though not necessarily expert in, neural networks and machine learning. There are many more papers on graph neural networks. Feel free to share more with your network in a message on this post. I will do the same. A Comprehensive Survey on Graph Neural Networks ( ) Kernel Graph Convolutional Neural Networks ( ) Geom-GCN: Geometric Graph Convolutional Networks ( )
Relevant answer
Answer
Thanks to dear Dr. Madani
Perhaps these two articles are also helpful
  • asked a question related to Neural Networks
Question
6 answers
Hello,
I've pruned my CNN layer-by-layer in two steps.
First, I removed a fourth of the filters of some selected layers, which led to a performance degradation of around 1%. Then, I selected the two layers that had the highest number of filters and caused the least performance deterioration, and removed half of their filters. The second model performed even better than the original model. What is the reason?
Relevant answer
Answer
First of all, you need to quantify the uncertainty of your prediction and find out whether the improvement is meaningful at all or just a result of stochastic processes inside the model. A metric is not a single point estimate; it has a distribution, and it may vary widely depending on the task, model, data split, hyperparameters, etc.
A great article on uncertainty in ML here: https://arxiv.org/abs/2103.03098
Pruning should not improve the performance, so my bet is that your result lies inside the distribution of possible model accuracies.
  • asked a question related to Neural Networks
Question
11 answers
Dear Researchers.
These days, machine learning applications in cancer detection have increased with the development of new image processing and deep learning methods. In this regard, what is your idea of a new image processing and deep learning method for cancer detection?
Thank you in advance for participating in this discussion.
Relevant answer
Answer
I am assuming your data are images, since you mentioned image processing, and thus deep CNN models are the state of the art and can produce good results if and only if you have a great amount of training data. If your dataset is small, then just go ahead with regular neural networks like multi-layer perceptrons (MLPs) with one or at most two hidden layers. Now, if your data is just tabular data (a CSV file), then I don't recommend using neural networks like CNNs or MLPs at all. You can simply use traditional machine learning algorithms like random forest, support vector machine, or k-nearest neighbors.
  • asked a question related to Neural Networks
Question
5 answers
I'm working on my thesis, and I need to use a PID controller tuned with a neural network. I am facing difficulty tuning it.
  • asked a question related to Neural Networks
Question
8 answers
How can data collected using IoT (big data) be connected to a neural network? Example: a patient's pressure is measured for 24 hours using a portable Holter monitor. The data is transmitted via a sensor to a server as big data. How do I transfer the data to the neural network?
Relevant answer
Answer
Thx a lot Faraed
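One common pattern (an assumption about the setup, not a prescription) is to buffer the streamed sensor readings on the server into fixed-length windows before feeding them to the network; a minimal Python sketch with a made-up helper name:

```python
def windows(readings, size, step):
    # slice a stream of readings into overlapping fixed-length windows
    return [readings[i:i + size]
            for i in range(0, len(readings) - size + 1, step)]

print(windows([1, 2, 3, 4, 5, 6], size=4, step=2))
# [[1, 2, 3, 4], [3, 4, 5, 6]]
```

Each window then becomes one training or inference sample for the network.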
  • asked a question related to Neural Networks
Question
4 answers
While running a neural network analysis in R using the "neuralnet" package, I get the message "Error in err.deriv2[, v] : incorrect number of dimensions" when I use the command "confidence.interval(nn)" in the R script.
Please help with this matter.
Relevant answer
Answer
Atanu Panja how is this "research methodology"? You asked a question about code, this is not the appropriate platform, and to be fair it goes against RG guidelines
  • asked a question related to Neural Networks
Question
3 answers
I am trying to read water meter values through OCR; my first step is to find the ROI. I found a dataset on Kaggle with labelled data for the ROI, but the labels are not rectangles; they are polygons, some with 5 points and some with 8, depending on the image. How do I convert this to YOLO format?
For example: file name | value | coordinates
id_53_value_595_825.jpg 595.825 {'type': 'polygon', 'data': [{'x': 0.30788, 'y': 0.30207}, {'x': 0.30676, 'y': 0.32731}, {'x': 0.53501, 'y': 0.33068}, {'x': 0.53445, 'y': 0.33699}, {'x': 0.56529, 'y': 0.33741}, {'x': 0.56697, 'y': 0.29786}, {'x': 0.53501, 'y': 0.29786}, {'x': 0.53445, 'y': 0.30417}]}
id_553_value_65_475.jpg 65.475 {'type': 'polygon', 'data': [{'x': 0.26133, 'y': 0.24071}, {'x': 0.31405, 'y': 0.23473}, {'x': 0.31741, 'y': 0.26688}, {'x': 0.30676, 'y': 0.26763}, {'x': 0.33985, 'y': 0.60851}, {'x': 0.29386, 'y': 0.61449}]}
id_407_value_21_86.jpg 21.86 {'type': 'polygon', 'data': [{'x': 0.27545, 'y': 0.19134}, {'x': 0.37483, 'y': 0.18282}, {'x': 0.38935, 'y': 0.76071}, {'x': 0.28185, 'y': 0.76613}]}
Relevant answer
Answer
Muhammad Ali Thank you.
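One straightforward option is to take the axis-aligned bounding box of each polygon and emit it in YOLO's normalized center/width/height form; a sketch with a made-up helper name, assuming the polygon coordinates are already normalized to [0, 1] as in the samples above:

```python
def polygon_to_yolo(points):
    # points: list of {'x': ..., 'y': ...} dicts, coordinates in [0, 1]
    xs = [p['x'] for p in points]
    ys = [p['y'] for p in points]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    # YOLO box: x_center, y_center, width, height (all still normalized)
    return ((x_min + x_max) / 2, (y_min + y_max) / 2,
            x_max - x_min, y_max - y_min)

box = polygon_to_yolo([{'x': 0.2, 'y': 0.3}, {'x': 0.4, 'y': 0.3},
                       {'x': 0.4, 'y': 0.7}, {'x': 0.2, 'y': 0.7}])
# roughly (0.3, 0.5, 0.2, 0.4)
```

Note this discards the polygon's exact shape; for slanted meters a rotated-box or segmentation label would preserve more.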
  • asked a question related to Neural Networks
Question
7 answers
I am working on neural data processing.
Relevant answer
Answer
Any progress? I have similar questions. I want to map two vectors that have a nonlinear relationship. Elements of vector 1 can be represented by vector 2, and I want to map vector 1 to vector 2. If I solve it analytically, there will be infinitely many solutions.
  • asked a question related to Neural Networks
Question
15 answers
Big-data-based artificial intelligence (AI) supports profound evolution in almost all of science and technology. However, modeling and forecasting multi-physical systems remain a challenge due to unavoidable data scarcity and noise. Improving the generalization ability of neural networks by "teaching" domain knowledge and developing a new generation of models combined with the physical laws have become promising areas of machine learning research. It is hoped that the related progress and challenging issues of the Physics-Informed Neural Networks can be discussed below.
Relevant answer
Answer
It is a very interesting topic, although we see that most recent physics-informed neural networks only investigate analytical equations with low degrees of nonlinearity. For some areas such as mine, i.e. gas dispersion and explosion, which require very complex Navier-Stokes equations to capture the physics, the integration of the two is very complex and difficult. This is a tough but interesting issue. By the way, we recently submitted a work using prior physical knowledge to constrain the loss function, which significantly improves the NN's performance. In addition, we have constructed a PINN in which the physical equations behave like a supervisor or teacher while the NN behaves like a student. We used a domain adaptation technique to let the 'supervisor' iteratively teach the student. This algorithm significantly improves the NN's performance with a very limited training dataset. We are investigating the interpretability of this algorithm. We believe PINNs will play a significant role in inference speed and generalization improvement when there is not enough data in a new region. Thanks.
  • asked a question related to Neural Networks
Question
4 answers
I want to build an LSTM neural network that predicts the next values of packets over the MODBUS protocol. I have converted pcap files to decimal numbers, but I don't know what to do next. Any opinions?
Relevant answer
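Once the packets are decimal sequences, the usual next step is to frame them as supervised (window, next value) pairs for the LSTM; a minimal Python sketch with a made-up helper name:

```python
def to_supervised(series, lag):
    # build (input window, next value) pairs for next-step prediction
    X = [series[i:i + lag] for i in range(len(series) - lag)]
    y = [series[i + lag] for i in range(len(series) - lag)]
    return X, y

X, y = to_supervised([1, 2, 3, 4, 5], lag=2)
print(X)  # [[1, 2], [2, 3], [3, 4]]
print(y)  # [3, 4, 5]
```

After scaling, X reshaped to (samples, timesteps, features) is the shape an LSTM layer expects.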
  • asked a question related to Neural Networks
Question
4 answers
Can anyone provide me with PSO MATLAB code to optimize the weights of multiple types of neural networks?
Relevant answer
Answer
Dear Murana Awad,
Application of PSO-BP Neural Network in GPS Height Fitting
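Not MATLAB, but for reference the core PSO loop is short enough to sketch; a minimal Python version minimizing a toy objective (to train network weights, f would instead evaluate the network's loss for a flattened weight vector; all parameter values here are illustrative defaults):

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0):
    # minimal particle swarm optimizer (minimization)
    rng = random.Random(0)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, val = pso(lambda p: sum(x * x for x in p), dim=3)
```

The same loop translates almost line for line into MATLAB.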
  • asked a question related to Neural Networks
Question
4 answers
Hello guys!
Can anyone tell me how I can ensemble neural networks? I use the patternnet type. If someone knows, please help me. I am writing my code in MATLAB. Can anyone please help with my code?
Hope to get a reply from you guys.
Thank you
Relevant answer
Answer
Dear Chandrima Debnath, please consider these links
  • asked a question related to Neural Networks
Question
2 answers
Here I am attaching MATLAB code written to calculate the mean squared error and plot the regression.
The code works well in the single-output ANN file and gives correct values.
The code in the multi-output ANN file does not give correct values (2 outputs need to be predicted). Could you please help correct the code?
The mean squared error and the regression plot need to be corrected.
Please help with calculating the mean squared error in an ANN with 2 outputs and the correct regression plot.
Please see the images below.
Relevant answer
Answer
Manikanta Reddy Neural networks are intricate models that attempt to replicate how the human brain produces categorization rules. A neural network is made up of many distinct layers of neurons, with each layer taking input from previous layers and transferring output to subsequent layers.
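Language aside, the two-output MSE is just a per-column mean of squared errors, optionally pooled; a Python sketch with made-up numbers showing what the corrected MATLAB code should compute:

```python
y_true = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # 3 samples, 2 outputs each
y_pred = [[1.1, 1.9], [2.8, 4.2], [5.0, 5.5]]

n, m = len(y_true), len(y_true[0])
# one MSE per output column, then the pooled MSE over both outputs
mse_per_output = [sum((y_true[i][j] - y_pred[i][j]) ** 2 for i in range(n)) / n
                  for j in range(m)]
mse_overall = sum(mse_per_output) / m
print([round(v, 6) for v in mse_per_output])  # [0.016667, 0.1]
print(round(mse_overall, 6))                  # 0.058333
```

Plotting predicted vs. actual separately for each output column gives one regression plot per output.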
  • asked a question related to Neural Networks
Question
5 answers
hello!
Can anyone tell me how I can integrate a neural network and a genetic algorithm? I have a simple real-coded genetic algorithm in which I need to use the neural network. Please help me with this. My code is in MATLAB; it would help if someone could provide the code.
Relevant answer
Answer
Look the link, maybe useful.
Regards,
Shafagat
  • asked a question related to Neural Networks
Question
7 answers
I didn't understand the exact working algorithm of the TensorFlow Lattice layer. Can anybody please help me understand the concept behind TensorFlow Lattice with an example?
Thank you
  • asked a question related to Neural Networks
Question
4 answers
How to use Convolutional Neural Networks on Google Earth Engine?
Relevant answer
Answer
It can be used via the GEE Python API.
  • asked a question related to Neural Networks
Question
7 answers
Hello guys!
I am working on neural networks to build an ensemble, using a loop to adjust different parameters, but I want to save the performance of every run. How can I do this in MATLAB? Please help.
Relevant answer
Answer
You can store the trained network objects in an array, and use that array after training.
  • asked a question related to Neural Networks
Question
7 answers
What's the best way of encoding human psychological parameters into a numerical estimate which I can pass to a neural network? For example, I have measured anxiety, but because anxiety is quite relative and carries some uncertainty, there won't be a one-to-one correspondence to a numerical quantity. I was looking into Z-numbers for this but am really not sure whether they could be used for training. I intend to use many such psychological parameters, like a measure of confidence. How should I handle these parameters?
Relevant answer
Answer
It seems that what you are looking for has been known since 1932 as a "Likert scale", which has been used in hundreds of thousands of psychometric studies, including thousands of machine-learning studies. A Google Scholar search for "likert scale" provides 643,000 results; "likert scale" AND anxiety: 572,000 results; "likert scale" AND "machine learning": 28,700 results; "likert scale" AND "artificial neural networks": 3,810 results; "likert scale" AND "machine learning" AND anxiety: 8,040 results; "likert scale" AND "artificial neural networks" AND anxiety: 788 results. Thus, you should find a lot of relevant answers and valuable information by performing a good literature search.
  • asked a question related to Neural Networks
Question
4 answers
Does anybody have RBF, GRNN, and MLP Neural Network Matlab code?
I have five input variables and two output variables, I want to predict the data by these neural networks and compare the result. please share it.
Relevant answer
Answer
You can try searching on GitHub, for example https://github.com/search?l=MATLAB&q=grnn&type=Repositories
  • asked a question related to Neural Networks
Question
5 answers
I was reading a chapter from Fundamental Concepts of Convolutional Neural Networks and could not get through the explanation of how weight sharing between layers works in Convolutional Neural Networks.
Relevant answer
Answer
Dear Nitesh,
The answer and photo shared by Mansurbek Urol ugli Abdullaev are quite explanatory. A CNN is comprised of multiple filters. The filters are feature detectors: each filter highlights a specific feature present in the image, such as edges, vertical lines, or horizontal lines. These filters are usually 2x2 or 3x3 matrices of weights that move across the image (the layer input) in a sliding-window pattern.
Weight sharing refers to using fixed weights for each filter across the entire input.
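Weight sharing is easiest to see in one dimension: the identical kernel (one shared set of weights) is applied at every position of the input. A minimal Python sketch:

```python
def conv1d(signal, kernel):
    # the same kernel slides across the signal: these are the shared weights
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

print(conv1d([1, 2, 3, 4], [1, 0, -1]))  # [-2, -2]
```

A 2D convolution does the same with a 2x2 or 3x3 weight matrix sliding over the image; only those few weights are learned, no matter how large the input is.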
  • asked a question related to Neural Networks
Question
5 answers
Can any one provide me with MATLAB codes for fuzzy neural network?
Relevant answer
Answer
Another suggestion is installing this chrome extension. It will help you locate AI/ML codes in papers while searching.
  • asked a question related to Neural Networks
Question
3 answers
Dear collegues,
I am trying to train a neural network. I normalized the data with the minimum and maximum:
normalize <- function(x) {
return ((x - min(x)) / (max(x) - min(x)))
}
maxmindf <- as.data.frame(lapply(mydata, normalize))
and the results:
results <- data.frame(actual = testset$prov, prediction = nn.results$net.result).
So I can see the actual and predicted values only in normalized form.
Could you please tell me how to scale the real and predicted data back into the "unscaled" range?
P.s.
minvec <- sapply(mydata,min)
maxvec <- sapply(mydata,max)
denormalize <- function(x,minval,maxval) {
x*(maxval-minval) + minval
}
doesn't work correctly in my case.
Thanks a lot for your answers
Relevant answer
Answer
It actually works (but you have to consider rounding errors):
normalize <- function(x, min, max) (x-min)/(max-min)
denormalize <- function(x, min, max) x*(max-min)+min
x <- rnorm(1000)
r <- range(x)
nx <- normalize(x, r[1], r[2])
dnx <- denormalize(nx, r[1], r[2])
all(x == dnx)
# FALSE ---> rounding errors
all(abs(x - dnx) < 1E-8)
# TRUE ---> identical up to tiny rounding errors
  • asked a question related to Neural Networks
Question
3 answers
What is the computational complexity of running a Deep Convolutional Neural Network (DCNN) for 500 epochs?
Relevant answer
Answer
I would recommend to read this article:-
  • asked a question related to Neural Networks
Question
3 answers
I want to conduct a Bangla sentiment analysis study on a dataset that contains approximately 5000 samples for training and testing. Primarily, I plan to use Support Vector Machine (SVM), Naive Bayes, and Neural Network classifiers, and may use some other classifiers in the future depending on performance. My question is: what is the best way to organize my dataset so that I can perform all the operations with ease?
Regards
Relevant answer
Answer
use this data set for NLP operations and then sentiment analysis:
  • asked a question related to Neural Networks
Question
17 answers
According to which article, study, or reference is 70% of the dataset usually used for training and 30% for testing in the learning process of neural networks?
In other words, who first raised this issue, and in which study or book was it explained in detail?
I desperately need a reference to the above.
Relevant answer
Answer
I believe the goal here is to prevent overfitting your model. As suggested by other researchers, this is not a fixed value; in fact, in my case I normally use 20% for testing and 80% for training.
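The split itself is one line; a minimal Python sketch where the 70/30 (or 80/20) ratio is just a parameter:

```python
def split(data, train_frac=0.7):
    # the 70/30 ratio is a convention, not a theorem; adjust it to your data size
    n = round(len(data) * train_frac)
    return data[:n], data[n:]

train, test = split(list(range(10)))
print(len(train), len(test))  # 7 3
```

Shuffle the data before splitting if the samples are ordered.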
  • asked a question related to Neural Networks
Question
32 answers
One machine learning algorithm, the neural network, is an imitation of the human brain. As with other machine learning algorithms, the model may end up overfitting or underfitting the data: in the first case it generalizes poorly, while in the second case its predictions are inaccurate. Investigating these problems is not difficult for a neural network.
My question is: what is the mechanism for investigating whether human brain learning is underfitting or overfitting?
Relevant answer
Answer
Dear Tamirat,
The overfitting happens in our brain when we memorize something, we know it very well but if it changes a bit, we won't be able to figure it out then.
In the underfitting case, we didn't investigate the concept very well yet and we can not have an opinion about it. It is like we want to understand something but we do not try to deeply learn about it, so the level of our understanding would be shallow.
In conclusion, I think the answer is it depends on the learning process, whether we memorize (overfit), fully understand every aspect of it (fit) and have the ability to extract the right patterns of the concept or have a shallow understanding of it (underfit).
  • asked a question related to Neural Networks
Question
4 answers
Who has used one-dimensional convolutional neural network (CNN)?
Relevant answer
Answer
A convolution layer accepts a multichannel one-dimensional signal, convolves it with each of its multichannel kernels, and stacks the results together into a new multichannel signal that it passes on to the next layer. Deep learning methods such as recurrent neural networks and one-dimensional convolutional neural networks (CNNs) have been shown to provide state-of-the-art results on challenging activity recognition tasks with little or no feature engineering, instead using feature learning on raw data.
  • asked a question related to Neural Networks
Question
1 answer
Dear collegues.
I would like to ask anybody who works with neural networks to check my loop for the test sample.
I have 4 sequences (with the goal of predicting prov; monthly data, 22 points in each sequence) and I would like to construct the forecast for each next month using a training sample of 5 months.
That means I need to shift by one month each time, with 5 elements:
train<-1:5, train<-2:6, train<-3:7, ..., train<-17:21. So I need to get 17 columns as the output.
The loop is:
shift <- 4
number_forecasts <- 1
d <- nrow(maxmindf)
k <- number_forecasts
for (i in 1:(d - shift + 1))
{
The code:
require(quantmod)
require(nnet)
require(caret)
prov=c(25,22,47,70,59,49,29,40,49,2,6,50,84,33,25,67,89,3,4,7,8,2)
temp=c(22,23,23,23,25,29,20,27,22,23,23,23,25,29,20,27,20,30,35,50,52,20)
soil=c(676,589,536,499,429,368,370,387,400,423,676,589,536,499,429,368,370,387,400,423,600,605)
rain=c(7,8,2,8,6,5,4,9,7,8,2,8,6,5,4,9,5,6,9,2,3,4)
df=data.frame(prov,temp,soil,rain)
mydata<-df
attach(mydata)
mi<-mydata
scaleddata<-scale(mi$prov)
normalize <- function(x) {
return ((x - min(x)) / (max(x) - min(x)))
}
maxmindf <- as.data.frame(lapply(mydata, normalize))
go<-maxmindf
forecasts <- NULL
forecasts$prov <- 1:22
forecasts$predictions <- NA
forecasts <- data.frame(forecasts)
# Training and Test Data
trainset <- maxmindf()
testset <- maxmindf()
#Neural Network
library(neuralnet)
nn <- neuralnet(prov~temp+soil+rain, data=trainset, hidden=c(3,2), linear.output=FALSE, threshold=0.01)
nn$result.matrix
plot(nn)
#Test the resulting output
#Test the resulting output
temp_test <- subset(testset, select = c("temp","soil", "rain"))
head(temp_test)
nn.results <- compute(nn, temp_test)
results <- data.frame(actual = testset$prov, prediction = nn.results$net.result)
}
minval<-min(x)
maxval<-max(x)
minvec <- sapply(mydata,min)
maxvec <- sapply(mydata,max)
denormalize <- function(x,minval,maxval) {
x*(maxval-minval) + minval
}
as.data.frame(Map(denormalize,results,minvec,maxvec))
Could you please tell me what I can add in trainset and testset (using the loop), and how to display all predictions using a loop so that the results are shifted by one with a test sample of 5?
I am very grateful for your answers
  • asked a question related to Neural Networks
Question
11 answers
Hello everyone,
Could you recommend papers, books or websites about unsupervised neural networks?
Thank you for your attention and valuable support.
Regards,
Cecilia-Irene Loeza-Mejía
  • asked a question related to Neural Networks
Question
4 answers
The following error occurred while training the deep learning CNN model on the dataset. The dataset consists of MRI images of prostate cancer in men with the .dcm extension. I tried training the model with the .dcm extension as well as after converting to .jpg, but the error remains the same. Could anyone help me solve this error?
Code for reference:
import os
import cv2
import numpy as np
import tensorflow as tf
import pydicom as dicom

PNG = False
folder_path = "/tmp/Prostate_dataset/Prostate_dataset-1"
jpg_folder_path = "/tmp/prostate_jpg"
images_path = os.listdir(folder_path)
for n, image in enumerate(images_path):
    ds = dicom.dcmread(os.path.join(folder_path, image))
    pixel_array_numpy = ds.pixel_array
    if PNG == False:
        image = image.replace('.dcm', '.jpg')
    else:
        image = image.replace('.dcm', '.png')
    cv2.imwrite(os.path.join(jpg_folder_path, image), pixel_array_numpy)
    if n % 50 == 0:
        print('{} image converted'.format(n))

def get_data(path):
    data = []
    for img in os.listdir(path):
        img_arr = os.path.join(path, img)
        # NOTE: cv2.imread returns 3 channels by default; use
        # cv2.imread(img_arr, cv2.IMREAD_GRAYSCALE) for the (256, 256, 1) reshape below
        data_img = cv2.imread(img_arr)
        data.append(data_img)
    return np.array(data)

train = get_data(jpg_folder_path)
val = get_data(jpg_folder_path)  # NOTE: same folder as train -- no separate validation set

# Normalize the data
train = np.asarray(train)
train = train.reshape(-1, 256, 256, 1)
train = train.astype('float64')
train /= 255.0
val = np.asarray(val)
val = val.reshape(-1, 256, 256, 1)
val = val.astype('float64')
val /= 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(128, (5, 5), activation='relu', input_shape=(256, 256, 1)),
    tf.keras.layers.BatchNormalization(axis=-1),
    tf.keras.layers.Dropout(.5, noise_shape=None, seed=None),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (5, 5), activation='relu'),
    tf.keras.layers.BatchNormalization(axis=1),
    tf.keras.layers.Dropout(.5, noise_shape=None, seed=None),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(16, (5, 5), activation='relu'),
    tf.keras.layers.BatchNormalization(axis=1),
    tf.keras.layers.Dropout(.5, noise_shape=None, seed=None),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(8, (5, 5), activation='relu'),
    tf.keras.layers.BatchNormalization(axis=1),
    tf.keras.layers.Dropout(.5, noise_shape=None, seed=None),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(256, activation='softmax'),
])
model.summary()
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# NOTE: X_train, y_train, X_test, y_test are never defined above
model.fit(X_train, y_train, epochs=10, verbose=1, validation_data=(X_test, y_test))
test_loss = model.evaluate(X_test, y_test)
Error:
ValueError: in user code: /usr/local/lib/python3.7/dist-packages/keras/engine/training.py:853 train_function * return step_function(self, iterator) /usr/local/lib/python3.7/dist-packages/keras/engine/training.py:842 step_function ** outputs = model.distribute_strategy.run(run_step, args=(data,)) /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:1286 run return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:2849 call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:3632 _call_for_each_replica return fn(*args, **kwargs) /usr/local/lib/python3.7/dist-packages/keras/engine/training.py:835 run_step ** outputs = model.train_step(data) /usr/local/lib/python3.7/dist-packages/keras/engine/training.py:789 train_step y, y_pred, sample_weight, regularization_losses=self.losses) /usr/local/lib/python3.7/dist-packages/keras/engine/compile_utils.py:201 __call__ loss_value = loss_obj(y_t, y_p, sample_weight=sw) /usr/local/lib/python3.7/dist-packages/keras/losses.py:141 __call__ losses = call_fn(y_true, y_pred) /usr/local/lib/python3.7/dist-packages/keras/losses.py:245 call ** return ag_fn(y_true, y_pred, **self._fn_kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:206 wrapper return target(*args, **kwargs) /usr/local/lib/python3.7/dist-packages/keras/losses.py:1666 categorical_crossentropy y_true, y_pred, from_logits=from_logits, axis=axis) /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:206 wrapper return target(*args, **kwargs) /usr/local/lib/python3.7/dist-packages/keras/backend.py:4839 categorical_crossentropy target.shape.assert_is_compatible_with(output.shape) /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/tensor_shape.py:1161 assert_is_compatible_with 
raise ValueError("Shapes %s and %s are incompatible" % (self, other))
ValueError: Shapes (None, 256, 256, 1) and (None, 256) are incompatible
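For what it's worth, the ValueError says the loss is comparing image-shaped targets, (None, 256, 256, 1), against the network's (None, 256) output: the images themselves appear to be passed as `y`. With `categorical_crossentropy`, the targets must be one-hot class vectors whose length matches the final Dense layer. A minimal numpy sketch of the required label shape (the class count and labels are assumptions, not from the dataset):

```python
import numpy as np

num_classes = 2                          # hypothetical: two diagnostic classes
labels = np.array([0, 1, 0, 1, 0, 1, 0, 1])

# One-hot encode: y must have shape (batch, num_classes) to match a
# Dense(num_classes, activation='softmax') output under categorical_crossentropy
y = np.eye(num_classes)[labels]
print(y.shape)  # (8, 2)
```

In Keras the same encoding is `tf.keras.utils.to_categorical(labels, num_classes)`, and the final layer would then be `Dense(num_classes, ...)` rather than `Dense(256, ...)`.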
Relevant answer
Answer
I think there might be a problem with the Python version you are using; try Python 2 instead of Python 3.
In addition, have a look here also:
  • asked a question related to Neural Networks
Question
7 answers
Hello
Dear researchers,
Which neural network (ResNet or UNet) would you suggest if the model's input is image-based data and its purpose is classification? Why?
If the goal is segmentation with image-based data, which deep neural network (ResNet or UNet) would you suggest for the predictive model? Why?
My focus is on just these two deep neural networks, ResNet and UNet.
Thank you for your guidance
Relevant answer
Answer
Thanks for asking this good question. In my case study, UNet performed better than VGG16, VGG19 and ResNet. UNet is essentially "a convolutional neural network architecture that expanded with few changes in the CNN architecture".
  • asked a question related to Neural Networks
Question
13 answers
I'm researching autoencoders and their applications to machine learning problems, but I have a fundamental question.
As we all know, there are various types of autoencoders, such as the Stacked Autoencoder, Sparse Autoencoder, Denoising Autoencoder, Adversarial Autoencoder, Convolutional Autoencoder, Semi-Autoencoder, Dual Autoencoder, and Contractive Autoencoder, among others that improve on what came before. Autoencoders are also known to be used in Graph Networks (GN), Recommender Systems (RS), Natural Language Processing (NLP), and Computer Vision (CV). My main concern is this:
Because the input and structure of each of these machine learning problems differ, which version of the autoencoder is appropriate for which machine learning problem?
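As a concrete anchor for the discussion: a vanilla autoencoder is just an encoder-decoder pair trained to reconstruct its input, and the specialized variants listed above mainly change the regularizer, the noise model, or the layer types (convolutional for images, recurrent for sequences, graph layers for GNs). A minimal linear sketch in plain numpy, on toy data with assumed dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))          # toy data: 200 samples, 8 features (assumption)

# Linear autoencoder: encode 8 -> 3, decode 3 -> 8, trained by gradient descent
W_enc = rng.normal(scale=0.1, size=(8, 3))
W_dec = rng.normal(scale=0.1, size=(3, 8))
lr = 0.01
losses = []
for _ in range(200):
    Z = X @ W_enc                      # bottleneck code
    X_hat = Z @ W_dec                  # reconstruction
    err = X_hat - X
    losses.append(float((err ** 2).mean()))
    # Gradients of the squared reconstruction error
    g_dec = Z.T @ err / len(X)
    g_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

print(losses[0] > losses[-1])  # reconstruction error decreases during training
```

A denoising autoencoder would corrupt `X` before encoding, a sparse one would penalize `Z`, and a convolutional one would replace the matrix products with conv layers; the training loop is the same shape in every case.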
Relevant answer
Answer
Look at the link; it may be useful.
Regards,
Shafagat
  • asked a question related to Neural Networks
Question
1 answer
Dear colleagues, I've tried to construct a recurrent neural network using a learning sample of size 25, and I would like to get 178 columns in the output as a result (there are 25 columns and 178 lines in the learning sample), but I can use predict only for a single item:
pred<-predict(fit,inputs[-train[106,]]), so I need to change the numbers in train to get a column with the forecast.
library(RSNNS)    # elman()
library(quantmod) # Lag()
sslog<-as.ts(read.csv("k.csv"))
mi<-sslog
shift <- 25
S <- c()
for (i in 1:(length(mi)-shift+1))
{
s <- mi[i:(i+shift-1)]
S <- rbind(S,s)
}
train<-S
y<-as.data.frame(S, row.names=FALSE)
x1<-Lag(y,k=1)
x2<-Lag(y,k=2)
x3<-Lag(y,k=3)
x4<-Lag(y,k=4)
x5<-Lag(y,k=5)
x6<-Lag(y,k=6)
x7<-Lag(y,k=7)
x8<-Lag(y,k=8)
x9<-Lag(y,k=9)
x10<-Lag(y,k=10)
x11<-Lag(y,k=11)
x12<-Lag(y,k=12)
slog<-cbind(y,x1,x2,x3,x4,x5,x6,x7,x8,x9,x10,x11,x12)
slog<-slog[-(1:12),]
inputs<-slog[,2:13]
outputs<-slog[,1]
fit<-elman(inputs[train[106,]],
outputs[train[106,]],
size=c(3,2),
learnFuncParams=c(0.2),
maxit=40)
#plotIterativeError(fit)
y<-as.vector(outputs[-train[106,]])
#plot(y,type="l")
pred<-predict(fit,inputs[-train[106,]])
a<-pred
print (a)
df <- data.frame(a)
Could you tell me, please, how it is possible to construct the data frame and get all 178 values in the output as a result, not only a single column?
Thanks a lot for your help
Relevant answer
Answer
Try looking at the attached screenshot for a trick, but you don't have much to predict from. David Booth
  • asked a question related to Neural Networks
Question
8 answers
I have implemented a deep feed-forward neural network, i.e. a deep neural network with an activation function. What is the advantage of implementing a deep normalizing neural network over a deep feed-forward neural network?
  • asked a question related to Neural Networks
Question
8 answers
In the data preparation phase, we divide the dataset into two parts: (1) a training part and (2) a testing/validation part. There are plenty of resources on the computational complexity of DL models during the training phase, but I couldn't find any source that discusses complexity during testing/validation after the training process is finished.
Is there any good source for that?
Thanks in advance.
Relevant answer
Answer
Yes, you are right. This is one of the main beauties of DL algorithms: the complexity lies in training on the data, while once the model is trained efficiently and accurately, the testing/validation phase on new data is much simpler and much quicker.
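To make the asymmetry concrete: one inference pass through a fully connected network costs roughly one multiply-accumulate (MAC) per weight, while a training step adds a backward pass of the same order plus optimizer updates, typically around 2-3x the forward cost, repeated over many epochs. A small sketch counting forward-pass MACs per sample for an MLP (the layer sizes are hypothetical):

```python
def forward_macs(layer_sizes):
    """Multiply-accumulates for one forward pass through a dense MLP."""
    return sum(n_in * n_out for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# e.g. a hypothetical 20 -> 64 -> 32 -> 3 network
macs = forward_macs([20, 64, 32, 3])
print(macs)  # 3424 MACs per sample at inference time
```

Training cost is then on the order of `macs * 2-3 * n_samples * n_epochs`, whereas validation/testing is a single `macs * n_samples` sweep, which is why papers rarely discuss it separately.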
  • asked a question related to Neural Networks
Question
3 answers
Dear colleagues,
Could you tell me, please, whether anybody can provide working code for a recurrent neural network on real data (using a training set) in R?
For example, with keras or rnn.
Thanks a lot for your help
  • asked a question related to Neural Networks
Question
4 answers
Can anyone provide me with MATLAB code for Particle Swarm Optimization to train ANFIS?
Relevant answer
  • asked a question related to Neural Networks
Question
6 answers
Does anyone know the main reasons why a neural network classifies certain classes correctly and not the others? For example, say we have the MNIST dataset: my network will classify images from 0 to 4, and the remaining classes will be misclassified. On another execution it will choose other classes, and the rest will be misclassified.
Info: I have balanced datasets. I tried regularization, different learning rates, and changing the number of neurons in the hidden layer.
My network is an Elman network for time-series classification problems with a single hidden layer.
Relevant answer
Answer
Dear Dr Mohammed Elmahdi Khennour . Neural networks help us cluster and classify. You can think of them as a clustering and classification layer on top of the data you store and manage. They help to group unlabeled data according to similarities among the example inputs, and they classify data when they have a labeled dataset to train on. See the link: https://wiki.pathmind.com/neural-network
  • asked a question related to Neural Networks
Question
4 answers
One famous method in machine learning is the Perceptron Rule. In the following book and its R perceptron function, some of the problems do not work correctly. I have developed R code that works with several examples and exercises.
# R Deep Learning Project, Y. Liu, P. Maldonado
One simple problem, shown in this picture, is to separate points into two groups with a correct decision boundary line.
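For reference, the perceptron rule itself is only a few lines: on each misclassified point, nudge the weight vector toward the correct side. A minimal numpy sketch on a hypothetical linearly separable 2-D problem (the points are made up for illustration):

```python
import numpy as np

def perceptron_train(X, y, lr=0.1, epochs=100):
    """Perceptron rule: w <- w + lr * y_i * x_i on each mistake (y in {-1, +1})."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(Xb, y):
            if yi * (xi @ w) <= 0:              # misclassified (or on the boundary)
                w += lr * yi * xi
                mistakes += 1
        if mistakes == 0:                       # converged: the data are separated
            break
    return w

# Two linearly separable point clouds
X = np.array([[2.0, 2.0], [3.0, 3.0], [2.5, 3.5],
              [-2.0, -2.0], [-3.0, -1.0], [-2.5, -3.0]])
y = np.array([1, 1, 1, -1, -1, -1])
w = perceptron_train(X, y)
preds = np.sign(np.hstack([X, np.ones((6, 1))]) @ w)
print((preds == y).all())  # the learned boundary separates the two groups
```

For linearly separable data the perceptron convergence theorem guarantees the loop reaches zero mistakes after finitely many updates; for non-separable data it never terminates on its own, which is usually where the textbook implementations break.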
Relevant answer
Answer
I have published the code for my new neural network approach on GitHub, under mirkhan64. Thanks
  • asked a question related to Neural Networks
Question
4 answers
I am looking into developing methods where I can use the general attention mechanism with traditional methods such an SVM, random forest, etc and not use it with a neural network.
Relevant answer
Answer
The selection of models should depend on your data. If you have a very strong dataset (i.e. tens of thousands of rows), NNs may be the best option. If your data are not very large but show a clear pattern, structural equation models might be good (simple and clear). If your data are small and messy, random forest can still explain something and has proved very useful.
  • asked a question related to Neural Networks
Question
2 answers
In speech-based emotion recognition, I get almost 90 percent accuracy for individual artists using a neural network, but accuracy decreases to about 70 percent over the entire speech corpus.
How can I increase accuracy?
My speech database contains 10 artists.
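One likely cause of the drop is speaker dependence: when clips from all 10 artists are mixed into both the train and test splits, the model can partly memorize voices rather than emotions. Evaluating speaker-independently, with each artist's clips entirely in train or entirely in test, gives an honest corpus-level number and highlights the gap. A sketch using sklearn's GroupKFold on synthetic features (the feature shapes, label count, and clips-per-artist are assumptions):

```python
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
n_clips = 100
X = rng.normal(size=(n_clips, 13))          # e.g. 13 MFCC features per clip (assumption)
y = rng.integers(0, 4, size=n_clips)        # 4 hypothetical emotion labels
artists = np.repeat(np.arange(10), 10)      # 10 clips per artist

gkf = GroupKFold(n_splits=5)
for train_idx, test_idx in gkf.split(X, y, groups=artists):
    # No artist appears on both sides of the split
    assert set(artists[train_idx]).isdisjoint(artists[test_idx])
    # ... train and evaluate the emotion classifier on this fold ...
```

If speaker-independent accuracy is the number that drops to 70 percent, the usual remedies are speaker normalization of the features (e.g. per-speaker mean/variance scaling) and more speakers rather than more clips per speaker.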
Relevant answer
Answer
Interesting topic.
  • asked a question related to Neural Networks
Question
3 answers
Hello Fellow Researcher,
I would like some guidance on finding quality journals and books for understanding recent developments in pattern recognition with neural networks, from both algorithmic and mathematical standpoints.
Thanks
Relevant answer
Answer
Go to ACM Computing Surveys and find surveys or systematic literature reviews related to your work. You will definitely get some useful insights.
  • asked a question related to Neural Networks
Question
4 answers
I have more than 100 fully connected networks whose edges are undirected and weighted. How can I quantify and classify those networks based on their node and edge attributes? Each network represents one patient, and we hope to classify the networks without supervision.
This question concerns graph isomorphism or graph classification, for binary-classifying healthy and diseased patients. Our main focus is on edge attributes. Any related survey or algorithm integrating edge attributes would be helpful. Thank you for your help.
Relevant answer
Answer
I guess it's worth calculating the eigenvectors for each network, then plotting them and seeing how similar they are.
  • asked a question related to Neural Networks
Question
4 answers
When training a CNN for object detection in images, it is supplied with images containing examples of the target object (or list of objects) through their coordinates within the image.
But what happens if you don't supply the coordinates of all the objects in the training images? Then some objects in your training images match the pattern the CNN is looking for but are effectively marked as negatives.
I suspect this has a mostly negative impact but, are there any resources/papers that go over this issue in detail?
Thanks
Relevant answer
  • asked a question related to Neural Networks
Question
9 answers
I have a dataset X filled with 4x4 matrices that I am generating through a complex model. However, the values of this dataset are slightly off due to some unknown factor that is being taken into consideration by the software that I am using for the generation. I know this because I have another dataset y filled with 4x4 matrices that contain the actual values. I decided to build a ML model that can predict the right output by using X and y to train the model.
At first, I decided to use the sequential neural network API from Keras to create the model.
import pandas as pd
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from tensorflow import keras
from tensorflow.keras import layers

# X: pandas DataFrame (flattened generated matrices)
# y: pandas DataFrame (flattened actual values)

## Normalize with MinMaxScaler ##
scaler = preprocessing.MinMaxScaler()
names = X.columns
d = scaler.fit_transform(X)
X = pd.DataFrame(d, columns=names)
names = y.columns
d = scaler.fit_transform(y)  # NOTE: refits the same scaler on y; keep separate scalers if you need to invert later
y = pd.DataFrame(d, columns=names)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, shuffle=True)

model = keras.Sequential()
model.add(layers.Dense(12, input_dim=12, activation='linear', name='layer_1'))
model.add(layers.Dense(100, activation='linear', name='layer_2'))
model.add(layers.Dense(150, activation='linear', name='layer_3'))
model.add(layers.Dense(100, activation='linear', name='layer_4'))
model.add(layers.Dense(12, activation='linear', name='output_layer'))
# NOTE: with 'linear' activations throughout, the stack is equivalent to a single linear layer
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(X_train, y_train, epochs=5, shuffle=True, verbose=2)
In this case, I used neural networks because that is really all I have ever worked with, but I have a feeling as though that might not have been the correct option. What do you guys think?
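One observation worth checking first: because every layer above uses activation='linear', the whole network collapses to a single linear map from X to y, so an ordinary least-squares fit is the natural (and far cheaper) baseline before reaching for deeper models. A sketch on synthetic data standing in for the matrices (the true correction is assumed linear here; dimensions match the code's input_dim=12):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 12                         # flattened matrix entries, as in the code above
X = rng.normal(size=(n, d))
W_true = rng.normal(size=(d, d))
Y = X @ W_true + 0.01 * rng.normal(size=(n, d))   # "actual values" with small noise

# Least-squares estimate of the linear correction X -> Y
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
err = np.abs(W_hat - W_true).max()
print(err < 0.05)  # recovers the linear map up to the noise level
```

If least squares already matches the neural network's error, the unknown factor is (approximately) linear and the NN adds nothing; if it falls short, switching the hidden layers to a nonlinear activation such as 'relu' is the first change to try.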
Relevant answer
Answer
I also agree with Serkan Ballı's suggestion of AutoML, though it doesn't include all the models you might want to try, like CatBoost/XGBoost. You can start with AutoML; that will take considerable time and needs excellent computational resources. In return, you'll get details of each model and its performance from an exhaustive hyperparameter search.
  • asked a question related to Neural Networks
Question
1 answer
Unlike an independent variable, a moderator/mediator is not normally an input to a neural network. So how could I deal with a moderator/mediator in a neural network?
Is there any academic article on this issue? I thank researchers in advance.
Relevant answer
Answer
I have not read any article on this issue.
My reply below is based on my experience.
There are several practical ways to do this.
1. Create a dummy variable which then acts as one of the layers. Then compare the results with- vs without dummy variables.
2. Allocate specific probability and/or bias values. Increase or decrease the value and observe the results. Then compare the results with- vs without these values.
3. If data permitted, treat (the introduced) moderator as a target. Then compare the results moderator as a target- vs not target.
Comparing the results will help determine whether the moderator has an important role or not.
Hope this helps.
  • asked a question related to Neural Networks
Question
3 answers
I created a neural network architecture using the data for one study area and performed a grid search to tune almost all of the model's hyperparameters. I want to use the same network architecture (without any hyperparameter tuning) on data from another geographical area for the same topic/problem.
What I intend to do is take the same architecture and train the model with the data of the new area. Later I want to show that, even without changing the model architecture, the same model can be applied to other geographical areas.
Is this valid and understandable work?
I could not find any resource or paper online that has done a similar thing. All I could find is transfer learning, but I don't think this can be termed transfer learning, or can it?
Relevant answer
Answer
Look at the link; it may be useful.
Regards,
Shafagat
  • asked a question related to Neural Networks
Question
6 answers
I intend to study the prediction of a stock exchange index, but I am confused between RBF and MLP networks, so I want to know which type is more suitable for this purpose.
Relevant answer
Answer
Usually, they follow a random walk process. Good luck!
  • asked a question related to Neural Networks
Question
11 answers
Describe the main technologies in this field of image processing. Please do not mention technologies that are obsolete in current industry practice.
Relevant answer
Answer
Look at the link; it may be useful.
Regards,
Shafagat
  • asked a question related to Neural Networks
Question
4 answers
I am studying neural-network-based PID controllers.
I saw the papers listed in the question post here.
However, I haven't seen an example of constructing the neural network; the papers only showed block diagrams of the entire system.
I would like to know how to create such a neural network with Simulink.
Do you have any reference or code example?
  • asked a question related to Neural Networks
Question
9 answers
I need to train an RNN with some data that I keep in an Excel sheet. How can I feed those data to my network model, and how do I convert the text data into vector form?
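A common pipeline is: load the sheet with pandas (`pd.read_excel`, which needs the `openpyxl` engine for .xlsx files), then map each word of the text column to an integer index and pad the sequences to a fixed length before feeding them to the RNN (Keras wraps the same idea in its `Tokenizer` and `pad_sequences` utilities). A minimal sketch with an inline DataFrame standing in for the spreadsheet (the column names and sentences are assumptions):

```python
import pandas as pd

# Stand-in for: df = pd.read_excel("data.xlsx")
df = pd.DataFrame({"text": ["good service", "bad service", "good food"],
                   "label": [1, 0, 1]})

# Build a word -> index vocabulary (0 is reserved for padding)
vocab = {}
for sentence in df["text"]:
    for word in sentence.split():
        vocab.setdefault(word, len(vocab) + 1)

# Convert each sentence to a fixed-length integer vector
max_len = 3
def vectorize(sentence):
    ids = [vocab[w] for w in sentence.split()]
    return ids + [0] * (max_len - len(ids))      # right-pad with zeros

X = [vectorize(s) for s in df["text"]]
print(X)  # [[1, 2, 0], [3, 2, 0], [1, 4, 0]]
```

The resulting integer matrix can go straight into an `Embedding` layer in front of the RNN; purely numeric columns from the sheet need only scaling, not vectorization.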
  • asked a question related to Neural Networks
Question
9 answers
[Information] Special Issue - Intelligent Control and Robotics
Relevant answer
Answer
Thanks for sharing.
  • asked a question related to Neural Networks
Question
3 answers
I am working on an EEG prediction/classification task. I apply sliding-window segmentation techniques and use a CNN for the prediction and classification task. Most of the research on this task uses CNNs, but I don't know why a CNN is better than an RNN.
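For reference, the sliding-window segmentation mentioned above takes a few lines of numpy; each window becomes one fixed-size CNN input, which is part of why CNNs pair so naturally with this preprocessing (the window length and stride here are assumptions):

```python
import numpy as np

def sliding_windows(signal, win, stride):
    """Cut a 1-D signal into overlapping fixed-length segments."""
    starts = range(0, len(signal) - win + 1, stride)
    return np.stack([signal[s:s + win] for s in starts])

eeg = np.arange(10.0)                  # toy single-channel EEG trace
segs = sliding_windows(eeg, win=4, stride=2)
print(segs.shape)  # (4, 4): four overlapping windows of length 4
```

Once the signal is windowed, every example has the same shape, so the fixed-input-size constraint of a CNN costs nothing; an RNN would instead be the tool of choice if you wanted to consume the raw variable-length recording.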
Relevant answer
Answer
CNNs are often thought to be more powerful than RNNs for this kind of task, and RNNs offer less feature compatibility. A CNN accepts fixed-size inputs and produces fixed-size outputs, whereas an RNN can handle inputs/outputs of arbitrary length.
Kind Regards
Qamar Ul Islam
  • asked a question related to Neural Networks
Question
4 answers
How can I classify a large binary-image dataset using a neural network or convolutional neural network in MATLAB?
What configuration is needed (layer configuration), or what setting/preprocessing of the data?
How do I configure the conversion of an image from [_, _, 3] (RGB) to [_, _, 1] (binary) in the MATLAB tool only?
Relevant answer
Answer
Convert from image space to feature space by encoding whichever property of the binary image is useful for classification, and feed the feature vector as input to the neural network, e.g. the area, number, and location of blobs.
  • asked a question related to Neural Networks
Question
8 answers
I'm training a DNN. I have 20 inputs and 3 outputs.
When training is done, I want to know which inputs are best for specific outputs.
One way I can think of is to use the trained DNN as a cost function and search over populations of inputs.
But that is time-consuming. Is there any other way to do it? Please give me advice even if it is time-consuming; I'd like to know different approaches.
Thank you
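One cheap alternative to a full population search is gradient ascent on the input: treat the trained network as a function f(x), fix the output of interest, and climb the gradient of f with respect to x (in TensorFlow or PyTorch this gradient comes free via autodiff). A framework-free sketch using finite differences, with a toy quadratic standing in for the trained DNN (the function, its peak, and the dimensions are all assumptions for illustration):

```python
import numpy as np

def f(x):
    # Stand-in for the trained DNN's output of interest; peaks at x = (1, 2, 3)
    return -np.sum((x - np.array([1.0, 2.0, 3.0])) ** 2)

def ascend(x, steps=500, lr=0.05, eps=1e-4):
    """Finite-difference gradient ascent on the input vector."""
    for _ in range(steps):
        grad = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                         for e in np.eye(len(x))])
        x = x + lr * grad
    return x

best = ascend(np.zeros(3))
print(np.round(best, 2))  # converges near [1. 2. 3.]
```

With 20 real inputs, restarting the ascent from several random starting points guards against local optima; the population-based search you describe remains the safer choice when the trained surface is very multimodal.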
Relevant answer
Answer
You need the model's input and output biases together with the weights to develop the mathematical expressions.
How to extract the weights and biases for the input and output layers (from a MATLAB-trained network `net`):
w1=net.IW{1,1};
w2=net.LW{2,1};
b1=net.b{1};
b2=net.b{2};
Run that, where `net` is the model.
Equation development procedure (tanh hidden layer):
X1 = tanh(x1*w11 + x2*w12 + ... + b11)
X2 = tanh(x1*w21 + x2*w22 + ... + b21)
X3 = tanh(x1*w31 + x2*w32 + ... + b31)
output = tanh( sum_i(w2i*Xi) + bo )  (then denormalize)
  • asked a question related to Neural Networks
Question
13 answers
I want to use a neural network classifier to separate patients and healthy persons by some parameters, and then to test the validity of the classifier. Is an SVM a proper tool? How can I test validity, and which syntax should I use?
Relevant answer
Answer
MATLAB offers 22 algorithms for analyzing your data: in the Classification Learner window you can train all of them and then choose the best classifier for your data.
  • asked a question related to Neural Networks
Question
8 answers
In neural networks, I understand that the activation function at the hidden layer maps the inputs into a specific range like (0, 1) or (-1, 1) and allows nonlinear problems to be solved. But what does it do in the output layer? Can I get a simple explanation? I'm not a specialist.
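In short, the output activation shapes the network's raw score into the form the task needs: a sigmoid squashes it into a probability in (0, 1) for binary decisions, a softmax turns a vector of scores into a probability distribution over classes, and a linear (identity) output leaves the value unbounded for regression. A small numpy illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))       # binary output: a probability in (0, 1)

def softmax(z):
    e = np.exp(z - z.max())               # subtract the max for numerical stability
    return e / e.sum()                    # multiclass output: probabilities summing to 1

scores = np.array([2.0, 1.0, 0.1])        # raw scores from the last layer
print(sigmoid(2.0))                       # a single probability
print(softmax(scores))                    # a valid distribution over 3 classes
```

So the hidden-layer activations provide the nonlinearity, while the output activation is chosen purely to match the target: sigmoid for yes/no, softmax for one-of-many, identity for a real-valued quantity.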
Relevant answer
  • asked a question related to Neural Networks
Question
4 answers
I'm a finance and banking student researching stock-price prediction using neural networks. I tried to learn MATLAB but had difficulty using it because it contains many features, like NAR and NARX, which I didn't understand. So please guide me to the best software among those above, and explain the differences between them.
Relevant answer
Answer
SPSS and R are both good for statistical data analysis, while MATLAB does almost everything. Once you have a good grasp of MATLAB's concepts, you can write code for almost anything. MATLAB is the most scientific of the three and much better for mathematics, statistics, and all research/application areas of artificial intelligence, engineering, and medicine. In one sentence: MATLAB is all in all, very simple and easy to understand, though it requires a good base of intermediate mathematics. I strongly recommend MATLAB. You can also do very good object-oriented programming and write code related to finance or the other social sciences.
  • asked a question related to Neural Networks
Question
7 answers
Hi,
On GitHub and other online platforms, only image-based adversarial attacks are available (popular libraries are ART, Foolbox, CleverHans, etc.), but I could not find any samples or examples of adversarial attacks on neural networks for tabular data.
Can anyone suggest some useful resources on adversarial attacks on neural networks for tabular data?
Thanks in advance.
Relevant answer