University of Technology, Iraq
Question
Asked 3 April 2019
Neural Network for regression with two output vectors?
I have successfully implemented an ANN architecture for a single-output problem. Now I would like to have more than one output vector in the model, i.e. [X, Y] = f(A, B, C, ...). I have been using R for this but am struggling with multi-output problems. My specific questions are:
How do I add more than one output in the code?
Does the activation function (I am using sigmoid) have any relation to the number of outputs in the model?
Any help would be appreciated.
Thanks!
All Answers (5)
Ecole Nationale Polytechnique de Constantine
Hi,
Use this code, after first loading your inputs and outputs with the load '' command.
They must be row vectors.
x = 'input';    % placeholder: replace with your loaded input matrix
y = 'output';   % placeholder: replace with your loaded target matrix
p = x;
t = y;
net2 = newff(p,t,15,{'tansig','purelin'},'trainlm');   % 15 hidden neurons
net2.trainParam.epochs = 500;
net2 = init(net2);                                     % reinitialise the weights
net2.trainParam.lr = 0.25;                             % learning rate
net2.divideParam.trainRatio = 0.7;
net2.divideParam.valRatio = 0.3;
net2.divideParam.testRatio = 0;
net2 = train(net2,p,t);
trainOut = sim(net2,p);                    % 'train' as a variable name would shadow train()
testOut = sim(net2,testing_data_input');   % testing_data_input: your held-out inputs
Keep me informed of your progress to see how I can help you.
Best regards.
2 Recommendations
Al Qadissiya University
Read this paper; I think it covers all your questions in this direction: "Multiple-input multiple-output vs. single-input single-output neural network forecasting".
1 Recommendation
University of São Paulo
The number of outputs of a neural network depends on the architecture you choose (the number of nodes and the layers of nodes connecting the input to the output) and is not (necessarily) related to the activation function, although the activation function does matter for the performance of the network.
Usually, when we want more than a single numeric output, we add more nodes (and possibly more layers of nodes), bundle their outputs together, and then reshape them according to their meaning, so that they form a vector, a matrix, a set of vectors, a tensor of higher rank, and so on.
Now, I don't know whether R does proper machine learning. To train a neural network with more than one node we have to resort to backpropagation (which I believe is your main problem in this question), and backpropagation can be fairly annoying (and sometimes difficult) to implement by hand for a given architecture. That said, there are plenty of programming packages that implement efficient backpropagation for whatever architecture you choose, the most popular being Google's TensorFlow, for which there is, probably, an R bindings package.
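To make that concrete: below is a minimal sketch of a two-output regression network using the keras R bindings (the TensorFlow interface for R). The input width of 3, the hidden width of 8, and the random toy data are all assumptions for illustration, not taken from the question.
library(keras)
# Toy data, purely for illustration: 100 samples, 3 inputs (A, B, C), 2 outputs (X, Y)
inputs <- matrix(runif(300), ncol = 3)
targets <- matrix(runif(200), ncol = 2)
# Two outputs just means two units in the final layer; the hidden
# activation (tanh, sigmoid, ...) is independent of that choice
model <- keras_model_sequential() %>%
  layer_dense(units = 8, activation = "tanh", input_shape = c(3)) %>%
  layer_dense(units = 2)   # linear output layer for regression
model %>% compile(loss = "mse", optimizer = "adam")
model %>% fit(inputs, targets, epochs = 50, verbose = 0)
predict(model, inputs[1:5, ])   # a 5 x 2 matrix, one column per output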
ICAR-Central Coastal Agricultural Research Institute
It can be done using the 'neuralnet' package. You can visit the following link for such an example.
Here is simple example code for a classification problem with the iris dataset:
library(neuralnet)
library(nnet)   # for the class.ind() function
data("iris")
# One-hot encode the species labels to get three output columns
trainData <- cbind(iris[, 1:4], class.ind(iris$Species))
# Three terms on the left of ~ give the network three outputs;
# linear.output = FALSE applies the activation to the output layer (classification)
iris_net <- neuralnet(setosa + versicolor + virginica ~
                        Sepal.Length + Sepal.Width + Petal.Length + Petal.Width,
                      trainData, hidden = 3, act.fct = "tanh",
                      linear.output = FALSE)
plot(iris_net)
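As a quick check of the three outputs (a sketch; compute() from 'neuralnet' and max.col() from base R, with the class order taken from the formula above):
preds <- compute(iris_net, trainData[, 1:4])
# Each row of net.result has three values, one per output neuron;
# the largest one marks the predicted class
predicted <- c("setosa", "versicolor", "virginica")[max.col(preds$net.result)]
table(predicted, iris$Species)   # confusion matrix on the training data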
# For a regression problem using the 'Boston' dataset, use linear.output = TRUE
library(MASS)
data(Boston)
head(Boston, 2)
# Two responses (lstat and medv) on the left of ~ give a two-output regression network
boston_net <- neuralnet(lstat + medv ~ crim + zn + indus + chas + nox + rm +
                          age + dis + rad + tax + ptratio + black,
                        Boston, hidden = 3, act.fct = "tanh",
                        linear.output = TRUE, algorithm = "rprop+")
plot(boston_net)
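To read off the two fitted outputs side by side, you can use the package's compute() function; a small sketch, with the predictor columns matching the formula above:
covariates <- Boston[, c("crim", "zn", "indus", "chas", "nox", "rm",
                         "age", "dis", "rad", "tax", "ptratio", "black")]
preds <- compute(boston_net, covariates)
# net.result is an n x 2 matrix: column 1 = lstat, column 2 = medv,
# following the order of the responses in the formula
head(preds$net.result)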
2 Recommendations
University of Technology, Iraq
Hi, I made this code for my research; I hope it helps you.
I use 8 hidden neurons and 2 outputs.
p = [input];                  % placeholder: your input matrix (one column per sample)
t = [output1; output2];       % two stacked target rows give the network two outputs
net = newff(p,t,[8 2],{'tansig','purelin'},'trainlm');
net.trainParam.lr = 0.75;     % learning rate
net.trainParam.mc = 0.9;      % momentum
net.trainParam.show = 5;
net.trainParam.epochs = 1200;
net.trainParam.goal = 1e-4;
[net,tr] = train(net,p,t);
plot(tr.epoch, tr.perf, tr.epoch, tr.vperf, tr.epoch, tr.tperf);
legend('Training', 'Validation', 'Test', 'Location', 'best');
ylabel('Squared Error'); xlabel('Epoch');
a = sim(net,p)                % one row per output
weights1 = net.iw{1,1}        % input-to-hidden weights
b = net.b{1}                  % hidden-layer biases
weights2 = net.lw{2,1}        % hidden-to-output weights