I am wondering: in a neural network whose activation function is the sigmoid, implemented in Python, is a neuron considered "activated" whenever the function's output is greater than 0.5?
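For reference, a minimal Python sketch of the sigmoid. Note that the sigmoid does not switch a neuron on or off: its output is a continuous value in (0, 1) that is passed to the next layer, and a 0.5 threshold is normally applied only to the final output when making a binary classification decision.

```python
import math

def sigmoid(x):
    """Logistic sigmoid: maps any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Hidden neurons pass the raw sigmoid value onward; thresholding at 0.5
# is a decision rule for the final binary prediction, not an "activation".
print(sigmoid(0.0))         # 0.5
print(sigmoid(2.0) > 0.5)   # True -> predicted positive class
```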
There are many different ways. One is to weight each instance's contribution to the loss by the inverse of its class frequency: each instance of the smaller class then contributes more to the final loss, while instances of the larger class contribute less.
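A minimal sketch of inverse-frequency weighting (the helper name `inverse_frequency_weights` is hypothetical, not from any particular library):

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Hypothetical helper: per-class weight = total count / class count.
    Instances of rarer classes get larger weights in the loss."""
    counts = Counter(labels)
    n = len(labels)
    return {c: n / k for c, k in counts.items()}

labels = ["big"] * 90 + ["small"] * 10
w = inverse_frequency_weights(labels)
# each "small" instance now weighs 9x as much as each "big" instance
```

Most frameworks accept such weights directly, e.g. as a per-class or per-sample weight argument to the loss.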
I am using UCInet to get some SNA metrics for my social networks. The E-I index is useful to determine whether individuals interact more with those of the same or a different class/category (e.g. sex, age, social rank). UCInet runs a permutation test to calculate the p-value of the E-I index of the whole network. I was wondering if there is a way to do the same for the E-I index of each class (e.g. the E-I index for males and females separately) using UCInet or other software (maybe R?).
I would also like to know if there is a way to compute the E-I index taking into account the strength of ties (edge weights). I have been doing some calculations manually, but it would be helpful to know whether it can be computed in software. I have not found any information on this topic so far.
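A minimal Python sketch of a weighted E-I index, under the assumption (mine, not UCInet's documented behaviour) that the weighted variant simply sums edge weights instead of counting ties:

```python
def weighted_ei_index(edges, group):
    """E-I index using edge weights instead of tie counts.
    edges: iterable of (u, v, weight); group: dict node -> category.
    E = total weight of external ties, I = total weight of internal ties;
    the index is (E - I) / (E + I), ranging from -1 to +1."""
    external = internal = 0.0
    for u, v, w in edges:
        if group[u] == group[v]:
            internal += w
        else:
            external += w
    return (external - internal) / (external + internal)

edges = [("a", "b", 2.0), ("a", "c", 1.0), ("b", "c", 1.0)]
group = {"a": "F", "b": "F", "c": "M"}
# internal = 2.0 (a-b), external = 2.0 (a-c, b-c) -> index = 0.0
```

The same loop restricted to edges incident to one category gives a per-class version; a permutation test would then shuffle the `group` labels and recompute.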
I have created synthetic datasets from a mixture of Gaussians with k components and 2 dimensions. The data lie in the range 1-100. I feed them into an autoencoder neural network with 2 neurons in the input layer, 7 neurons in the hidden layer, and 2 neurons in the output layer. I expect the output of each output neuron to match the input value, but it does not. During training I used the sigmoid as the activation function in the hidden layer and also as the output function in the output layer, and I compare this output value with the input. Is that appropriate, or should the output function be a different one?
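One likely issue: the sigmoid only produces values in (0, 1), so an output neuron with a sigmoid can never reproduce targets in the range 1-100. Two common fixes are to use a linear output layer, or to rescale the data into [0, 1] before training. A minimal sketch of the latter (assuming simple min-max scaling):

```python
def min_max_scale(x, lo=1.0, hi=100.0):
    """Map a value from [lo, hi] into [0, 1] so a sigmoid output can match it."""
    return (x - lo) / (hi - lo)

def min_max_unscale(y, lo=1.0, hi=100.0):
    """Map a sigmoid output in [0, 1] back to the original data range."""
    return y * (hi - lo) + lo
```

Train the autoencoder on scaled inputs and targets, then unscale the reconstructions for comparison in the original units.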
I'm working on the classification of real medical images, but I am facing a big problem with TP, TN, FP and FN (I used different neural networks and I want to compare them). I tried to calculate these values for each class separately, but for some classes that do not exist in the test data, TP and FP are both 0, so the sensitivity for these classes becomes NaN, which is not appropriate for determining whether the algorithm is good or not. To be more precise, I provide one iteration's results.
Is it reasonable to ignore TN values?
Positive Predictive Value = TP / (TP + FP)
Negative Predictive Value = TN / (TN + FN)
Attached please find one iteration's values for these criteria.
Thank you very much.
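One common convention (an option, not the only one) is to report metrics with a zero denominator as undefined rather than NaN, and to macro-average only over the classes that actually occur in the test data. A minimal Python sketch:

```python
def safe_div(num, den):
    """Return None when the denominator is 0 (metric undefined) instead of NaN."""
    return num / den if den else None

def per_class_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": safe_div(tp, tp + fn),   # TP / (TP + FN)
        "ppv": safe_div(tp, tp + fp),           # TP / (TP + FP)
        "npv": safe_div(tn, tn + fn),           # TN / (TN + FN)
    }

# A class absent from the test set (TP = FN = 0) simply has no defined
# sensitivity; report it as missing and average over the defined classes only.
m = per_class_metrics(tp=0, fp=0, tn=50, fn=0)
```

(For reference, scikit-learn exposes the same choice through the `zero_division` parameter of its classification metrics.)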
I am beginning my studies with the goal of applying neural networks to detect anomalies in network packets. My question is which models I could start my research with, and which metrics I should consider in order to minimize the occurrence of false negatives. Since a false negative means a threat was not identified by the network, I was considering using recall.
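Recall is indeed the metric tied directly to false negatives, since the false-negative rate is 1 - recall. A one-line illustration:

```python
def recall(tp, fn):
    """Recall = TP / (TP + FN): the fraction of actual threats that were
    caught. Maximizing recall directly minimizes the false-negative rate,
    because FNR = FN / (TP + FN) = 1 - recall."""
    return tp / (tp + fn)

# e.g. 90 detected threats and 10 missed -> recall = 0.9, FNR = 0.1
```

In practice recall is usually reported together with precision (or as an F-score), since a detector can trivially reach recall 1.0 by flagging everything.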
I have a two-stage problem that I am trying to solve through ANFIS in MATLAB.
1) I am dealing with 5 classes, 16 attributes, and 2000 instances. Each output is a categorical value, which I converted to a numerical one. More specifically, I am working on activity recognition (walk, sit, jump, run, climb) and recode these classes as 1, 2, 3, 4, 5, since ANFIS does not accept categorical output. The question is whether this is valid: the numeric gap between classes 1 and 2 is the same as that between classes 3 and 4, which is not realistic, since in the real world these classes are purely categorical and an interval measure does not make any sense.
2) I am trying to initialize a FIS using genfis1 and also via the ANFIS GUI, but it is taking too long to respond: after nearly 2 hours my system is still generating the initial FIS using grid partitioning. Is this normal for 500 instances (rows) and 16 attributes (15 inputs, 1 output)? I first tried 2000 rows, but that took too long, so I cut the training set down to 500; even so it has not finished after more than 2 hours, and MATLAB is still processing. I am not sure whether the problem will converge in the end. (Note that grid partitioning generates a rule for every combination of membership functions, so the rule count grows exponentially with the number of inputs; 15 inputs is far beyond what it typically handles.)
Any suggestions or feedback would be really helpful.
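On point 1), one standard way to avoid imposing a fake interval scale on categorical labels is one-hot (one-vs-rest) encoding. A minimal sketch in Python (the idea carries over to MATLAB; the helper below is illustrative, not ANFIS API):

```python
ACTIVITIES = ["walk", "sit", "jump", "run", "climb"]  # the 5 classes

def one_hot(label):
    """Encode a categorical label as a 0/1 vector, so no artificial
    ordering or distance is imposed between classes (unlike coding 1..5)."""
    return [1 if a == label else 0 for a in ACTIVITIES]
```

With ANFIS, which needs a numeric target, one option along these lines is to train one FIS per class on the corresponding 0/1 column and predict the class whose model produces the largest output.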
Motivated by the "Efficient Backprop" paper by Yann LeCun:
Back-propagation works better when the mean of the inputs is approximately zero. Since the activations of one layer are the inputs to the next, we want the mean activations of all hidden layers to be approximately zero so that back-propagation works well. But how should the weights be initialized so that the mean activations of the hidden layers become approximately zero? I am using the 1.7159*tanh(2x/3) activation function.
You can find the paper attached herewith.
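A minimal sketch of the paper's own recipe: draw each weight from a zero-mean distribution with standard deviation 1/sqrt(fan_in). For zero-mean, unit-variance inputs this keeps pre-activations roughly unit-variance, and because the scaled tanh is an odd function, the resulting hidden activations come out approximately zero-mean.

```python
import math
import random

def lecun_init(fan_in):
    """Weight drawn with std 1/sqrt(fan_in), as recommended in
    "Efficient Backprop", so pre-activations stay roughly unit-variance."""
    return random.gauss(0.0, 1.0 / math.sqrt(fan_in))

def activation(x):
    # The scaled tanh from the paper: f(1) ~ 1, f(-1) ~ -1, and since
    # tanh is odd, zero-mean inputs give approximately zero-mean outputs.
    return 1.7159 * math.tanh(2.0 * x / 3.0)
```

Standardizing the raw inputs to zero mean and unit variance first is the other half of the recipe; initialization alone cannot center the activations if the inputs are not centered.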
I'm trying to change the buffer size and timeout for the AODV routing protocol in NS2. I have tried changing the IFQ value in my Tcl file, but it seems to have no influence on the buffer, which stays at 64 packets. I have also tried Queue set Limit_ 1000 in my Tcl script, which also did nothing. Can anyone please help me with this issue?