
Question

Asked 6th Feb, 2014

The complexity of an algorithm is usually expressed in Big O notation, which plays an important role in finding efficient algorithms. What is the time complexity of an MLP and other ANNs?


Dear Ahmad, please guide me on how to use this; I need some clarity.

1 Recommendation

This depends on the architecture of the network. For a trained MLP, the complexity of classification (the forward propagation) is roughly:

1. The number of multiplications needed to compute the activations of all neurons (a vector product) in the i-th layer of the net equals NumberOfNodesInLayer(i) * NumberOfNodesInLayer(i-1).

Since the separation of concave regions is possible with 3 layers, let's assume we have at most 3. Given that the last layer contains only 1 neuron, which outputs the classification of the input, we have 2 layers over which to distribute the remaining n-1 neurons. The worst case (regarding complexity) is that the nodes are distributed equally, which yields ((n-1)/2)^2 \in O(n^2) multiplications.

2. Since the output functions of neurons are in general very simple to compute, I would assume those costs to be constant per neuron. Given a network with n neurons, this step is in O(n).

Given that the number of neurons n for a given problem can be regarded as a constant, the overall complexity of O(n^2) reduces to O(1).
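As a quick illustration of point 1, here is a minimal sketch that counts the multiplications in one forward pass, layer by layer; the layer sizes are hypothetical and biases/activations are ignored:

```python
# Sketch: count the weight multiplications in one MLP forward pass.
# Per layer i, the count is NumberOfNodesInLayer(i) * NumberOfNodesInLayer(i-1).

def forward_pass_multiplications(layer_sizes):
    """Total multiply operations for one forward pass (weights only, no biases)."""
    return sum(layer_sizes[i] * layer_sizes[i - 1]
               for i in range(1, len(layer_sizes)))

# Hypothetical net: 4 inputs, two hidden layers of 5 neurons each
# (the equal split from the worst case above), 1 output neuron.
sizes = [4, 5, 5, 1]
print(forward_pass_multiplications(sizes))  # 4*5 + 5*5 + 5*1 = 50
```

For a fixed architecture this count is a constant, which is what makes the O(1) conclusion above plausible.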

9 Recommendations


Ferreira, C. (2006). "Designing Neural Networks Using Gene Expression Programming". In A. Abraham, B. de Baets, M. Köppen, and B. Nickolay, eds., Applied Soft Computing Technologies: The Challenge of Complexity, pages 517–536, Springer-Verlag.

1 Recommendation

O(2^n) to converge to the optimal solution.

O(#epochs * #examples * #features * #neurons) for an approximate solution.
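The approximate-solution bound above can be turned into a rough back-of-the-envelope operation count; all the numbers below are hypothetical placeholders, not measurements:

```python
# Rough sketch of the training-cost estimate
# O(#epochs * #examples * #features * #neurons) for an approximate solution.

def approx_training_cost(epochs, examples, features, neurons):
    """Order-of-magnitude operation count for MLP training (constants dropped)."""
    return epochs * examples * features * neurons

# Hypothetical run: 100 epochs, 10,000 examples, 30 features, 50 neurons.
print(approx_training_cost(100, 10_000, 30, 50))  # 1,500,000,000 operations
```

This is only an asymptotic estimate; actual wall-clock cost depends on the optimizer, batch size, and hardware.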

2 Recommendations
