What is the time complexity of training a neural network using back-propagation?

17

Suppose that a NN contains $n$ hidden layers, $m$ training examples, $x$ features and $n_i$ nodes in each layer. What is the time complexity of training this NN using back-propagation?

I have a basic idea of how the time complexity of algorithms is found, but there are 4 different factors to consider here, i.e. iterations, layers, nodes in each layer, training examples, and maybe more. I found an answer here, but it was not clear enough.

Are there other factors, apart from the ones I mentioned above, that influence the time complexity of the training algorithm of a NN?

DuttaA
source
See also https://qr.ae/TWttzq .
nbro

Answers:

11

I haven't seen an answer from a trusted source, but I will try to answer it myself, with a simple example (based on my current knowledge).

In general, note that training an MLP using back-propagation is usually implemented with matrices.

Time complexity of matrix multiplication

The time complexity of the matrix multiplication $M_{ij} M_{jk}$ is simply $O(ijk)$.

Note that we are assuming the simplest multiplication algorithm here: there exist some other algorithms with a somewhat better time complexity.
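To see where the $O(ijk)$ comes from, here is a small sketch of the schoolbook algorithm (my own illustration, not tied to any particular library): three nested loops, with one multiply-add per iteration.

```python
# Schoolbook multiplication of an (i x j) matrix A by a (j x k) matrix B.
# The three nested loops perform i * j * k multiply-adds, hence O(ijk).
def matmul(A, B):
    i, j, k = len(A), len(B), len(B[0])
    assert len(A[0]) == j, "inner dimensions must match"
    C = [[0.0] * k for _ in range(i)]
    for r in range(i):          # rows of A
        for c in range(k):      # columns of B
            for m in range(j):  # inner dimension
                C[r][c] += A[r][m] * B[m][c]
    return C
```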

Feedforward pass algorithm

The feedforward propagation algorithm is as follows.

First, to go from layer $i$ to layer $j$, you do

$$S_j = W_{ji} Z_i$$

Then you apply the activation function

$$Z_j = f(S_j)$$

If we have $N$ layers (including the input and output layers), this will run $N-1$ times.
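As a compact sketch of this loop (my own illustration, assuming NumPy arrays for the weights and inputs, and tanh as a placeholder activation):

```python
import numpy as np

def forward(weights, Z, f=np.tanh):
    # weights[p] maps layer p to layer p + 1, so for an N-layer network
    # this loop body runs len(weights) = N - 1 times.
    for W in weights:
        Z = f(W @ Z)  # S = W Z, then the element-wise activation
    return Z
```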

Example

As an example, let's compute the time complexity of the feedforward pass algorithm for an MLP with $4$ layers, where $i$ denotes the number of nodes of the input layer, $j$ the number of nodes in the second layer, $k$ the number of nodes in the third layer and $l$ the number of nodes in the output layer.

Since there are $4$ layers, you need $3$ matrices to represent the weights between these layers. Let's denote them by $W_{ji}$, $W_{kj}$ and $W_{lk}$, where $W_{ji}$ is a matrix with $j$ rows and $i$ columns ($W_{ji}$ thus contains the weights going from layer $i$ to layer $j$).

Assume you have $t$ training examples. For propagating from layer $i$ to layer $j$, we first have

$$S_{jt} = W_{ji} Z_{it}$$

and this operation (i.e. the matrix multiplication) has $O(jit)$ time complexity. Then we apply the activation function

$$Z_{jt} = f(S_{jt})$$

and this has $O(jt)$ time complexity, because it is an element-wise operation.

So, in total, we have

$$O(jit + jt) = O(jt(i + 1)) = O(jit)$$

Using the same logic, for going from $j$ to $k$, we have $O(kjt)$, and, for going from $k$ to $l$, we have $O(lkt)$.

In total, the time complexity for feedforward propagation will be

$$O(jit + kjt + lkt) = O(t(ij + jk + kl))$$

I'm not sure if this can be simplified further or not. Maybe it's just $O(tijkl)$, but I'm not sure.
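To make the counting concrete, here is a minimal NumPy sketch of this 4-layer feedforward pass (the concrete layer sizes, the sigmoid activation and the random data are illustrative assumptions of mine); each matrix product corresponds to one of the terms counted above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative layer sizes and number of training examples (assumptions for this sketch).
i, j, k, l, t = 5, 4, 3, 2, 100

rng = np.random.default_rng(0)
W_ji = rng.standard_normal((j, i))   # weights from layer i to layer j
W_kj = rng.standard_normal((k, j))   # weights from layer j to layer k
W_lk = rng.standard_normal((l, k))   # weights from layer k to layer l

Z_it = rng.standard_normal((i, t))   # one column per training example

# Feedforward pass: each matrix product costs O(rows * inner * t).
S_jt = W_ji @ Z_it                   # O(j*i*t)
Z_jt = sigmoid(S_jt)                 # O(j*t), element-wise
S_kt = W_kj @ Z_jt                   # O(k*j*t)
Z_kt = sigmoid(S_kt)                 # O(k*t)
S_lt = W_lk @ Z_kt                   # O(l*k*t)
Z_lt = sigmoid(S_lt)                 # O(l*t)
# Total: O(t*(i*j + j*k + k*l)) for the forward pass.
```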

Back-propagation algorithm

The back-propagation algorithm proceeds as follows. Starting from the output layer ($l \to k$), we compute the error signal $E_{lt}$, a matrix containing the error signals for the nodes at layer $l$

$$E_{lt} = f'(S_{lt}) \odot (Z_{lt} - O_{lt})$$

where $\odot$ means element-wise multiplication and $O_{lt}$ is the matrix of target outputs. Note that $E_{lt}$ has $l$ rows and $t$ columns: it simply means that each column is the error signal for training example $t$.

We then compute the "delta weights", $D_{lk} \in \mathbb{R}^{l \times k}$ (between layer $l$ and layer $k$)

$$D_{lk} = E_{lt} Z_{tk}$$

where $Z_{tk}$ is the transpose of $Z_{kt}$.

We then adjust the weights

$$W_{lk} = W_{lk} - D_{lk}$$

For $l \to k$, we thus have the time complexity $O(lt + lt + ltk + lk) = O(ltk)$.

Now, going back from $k$ to $j$, we first have

$$E_{kt} = f'(S_{kt}) \odot (W_{kl} E_{lt})$$

Then

$$D_{kj} = E_{kt} Z_{tj}$$

And then

$$W_{kj} = W_{kj} - D_{kj}$$

where $W_{kl}$ is the transpose of $W_{lk}$. For $k \to j$, we have the time complexity $O(kt + klt + ktj + kj) = O(kt(l + j))$.

And finally, for $j \to i$, we have $O(jt(k + i))$. In total, we have

$$O(ltk + tk(l + j) + tj(k + i)) = O(t(lk + kj + ji))$$

which is the same as for the feedforward pass algorithm. Since they are the same, the total time complexity for one epoch will be

$$O(t(ij + jk + kl)).$$

This time complexity is then multiplied by the number of iterations (epochs). So, we have

$$O(nt(ij + jk + kl)),$$

where $n$ is the number of iterations.
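Continuing the forward-pass sketch above, here is a minimal NumPy version of one backward pass and weight update following these matrix formulas (the sigmoid derivative, the random targets $O_{lt}$ and the learning rate lr are illustrative assumptions of mine; all error signals are computed before the weights are updated):

```python
# Continues the forward-pass sketch above. O_lt holds one target column per example
# (random here, purely for illustration), and lr is an assumed learning rate.
O_lt = rng.standard_normal((l, t))
lr = 0.1

def sigmoid_prime(s):
    z = sigmoid(s)
    return z * (1.0 - z)

# Error signals, computed layer by layer from the output backwards.
E_lt = sigmoid_prime(S_lt) * (Z_lt - O_lt)    # O(l*t), element-wise
E_kt = sigmoid_prime(S_kt) * (W_lk.T @ E_lt)  # O(k*l*t) matmul + O(k*t) element-wise
E_jt = sigmoid_prime(S_jt) * (W_kj.T @ E_kt)  # O(j*k*t) + O(j*t)

# Delta weights.
D_lk = E_lt @ Z_kt.T                          # O(l*t*k)
D_kj = E_kt @ Z_jt.T                          # O(k*t*j)
D_ji = E_jt @ Z_it.T                          # O(j*t*i)

# Weight updates (element-wise, cheap compared to the products above).
W_lk -= lr * D_lk                             # O(l*k)
W_kj -= lr * D_kj                             # O(k*j)
W_ji -= lr * D_ji                             # O(j*i)
# Total for the backward pass: O(t*(l*k + k*j + j*i)), matching the forward pass.
```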

Notes

Note that these matrix operations can be greatly parallelized by GPUs.

Conclusion

We tried to find the time complexity for training a neural network that has 4 layers with respectively $i$, $j$, $k$ and $l$ nodes, with $t$ training examples and $n$ epochs. The result was $O(nt(ij + jk + kl))$.

We assumed the simplest form of matrix multiplication, which has cubic time complexity. We used the batch gradient descent algorithm. The results for stochastic and mini-batch gradient descent should be the same. (Let me know if you think otherwise: note that batch gradient descent is the general form; with little modification, it becomes stochastic or mini-batch gradient descent.)

Also, if you use momentum optimization, you will have the same time complexity, because the extra matrix operations required are all element-wise, hence they will not affect the time complexity of the algorithm.

I'm not sure what the results would be using other optimizers such as RMSprop.

Sources

The following article, http://briandolhansky.com/blog/2014/10/30/artificial-neural-networks-matrix-form-part-5, describes an implementation using matrices. Although this implementation uses a "row major" layout, the time complexity is not affected by this.

If you're not familiar with back-propagation, check this article:

http://briandolhansky.com/blog/2013/9/27/artificial-neural-networks-backpropagation-part-4

M.kazem Akhgary
source
Your answer is great. I could not find any ambiguity so far, but you forgot the number of iterations part, just add it... and if no one answers in 5 days I'll surely accept your answer.
DuttaA
@DuttaA I tried to put in everything I knew. It may not be 100% correct, so feel free to leave this unaccepted :) I'm also waiting for other answers to see what other points I missed.
M.kazem Akhgary
4

For the evaluation of a single pattern, you need to process all weights and all neurons. Given that every neuron has at least one weight, we can ignore the neurons, and have $O(w)$, where $w$ is the number of weights, i.e. $n \cdot n_i$, assuming full connectivity between your layers.

The back-propagation has the same complexity as the forward evaluation (just look at the formula).

So, the complexity of learning $m$ examples, where each gets repeated $e$ times, is $O(wme)$.
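A tiny sketch of this counting (my own illustration with made-up layer sizes), assuming full connectivity so that $w$ is the sum of products of consecutive layer sizes:

```python
# Training cost proportional to O(w * m * e): w weights, m training examples, e epochs.
def training_cost(layer_sizes, m, e):
    # Full connectivity: w = sum of products of consecutive layer sizes.
    w = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
    return w * m * e

# Example: a 4-layer network with i=5, j=4, k=3, l=2 nodes, 1000 examples, 50 epochs.
print(training_cost([5, 4, 3, 2], m=1000, e=50))  # proportional to (ij + jk + kl) * m * e
```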

The bad news is that there's no formula telling you what number of epochs $e$ you need.

maaartinus
source
From the above answer, don't you think it depends on more factors?
DuttaA
1
@DuttaA No. There's a constant amount of work per weight, which gets repeated e times for each of the m examples. I didn't bother to compute the number of weights; I guess that's the difference.
maaartinus
1
I think the answers are the same. In my answer, I can assume the number of weights w = ij + jk + kl, basically the sum of n * n_i between layers, as you noted.
M.kazem Akhgary
1

A potential disadvantage of gradient-based methods is that they head for the nearest minimum, which is usually not the global minimum.

This means that the only difference between these search methods is the speed with which solutions are obtained, and not the nature of those solutions.

An important consideration is time complexity, which is the rate at which the time required to find a solution increases with the number of parameters (weights). In short, the time complexities of a range of different gradient-based methods (including second-order methods) seem to be similar.

Six different error functions exhibit a median run-time order of approximately $O(N^4)$ on the N-2-N encoder in this paper:

Lister, R. and Stone, J., "An Empirical Study of the Time Complexity of Various Error Functions with Conjugate Gradient Back Propagation", IEEE International Conference on Artificial Neural Networks (ICNN95), Perth, Australia, Nov 27-Dec 1, 1995.

Summarised from my book: Artificial Intelligence Engines: A Tutorial Introduction to the Mathematics of Deep Learning.

James V Stone
source
Hi J. Stone. Thanks for trying to contribute to the site. However, please note that this is not a place for advertising yourself. Anyway, you can surely provide a link to your own books if they are useful for answering the questions, provided you're not just trying to advertise yourself.
nbro
@nbro If James Stone can provide an insightful answer - and it seems so - then I'm fine with him also mentioning some of his work. Having experts on this network is a solid contribution to the quality and level.
javadba
Dear nbro, that is a fair comment. I dislike adverts too. But it is possible for a book and/or paper to be relevant to a question, as I believe it is in this case. Regards, Jim Stone
James V Stone