Fig 1 - available via license: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International

A snippet of the iterative model training procedure. Light gray indicates that a layer is frozen, dark gray that a layer is being trained, and white indicates the output layer (also being trained).
Source publication
Deep learning has revolutionized the computer vision and image classification domains. In this context, architectures based on Convolutional Neural Networks (CNNs) are the most widely applied models. In this article, we introduce two procedures for training CNNs and Deep Neural Networks based on Gradient Boosting (GB), n...
Contexts in source publication
Context
... dense layers are added and trained iteratively while freezing previous dense layers. This model is compared to the same architecture trained jointly as a standard CNN model. The second proposed architecture considers only the iterative training of a network composed solely of dense layers. This process is illustrated in Fig. 1, in which only the dense layers are shown (note that the figure does not represent the actual size of the dense layers used). In the first iteration, a single dense (and output) layer is trained (dark-gray units in the left diagram of Fig. 1). This layer is the first fully connected layer. After fitting this model, the model is copied and a second dense layer is added (iteration 1, second diagram in Fig. 1), freezing the weights of the first dense layer (shown in Fig. 1 with light-gray neurons). This second step fine-tunes the parameters of the convolutional layers (if present), skips the training of the previous dense layer, and trains the newly added dense layer (dark-gray units). Each new model fits the last dense layer and the convolutional blocks to the corresponding pseudo-residuals (Algorithm 1 gives the full training procedure of GB-CNN). The training procedure continues ...
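The iterative freeze-and-add procedure described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' GB-CNN implementation: it uses squared loss (so the pseudo-residuals are simply y minus the current prediction), omits convolutional blocks, and the layer width, learning rate, epoch count, and per-stage feature standardization are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def train_dense_stage(X, r, hidden_dim=16, lr=0.02, epochs=800):
    """Fit one new dense layer plus a linear output layer to the
    pseudo-residuals r by full-batch gradient descent on squared loss."""
    n, d = X.shape
    W = rng.normal(0.0, 0.5, (d, hidden_dim))
    b = np.zeros(hidden_dim)
    w = rng.normal(0.0, 0.5, hidden_dim)
    c = 0.0
    for _ in range(epochs):
        H = relu(X @ W + b)           # activations of the new dense layer
        pred = H @ w + c
        g = 2.0 * (pred - r) / n      # d(loss)/d(pred)
        gw, gc = H.T @ g, g.sum()     # output-layer gradients
        gH = np.outer(g, w) * (H > 0)  # backprop through ReLU
        gW, gb = X.T @ gH, gH.sum(axis=0)
        W -= lr * gW; b -= lr * gb
        w -= lr * gw; c -= lr * gc
    return W, b, w, c

def gb_dense_fit(X, y, n_stages=3):
    """Iterative procedure in the spirit of Fig. 1: each stage freezes the
    dense layers trained so far, feeds their representation forward, and
    trains one newly added dense (and output) layer on the current
    pseudo-residuals, updating the prediction additively."""
    stages = []
    features = X                       # input to the next (deeper) dense layer
    pred = np.zeros(len(y))
    for _ in range(n_stages):
        r = y - pred                   # pseudo-residuals under squared loss
        W, b, w, c = train_dense_stage(features, r)
        stages.append((W, b, w, c))
        H = relu(features @ W + b)     # this layer is frozen from now on
        pred = pred + (H @ w + c)      # additive update, as in gradient boosting
        # standardize the frozen representation before stacking the next
        # layer on top (a simplification for numerical stability here)
        features = (H - H.mean(axis=0)) / (H.std(axis=0) + 1e-8)
    return stages, pred

# Toy regression problem: y = sin of a linear projection plus noise
X = rng.normal(size=(200, 3))
y = np.sin(X @ np.array([1.0, -2.0, 0.5])) + 0.1 * rng.normal(size=200)
stages, pred = gb_dense_fit(X, y)
mse = float(np.mean((y - pred) ** 2))
print(f"training MSE after {len(stages)} stages: {mse:.3f}")
```

Each stage only ever updates the newly added layer, so earlier layers behave like the light-gray frozen units in Fig. 1, while the additive residual update plays the role of the gradient-boosting step.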