Source publication
Deep learning has revolutionized the computer vision and image classification domains. In this context, architectures based on Convolutional Neural Networks (CNNs) and Deep Neural Networks (DNNs) are the most widely applied models. In this article, we introduce two procedures for training CNNs and DNNs based on Gradient Boosting (GB), namely GB-CNN and GB-DNN.
Citations
... Convolutional neural networks (CNNs) with various kernel sizes are used in PIPE-CovNet. However, a study conducted in 2023 [11] attempted to increase classification effectiveness by using a gradient-boosting approach to train convolutional and deep neural networks. Research from 2022 [12] demonstrated that Gradient Boosting Machines (GBMs) can be more accurate and require less training time than deep learning for binary classification tasks. ...
... Until ResNet was developed [18], extending the depth of a CNN beyond a certain point could be counterproductive. However, as demonstrated in [11], stacking multiple dense layers trained with gradient boosting can yield better performance. In the PIPE-CovNet+ model, we used a gradient-boosting approach to train many dense layers, treating each dense layer as a discrete predictor that learns to rectify the errors made by the preceding dense layers, rather than as part of a single deep neural network. ...
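To make the cited training scheme concrete, below is a minimal PyTorch sketch of gradient-boosted training of dense blocks: each new block is fit to the pseudo-residuals of the frozen ensemble built so far, then frozen itself. This is an assumed illustration, not the GB-CNN [11] or PIPE-CovNet+ implementation; the toy data and all names (make_dense_block, lr_shrinkage, and so on) are hypothetical.

```python
# Assumed sketch of boosted dense blocks (not the authors' code): each block is
# a weak learner fit to the residuals of the frozen blocks trained before it.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy regression data standing in for features extracted by a backbone.
X = torch.randn(512, 32)
y = torch.sin(X.sum(dim=1, keepdim=True))

def make_dense_block(in_dim, hidden=64, out_dim=1):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

blocks = []
lr_shrinkage = 0.5                 # boosting step size (shrinkage)
prediction = torch.zeros_like(y)   # F_0: initial constant model

for round_idx in range(5):         # each round adds one dense weak learner
    residual = y - prediction      # pseudo-residuals for squared-error loss
    block = make_dense_block(X.shape[1])
    opt = torch.optim.Adam(block.parameters(), lr=1e-2)
    for _ in range(200):           # fit this block alone to current residuals
        opt.zero_grad()
        loss = nn.functional.mse_loss(block(X), residual)
        loss.backward()
        opt.step()
    for p in block.parameters():   # freeze: earlier learners are never revisited
        p.requires_grad_(False)
    blocks.append(block)
    with torch.no_grad():
        prediction = prediction + lr_shrinkage * block(X)
    mse = nn.functional.mse_loss(prediction, y).item()
    print(f"round {round_idx}: train MSE = {mse:.4f}")
```

With squared-error loss the pseudo-residuals are simply the targets minus the current prediction, so each round reduces to fitting a small regression; the shrinkage factor plays the usual boosting step-size role.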
... In essence, each frozen dense layer retains the knowledge acquired during its training iteration and is appended to the model to improve on the ensemble formed by the earlier dense layers. In addition, freezing the dense layers reduces overfitting [11]. Figure 2 illustrates the gradient-boosting-based training process of the PIPE-CovNet+ model. ...
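The freezing step this context refers to can be sketched as follows; again, this is an assumed illustration rather than the PIPE-CovNet+ source, and freeze, trunk, and HIDDEN are hypothetical names. Disabling gradients on a trained dense layer before appending it ensures that later boosting rounds extend, but never overwrite, the knowledge stored in earlier layers.

```python
import torch.nn as nn

HIDDEN = 64

def freeze(module: nn.Module) -> nn.Module:
    """Disable gradients so the layer's learned weights stay fixed."""
    for p in module.parameters():
        p.requires_grad_(False)
    return module

# Frozen trunk: dense layers trained (and frozen) in earlier boosting rounds.
trunk = freeze(nn.Sequential(nn.Flatten(), nn.Linear(784, HIDDEN), nn.ReLU()))

# New dense layer for the current round; it would be trained (with a temporary
# output head) against the current residuals, then frozen and appended.
new_layer = nn.Sequential(nn.Linear(HIDDEN, HIDDEN), nn.ReLU())
# ... training of new_layer goes here ...
trunk = nn.Sequential(*trunk, *freeze(new_layer))

# Earlier knowledge is now immutable: no trunk parameter receives gradients.
n_trainable = sum(p.numel() for p in trunk.parameters() if p.requires_grad)
print(f"trainable parameters in trunk: {n_trainable}")  # -> 0
```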