
Seyedsaman Emami
Autonomous University of Madrid (UAM) · Department of Computer Engineering
Doctor of Philosophy
Teaching, researching, and seeking a postdoctoral opportunity to further contribute to the field of machine learning.
About
8 Publications
693 Reads
33 Citations
Introduction
My research focuses on the training procedures of various machine learning algorithms, including deep networks. I bring strong expertise in gradient boosting and ensemble learning, and I am eager to collaborate with researchers and contribute to ongoing projects. Please explore my ResearchGate profile for a detailed overview of my academic journey and research contributions. I look forward to discussing potential collaborations and opportunities to contribute to cutting-edge research.
Education
June 2019 - September 2024
Universidad Autónoma de Madrid
Field of study: Machine Learning
Publications (8)
This paper presents a computationally efficient variant of Gradient Boosting (GB) for multi-class classification and multi-output regression tasks. Standard GB uses a 1-vs-all strategy for classification tasks with more than two classes, which entails training one tree per class at each iteration. In this work, we propose the use...
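The 1-vs-all cost the abstract refers to can be illustrated with a short sketch. This is my own illustration, not code from the paper: where standard multi-class GB would fit one tree per class at each iteration on the negative gradient of the cross-entropy loss, a single multi-output regression tree per iteration fits all class gradients at once. The dataset, depth, learning rate, and iteration count are arbitrary choices for the demo.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeRegressor

X, y = load_iris(return_X_y=True)
K = 3
Y = np.eye(K)[y]  # one-hot targets, shape (n_samples, K)

def softmax(F):
    e = np.exp(F - F.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# One multi-output tree per iteration fits the gradient for all K
# classes at once (standard 1-vs-all GB would train K trees here).
F = np.zeros((len(y), K))   # raw ensemble scores
lr = 0.1
trees = []
for _ in range(20):
    residual = Y - softmax(F)  # negative gradient of cross-entropy
    tree = DecisionTreeRegressor(max_depth=3).fit(X, residual)
    F += lr * tree.predict(X)
    trees.append(tree)

acc = (F.argmax(axis=1) == y).mean()  # training accuracy of the ensemble
```

Because `DecisionTreeRegressor` accepts a multi-output target, one fitted tree replaces the K per-class trees of the 1-vs-all scheme at every iteration.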
This paper presents a novel methodology to address multi-output regression problems by combining deep neural networks and gradient boosting. The proposed approach uses dense layers as additive models within the Gradient Boosting framework with an auto transfer learning technique. At each boosting iteration, the deep...
Gradient Boosting Machines (GBMs) have demonstrated outstanding proficiency in various machine learning applications, such as classification and regression. Gradient boosting builds a set of regression models in an iterative process in which, at each iteration, a regressor is trained to reduce a given loss on a given objective. This paper propose...
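The iterative process the abstract describes, where each new regressor is trained to reduce the current loss, reduces for squared error to fitting every new tree on the residuals of the running prediction. A minimal sketch of that generic loop (not the paper's proposed method; data and hyperparameters are invented for the demo):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Squared loss: the negative gradient is simply the residual, so each
# iteration trains a small tree on what the ensemble still gets wrong.
pred = np.full_like(y, y.mean())  # initial constant model
lr = 0.1
models = []
for _ in range(50):
    residual = y - pred
    m = DecisionTreeRegressor(max_depth=2).fit(X, residual)
    pred += lr * m.predict(X)
    models.append(m)

mse = np.mean((y - pred) ** 2)  # training error after boosting
```

The shrinkage factor `lr` damps each tree's contribution, trading more iterations for better generalization.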
Deep learning has revolutionized the computer vision and image classification domains. In this context, Convolutional Neural Network (CNN) based architectures are the most widely applied models. In this article, we introduce two procedures for training Convolutional Neural Networks (CNNs) and Deep Neural Networks based on Gradient Boosting (GB), n...
This paper presents a novel technique based on gradient boosting to train the final layers of a neural network (NN). Gradient boosting is an additive expansion algorithm in which a series of models are trained sequentially to approximate a given function. A neural network can also be seen as an additive expansion where the scalar product of the res...
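The additive-expansion view of a neural network mentioned above can be sketched numerically: treat each hidden unit's contribution `w_m * tanh(X @ v_m)` as one term of an additive model and fit the terms greedily on the residual, boosting-style. This is my own simplified illustration (random candidate directions and closed-form output weights), not the paper's actual training procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
y = np.tanh(X @ rng.normal(size=5))  # synthetic target function

# A one-hidden-layer network is an additive expansion: sum_m w_m * tanh(X @ v_m).
# Here each term is added sequentially, fitted to the current residual.
pred = np.zeros(len(y))
for m in range(20):
    residual = y - pred
    v = rng.normal(size=5)        # random candidate direction (simplification)
    h = np.tanh(X @ v)            # hidden-unit activation
    w = (h @ residual) / (h @ h)  # least-squares output weight for this unit
    pred += w * h

mse = np.mean((y - pred) ** 2)  # residual error of the greedy expansion
```

Each step projects the residual onto the new unit's activation, so the training error is non-increasing, which is the sense in which the output layer can be trained as a boosted additive series.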
Deep learning has revolutionized the computer vision and image classification domains. In this context, CNN-based architectures and Deep Neural Networks (DNNs) are the most widely applied models. In this article, we introduce two procedures for training CNNs and DNNs based on Gradient Boosting (GB), namely GB-CNN and GB...
This paper presents a computationally efficient variant of gradient boosting for multi-class classification and multi-output regression tasks. Standard gradient boosting uses a 1-vs-all strategy for classification tasks with more than two classes, which means one tree per class must be trained at each iteration. In this work, we...