Toan Minh Tran
University of Adelaide · School of Computer Science

PhD · University of Adelaide

About

8 Publications · 3,617 Reads
329 Citations (327 since 2017, from 5 Research Items)
[Chart: citations per year, 2017–2023]

Publications (8)
Conference Paper
Full-text available
Deep learning models have demonstrated outstanding performance in several problems, but their training process tends to require immense amounts of computational and human resources for training and labeling, constraining the types of problems that can be tackled. Therefore, the design of effective training methods that require small labeled trainin...
Preprint
Full-text available
Deep learning models have demonstrated outstanding performance in several problems, but their training process tends to require immense amounts of computational and human resources for training and labeling, constraining the types of problems that can be tackled. Therefore, the design of effective training methods that require small labeled trainin...
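The truncated abstract argues for training methods that need only small labeled sets; pool-based active learning with uncertainty sampling is the standard baseline in that area. Below is a minimal sketch of that baseline, not this paper's method: a scikit-learn classifier stands in for the deep model, and the query rule, budget, and data are all illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def entropy(p):
    # Predictive entropy per sample; higher means more uncertain.
    return -np.sum(p * np.log(p + 1e-12), axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

labeled = list(rng.choice(len(X), size=20, replace=False))  # small seed set
pool = [i for i in range(len(X)) if i not in labeled]

for round_ in range(5):                          # budget: 5 rounds x 10 labels
    clf = LogisticRegression().fit(X[labeled], y[labeled])
    scores = entropy(clf.predict_proba(X[pool]))
    query = np.argsort(scores)[-10:]             # most uncertain pool points
    for q in sorted(query, reverse=True):        # pop from the back first
        labeled.append(pool.pop(q))              # "oracle" provides the label
    print(f"round {round_}: acc={clf.score(X, y):.3f}, labeled={len(labeled)}")
```

Each round spends its labeling budget on the points the current model is least sure about, which is the core idea behind training with small labeled sets.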
Conference Paper
Full-text available
We propose a method that substantially improves the efficiency of deep distance metric learning based on the optimization of the triplet loss function. One epoch of such training process based on a naive optimization of the triplet loss function has a run-time complexity O(N^3), where N is the number of training samples. Such optimization scales po...
Preprint
Full-text available
We propose a method that substantially improves the efficiency of deep distance metric learning based on the optimization of the triplet loss function. One epoch of such training process based on a naive optimization of the triplet loss function has a run-time complexity O(N^3), where N is the number of training samples. Such optimization scales po...
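The abstract's O(N^3) count comes from enumerating every (anchor, positive, negative) triple. The sketch below contrasts that naive enumeration with per-anchor hard mining, one standard way to avoid the cubic cost; the paper's own bound-based method differs, so this is only an illustration of the complexity gap it targets.

```python
import numpy as np

def triplet_loss(a, p, n, margin=0.2):
    # Hinge on squared Euclidean distances: max(0, d(a,p) - d(a,n) + margin).
    return max(0.0, np.sum((a - p) ** 2) - np.sum((a - n) ** 2) + margin)

rng = np.random.default_rng(0)
emb = rng.normal(size=(30, 8))         # N = 30 embeddings
lab = rng.integers(0, 3, size=30)      # class labels

# Naive: every valid (anchor, positive, negative) triple -> O(N^3) terms.
naive = sum(
    triplet_loss(emb[i], emb[j], emb[k])
    for i in range(30) for j in range(30) for k in range(30)
    if i != j and lab[i] == lab[j] and lab[i] != lab[k]
)

# Mining: hardest positive and hardest negative per anchor
# -> O(N^2) distance computations, only O(N) loss terms.
d = np.sum((emb[:, None] - emb[None, :]) ** 2, axis=2)
mined = 0.0
for i in range(30):
    pos = (lab == lab[i]) & (np.arange(30) != i)
    neg = lab != lab[i]
    if pos.any() and neg.any():
        mined += max(0.0, d[i, pos].max() - d[i, neg].min() + 0.2)
print(naive, mined)
```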
Article
Full-text available
Data augmentation is an essential part of the training process applied to deep learning models. The motivation is that a robust training process for deep learning models depends on large annotated datasets, which are expensive to acquire, store, and process. Therefore, a reasonable alternative is to be able to automatically generate new annota...
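As background for the abstract's premise, here is a minimal sketch of conventional label-preserving augmentation (random flips and padded crops) that enlarges an annotated set "for free" during training. The article itself concerns automatically generating new annotated samples, which this hand-coded pipeline does not implement; the image size and padding are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Return a label-preserving random variant of an HxWxC image."""
    if rng.random() < 0.5:                      # random horizontal flip
        img = img[:, ::-1]
    dy, dx = rng.integers(0, 5, size=2)         # random 4-pixel-padded crop
    padded = np.pad(img, ((4, 4), (4, 4), (0, 0)))
    return padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]

# Each epoch sees a different random variant of every (image, label) pair,
# effectively enlarging the annotated dataset without new labeling effort.
image = rng.random((32, 32, 3))
batch = np.stack([augment(image) for _ in range(8)])  # 8 variants, same label
print(batch.shape)                                    # (8, 32, 32, 3)
```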
Conference Paper
Combining outputs from different classifiers to achieve high accuracy in a classification task is one of the most active research areas in ensemble methods. Although many state-of-the-art approaches have been introduced, no single method performs best on all data sources. With the aim of introducing an effective classification model, we propose a Gaussia...
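One plausible reading of a "Gaussian" combiner over classifier outputs is sketched below: fit a class-conditional multivariate Gaussian to the concatenated base-classifier posterior vectors (the meta-data) and label a test point by maximum density. The paper's exact formulation may differ; the data, number of base classifiers K, and ridge term are assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
C = 3                                            # number of classes
# meta: concatenated posterior outputs of K=2 base classifiers per sample
meta = rng.dirichlet(np.ones(C), size=(200, 2)).reshape(200, -1)
y = rng.integers(0, C, size=200)

# Fit one Gaussian per class over the meta-data
# (a small diagonal ridge keeps the covariance non-singular).
models = []
for c in range(C):
    Xc = meta[y == c]
    cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(meta.shape[1])
    models.append(multivariate_normal(mean=Xc.mean(axis=0), cov=cov))

def combine(x):
    # Predict the class whose Gaussian assigns the meta-vector highest density.
    return int(np.argmax([m.pdf(x) for m in models]))

print(combine(meta[0]), y[0])
```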
Conference Paper
Combining multiple classifiers to achieve better performance than any single classifier is one of the most important research areas in machine learning. In this paper, we focus on combining different classifiers to form an effective ensemble system. By introducing a novel framework that operates on the outputs of different classifiers, our aim is to build a...
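The best-known instance of a framework that operates on classifier outputs is stacking: a meta-learner trained on the base classifiers' posterior probabilities. A minimal stacking sketch follows, as generic context rather than the paper's specific framework; the base learners and meta-learner are arbitrary choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

bases = [GaussianNB(), RandomForestClassifier(random_state=0)]

# Level1 data: out-of-fold posterior probabilities of each base classifier,
# so the meta-learner never sees predictions made on already-seen samples.
Z_tr = np.hstack([cross_val_predict(b, Xtr, ytr, method="predict_proba")
                  for b in bases])
meta = LogisticRegression().fit(Z_tr, ytr)

Z_te = np.hstack([b.fit(Xtr, ytr).predict_proba(Xte) for b in bases])
print("stacked accuracy:", meta.score(Z_te, yte))
```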
Conference Paper
The paper introduces a novel 2-Stage model for multi-classifier systems. Instead of gathering the posterior probabilities produced by the base classifiers into a single dataset called meta-data or Level1 data, as in the original 2-Stage model, here we separate the data into K Level1 matrices corresponding to the K base classifiers. These data matrices, in turn...
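The structural idea in the abstract, keeping one Level1 matrix per base classifier instead of concatenating them, can be sketched as below: a separate second-stage learner is trained on each classifier's posterior matrix. How the paper combines the K second-stage outputs is not stated in the visible text; simple averaging here is an assumption.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

bases = [GaussianNB(), DecisionTreeClassifier(random_state=0)]  # K = 2

# One Level1 matrix per base classifier (out-of-fold posteriors),
# kept separate rather than merged into a single meta-dataset.
level1 = [cross_val_predict(b, Xtr, ytr, method="predict_proba") for b in bases]
stage2 = [LogisticRegression().fit(Z, ytr) for Z in level1]

# At test time each second-stage model scores its own classifier's matrix;
# the final posterior is their average (one possible combination rule).
probs = np.mean(
    [s.predict_proba(b.fit(Xtr, ytr).predict_proba(Xte))
     for b, s in zip(bases, stage2)],
    axis=0,
)
print("accuracy:", (probs.argmax(axis=1) == yte).mean())
```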

Questions (1)
Question
I'm testing the performance of the "dropout" technique introduced by Gal & Ghahramani for approximate inference, but it is not working very well. So I'm wondering if there are some other algorithms that can be used to estimate the posterior distribution of the parameters in the Bayesian CNN model.
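For context on the technique the question refers to: Gal & Ghahramani's MC dropout approximates Bayesian inference by keeping dropout active at test time and averaging many stochastic forward passes, so the spread of those passes estimates predictive uncertainty. A minimal PyTorch sketch follows; the toy CNN, dropout rate, and number of passes T are illustrative assumptions, not taken from the question.

```python
import torch
import torch.nn as nn

# Small CNN with dropout; any dropout-equipped network works the same way.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Dropout2d(0.25),
    nn.Flatten(), nn.Linear(16 * 28 * 28, 10),
)

def mc_dropout_predict(model, x, T=50):
    """Approximate the posterior predictive by averaging T stochastic passes."""
    model.train()                      # keep dropout ON at test time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(T)])
    return probs.mean(0), probs.var(0)  # predictive mean, per-class variance

x = torch.randn(4, 1, 28, 28)           # a batch of 4 dummy images
mean, var = mc_dropout_predict(model, x)
print(mean.shape, var.shape)            # torch.Size([4, 10]) twice
```

Alternatives to this approximation include deep ensembles, SWAG, and stochastic-gradient MCMC methods such as SGLD, all of which also target the posterior over the network's parameters.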
