Meng Tang's scientific contributions


Publications (1)


Figure 3. Training progress of ADM and gradient descent (GD) on Deeplab-MSc-largeFOV. Our ADM for the grid CRF loss with α-expansion significantly improves convergence and achieves lower training loss. For example, the first 1,000 iterations of ADM give a grid CRF loss lower than GD's best result.
Beyond Gradient Descent for Regularized Segmentation Losses
  • Conference Paper
  • Full-text available

April 2019 · 413 Reads · 37 Citations

Meng Tang · [...] · Yuri Boykov

The simplicity of gradient descent (GD) made it the default method for training ever-deeper and complex neural networks. Both loss functions and architectures are often explicitly tuned to be amenable to this basic local optimization. In the context of weakly-supervised CNN segmentation, we demonstrate a well-motivated loss function where an alternative optimizer (ADM) achieves the state-of-the-art while GD performs poorly. Interestingly, GD obtains its best result for a "smoother" tuning of the loss function. The results are consistent across different network architectures. Our loss is motivated by well-understood MRF/CRF regularization models in "shallow" segmentation and their known global solvers. Our work suggests that network design/training should pay more attention to optimization methods.
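The paper's ADM alternates between fitting network outputs and a discrete CRF solve via α-expansion. As a hypothetical illustration of the underlying splitting idea only (not the paper's algorithm), here is a minimal ADMM sketch for a toy ℓ1-regularized problem; the names `admm_l1` and `soft_threshold` are invented for this example:

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding: the prox operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1(b, lam, rho=1.0, iters=500):
    """ADMM for min_x 0.5*||x - b||^2 + lam*||x||_1.

    Splits the objective as f(x) + g(z) subject to x = z, then alternates:
      x-step: closed-form quadratic solve,
      z-step: soft-thresholding,
      u-step: dual (scaled multiplier) update.
    """
    x = np.zeros_like(b)
    z = np.zeros_like(b)
    u = np.zeros_like(b)
    for _ in range(iters):
        x = (b + rho * (z - u)) / (1.0 + rho)  # smooth term + quadratic penalty
        z = soft_threshold(x + u, lam / rho)   # l1 term + quadratic penalty
        u = u + x - z                          # dual ascent on the constraint x = z
    return z

b = np.array([3.0, -0.2, 1.5])
print(admm_l1(b, lam=1.0))  # converges to soft_threshold(b, 1.0) = [2. 0. 0.5]
```

The analogy to the paper's setting: the z-step plays the role of the discrete subproblem handled by a dedicated combinatorial solver (α-expansion for the CRF term), while the x-step corresponds to the smooth fitting term updated by standard means.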


Citations (1)


... Yang et al. [23] presented a method comprising a graph-model-based scheme, i.e., graph cuts [2], and a noisy learning paradigm (abbreviated to GMBM-DLM) for weakly-supervised instrument segmentation. Some studies [18,14] investigated penalization terms to regularize training. ...

Reference:

A Bayesian Approach to Weakly-supervised Laparoscopic Image Segmentation
Beyond Gradient Descent for Regularized Segmentation Losses