Sensitivity analysis helps decision-makers understand how a model output responds to variation in the model inputs. One of the most authoritative measures in global sensitivity analysis is the Sobol' total-order index ($T_i$), which can be computed with several different estimators. Although previous comparisons exist, it is hard to know which estimator performs best because the results are contingent on several benchmark settings: the sampling method ($\tau$), the distribution of the model inputs ($\phi$), the number of model runs ($N_t$), the test function or model ($\varepsilon$) and its dimensionality ($k$), the weight of higher-order effects (e.g. second- and third-order, $k_2, k_3$), or the performance measure selected ($\delta$). Here we overcome these limitations and simultaneously assess all total-order estimators in an eight-dimensional hypercube where $(\tau, \phi, N_t, \varepsilon, k, k_2, k_3, \delta)$ are treated as random parameters. This design allows us to create an unprecedentedly large range of benchmark scenarios. Our results indicate that, in general, the preferred estimator should be Razavi and Gupta's, followed by those of Jansen and Janon/Monod. The remaining estimators lag significantly behind in performance. Our work helps analysts navigate the myriad of total-order formulae by effectively eliminating the uncertainty in the selection of the best estimator.
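To make the object of comparison concrete, the sketch below shows one of the total-order estimators mentioned above, Jansen's, applied to the Ishigami function, a standard sensitivity-analysis test case. It is an illustrative minimal implementation, not the benchmarking design of this work: the function names (`jansen_total_order`, `ishigami`), the uniform $[0,1]$ inputs, and the base sample size $N$ are assumptions made for the example.

```python
import numpy as np

def jansen_total_order(f, N, k, rng=None):
    """Illustrative sketch of Jansen's estimator of the Sobol' total-order index.

    f   -- model: callable mapping an (n, k) array of inputs to an (n,) array of outputs
    N   -- base sample size (the design costs N * (k + 1) model runs)
    k   -- number of uncertain inputs, assumed here to be i.i.d. uniform on [0, 1]
    """
    rng = np.random.default_rng(rng)
    A = rng.random((N, k))           # base sample matrix
    B = rng.random((N, k))           # independent resample matrix
    yA = f(A)
    var_y = np.var(yA, ddof=1)       # unconditional output variance
    T = np.empty(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # replace column i of A with column i of B
        yABi = f(ABi)
        # Jansen's estimator: E[(y_A - y_{A_B^(i)})^2] / (2 V(y))
        T[i] = np.mean((yA - yABi) ** 2) / (2 * var_y)
    return T

def ishigami(X, a=7.0, b=0.1):
    """Ishigami test function with inputs rescaled from [0, 1] to [-pi, pi]."""
    x = -np.pi + 2 * np.pi * X
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

print(jansen_total_order(ishigami, N=10_000, k=3, rng=42))
```

With this cost of $N(k+1)$ model runs, the three estimates should approach the analytical total-order indices of the Ishigami function (approximately $0.56$, $0.44$ and $0.24$); other estimators, such as Razavi and Gupta's or Janon/Monod's, differ in the sampling design and the formula applied to the model evaluations.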