Iterative Stochastic Quasigradient procedures for robust estimation, machine learning and decision making problems
The talk illustrates the importance of a new type of non-smooth stochastic optimization and stochastic quasigradient (SQG) procedures for robust off-line and on-line decisions in large-scale machine learning, distributed models linkage, and robust decision-making problems. Advanced robust statistical analysis and machine learning models based on, in general, nonstationary stochastic optimization make it possible to account for potential distributional shifts, heavy tails, and nonstationarities in data streams that can mislead traditional statistical and machine learning models, in particular deep artificial neural networks (ANNs). The proposed models and methods rely on probabilistic and non-probabilistic (explicitly given or simulated) distributions combining measures of chance, experts’ beliefs, and similarity measures (for example, a compressed form of kernel estimators). This is vitally important for integrated sustainable development modeling. For highly nonconvex models such as deep ANNs, SQG procedures help avoid local solutions. In the case of nonstationary data, SQG procedures allow for sequential revision and adaptation of parameters to the changing environment, possibly based on off-line adaptive simulations. The outlined non-smooth stochastic optimization (STO) approaches and SQG-based procedures are illustrated with examples of robust estimation, machine learning, and adaptive Monte Carlo optimization for preventive-adaptive catastrophic risk (floods, epidemics) modeling and management.
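To make the idea concrete, the following is a minimal illustrative sketch (not the speaker's actual implementation) of an SQG iteration applied to a robust estimation problem: the median minimizes the non-smooth loss E|x − ξ|, whose stochastic quasigradient is sign(x − ξ). The function names, the Cauchy data stream, and the step-size constant are assumptions introduced for illustration only; the diminishing steps ρ_k = a/(k+1) satisfy the classical SQG conditions Σρ_k = ∞, Σρ_k² < ∞.

```python
import math
import random

def sqg_minimize(quasigradient, sample, x0=0.0, steps=50_000, a=2.0):
    """Generic SQG iteration x_{k+1} = x_k - rho_k * g(x_k, xi_k)
    with diminishing steps rho_k = a / (k + 1)."""
    x = x0
    for k in range(steps):
        xi = sample()                              # one new streaming observation
        x -= a / (k + 1) * quasigradient(x, xi)    # quasigradient step
    return x

# Heavy-tailed data stream: Cauchy with median 3.0 (its mean does not
# exist, so a sample-mean estimator would be misled by the heavy tails).
random.seed(1)
cauchy = lambda: 3.0 + math.tan(math.pi * (random.random() - 0.5))

# Robust median estimation: sign(x - xi) is a stochastic quasigradient
# of the non-smooth loss E|x - xi|.
median_est = sqg_minimize(
    quasigradient=lambda x, xi: 1.0 if x > xi else -1.0,
    sample=cauchy,
)
print(round(median_est, 2))  # should settle near the true median 3.0
```

Because the iteration touches one observation at a time and uses only a subgradient sign, the same scheme adapts naturally to the on-line and nonstationary settings the abstract emphasizes: restarting or enlarging the step size lets the estimate track a drifting median.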