Xinmin Yang’s research while affiliated with Chongqing Normal University and other places


Publications (30)


A branch and bound algorithm for continuous multiobjective optimization problems using general ordering cones
  • Article

May 2025 · 1 Read · European Journal of Operational Research

Weitian Wu · Xinmin Yang

A Parallel Hybrid Action Space Reinforcement Learning Model for Real-world Adaptive Traffic Signal Control

March 2025 · 5 Reads

Yuxuan Wang · Meng Long · Qiang Wu · [...] · Xinmin Yang

Adaptive traffic signal control (ATSC) can effectively reduce vehicle travel times by dynamically adjusting signal timings, but it poses a critical challenge in real-world scenarios due to the complexity of real-time decision-making under dynamic and uncertain traffic conditions. The burgeoning field of intelligent transportation systems, bolstered by artificial intelligence techniques and extensive data availability, offers new prospects for the implementation of ATSC. In this study, we introduce a parallel hybrid action space reinforcement learning model (PH-DDPG) that optimizes traffic signal phases and their durations simultaneously, eliminating the need for the sequential decision-making seen in traditional two-stage models. Our model features a task-specific parallel hybrid action space tailored for adaptive traffic control, which concurrently outputs discrete phase selections and their associated continuous duration parameters, thereby inherently addressing dynamic traffic adaptation through unified parametric optimization. Furthermore, to ascertain the robustness and effectiveness of this approach, we conducted ablation studies on the use of a random action parameter mask within the critic network, which decouples the parameter space for individual actions and facilitates the use of preferable parameters for each action. The results of these studies confirm the efficacy of this method, distinctly enhancing its real-world applicability.
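The parallel hybrid action space idea can be illustrated with a minimal sketch. This is not the paper's architecture; the array shapes, the uniform noise mask, and the function names are illustrative assumptions. The point is that the actor emits Q-values and duration parameters for all phases in one pass, and the critic sees a random mask over the durations of non-selected actions:

```python
import numpy as np

rng = np.random.default_rng(0)

def select_hybrid_action(q_values, durations):
    """Parallel hybrid action selection: the actor emits, in a single
    forward pass, a Q-value and a continuous green duration for every
    phase; the chosen action is the argmax phase paired with its own
    duration, so no second decision stage is needed."""
    phase = int(np.argmax(q_values))
    return phase, float(durations[phase])

def masked_critic_input(state, durations, action):
    """Random action parameter mask (sketch): when evaluating action i,
    the critic sees only action i's duration; the other phases' duration
    slots are replaced with random values, decoupling the parameter
    spaces of the individual actions."""
    masked = rng.uniform(0.0, 1.0, size=durations.shape)
    masked[action] = durations[action]
    return np.concatenate([state, masked])

q = np.array([0.2, 0.9, 0.1, 0.4])       # per-phase Q-values
d = np.array([15.0, 42.0, 20.0, 30.0])   # per-phase green durations (seconds)
phase, dur = select_hybrid_action(q, d)  # phase 1 paired with its 42 s duration
```

Selecting the phase and its duration in one step is what removes the sequential two-stage decision the abstract refers to.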


[Figures: BBPIREe algorithm; numerical results when the sensing matrix A is well-conditioned; numerical results when the sensing matrix A is ill-conditioned; numerical results for q = 100, n = 500, t = 50 and for q = 200, n = 1000, t = 100; +5 more]

Global convergence of block Bregman proximal iteratively reweighted algorithm with extrapolation
  • Article
  • Publisher preview available

December 2024 · 4 Reads · Journal of Global Optimization

In this paper, we propose a Bregman proximal iteratively reweighted algorithm with extrapolation, based on block coordinate updates, for a class of optimization problems whose objective is the sum of a smooth, possibly nonconvex loss function and a general nonconvex regularizer with a separable structure. The proposed algorithm can solve the ℓ_p (0 < p < 1) regularization problem by employing an update strategy for the smoothing parameter in its smooth approximation model. When the extrapolation parameter satisfies certain conditions, global convergence and a local convergence rate are obtained using the Kurdyka–Łojasiewicz (KL) property of the objective function. Numerical experiments indicate the superiority of the proposed algorithm.

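The iteratively reweighted idea in the abstract can be sketched for the single-block case. This is a generic proximal iteratively reweighted ℓ1 scheme with extrapolation and a shrinking smoothing parameter, under assumed step sizes and schedules, not the authors' algorithm:

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal map of t*|.|."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def irl1_lp(A, b, lam=0.1, p=0.5, eps=1.0, beta=0.3, iters=200):
    """Proximal iteratively reweighted l1 with extrapolation for
    min 0.5*||Ax - b||^2 + lam * sum_i |x_i|^p, using the smooth
    approximation (|x_i| + eps)^p with a geometrically shrinking
    smoothing parameter eps."""
    x = x_old = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the loss gradient
    for _ in range(iters):
        y = x + beta * (x - x_old)                # extrapolation step
        grad = A.T @ (A @ y - b)                  # gradient of the smooth loss at y
        w = p * (np.abs(x) + eps) ** (p - 1.0)    # weights from the smoothed |.|^p
        x_old, x = x, soft_threshold(y - grad / L, lam * w / L)
        eps = max(eps * 0.95, 1e-8)               # shrink the smoothing parameter
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = 3.0 * rng.standard_normal(5)
b = A @ x_true
x_hat = irl1_lp(A, b)                             # approximately recovers the sparse x_true
```

The shrinking eps is what lets the weights grow on small entries, so the reweighted threshold pushes them to exactly zero, mimicking the nonconvex ℓ_p penalty.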

Gradient-based algorithms for multi-objective bi-level optimization

June 2024 · 26 Reads

Multi-Objective Bi-Level Optimization (MOBLO) addresses nested multi-objective optimization problems common in a range of applications. However, its multi-objective and hierarchical bilevel nature makes it notably complex. Gradient-based MOBLO algorithms have recently grown in popularity, as they effectively solve crucial machine learning problems like meta-learning, neural architecture search, and reinforcement learning. Unfortunately, these algorithms depend on solving a sequence of approximation subproblems with high accuracy, resulting in adverse time and memory complexity that lowers their numerical efficiency. To address this issue, we propose a gradient-based algorithm for MOBLO, called gMOBA, which has fewer hyperparameters to tune, making it both simple and efficient. Additionally, we demonstrate the theoretical validity by accomplishing the desirable Pareto stationarity. Numerical experiments confirm the practical efficiency of the proposed method and verify the theoretical results. To accelerate the convergence of gMOBA, we introduce a beneficial L2O neural network (called L2O-gMOBA) implemented as the initialization phase of our gMOBA algorithm. Comparative results of numerical experiments are presented to illustrate the performance of L2O-gMOBA.


Last-iterate convergence of modified predictive method via high-resolution differential equation on bilinear game

May 2024 · 7 Reads

This paper discusses the convergence of the modified predictive method (MPM), proposed by Liang and Stokes, through its corresponding high-resolution differential equation (HRDE) in bilinear games. First, we present the high-resolution differential equation (MPM-HRDE) corresponding to the MPM. Then, we discuss the uniqueness of the solution of MPM-HRDE in bilinear games. Finally, we provide convergence results for MPM-HRDE in bilinear games. The results obtained in this paper address a gap in the existing literature and extend the conclusions of related works.
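For intuition on why predictive steps matter in bilinear games, the sketch below uses a plain extragradient-style update (the MPM itself differs in its prediction step; the step size and iteration count here are arbitrary illustrative choices). Simultaneous gradient descent-ascent cycles or diverges on this game, while the predictive half-step restores last-iterate convergence:

```python
import numpy as np

def extragradient_bilinear(B, steps=2000, eta=0.1):
    """Predictive (extragradient-style) updates on the bilinear game
    min_x max_y x^T B y, whose unique equilibrium is (0, 0)."""
    n, m = B.shape
    x, y = np.ones(n), np.ones(m)
    for _ in range(steps):
        # prediction half-step at the current point
        x_hat = x - eta * (B @ y)
        y_hat = y + eta * (B.T @ x)
        # corrected step using gradients at the predicted point
        x = x - eta * (B @ y_hat)
        y = y + eta * (B.T @ x_hat)
    return x, y

B = np.diag([1.0, 2.0])
x, y = extragradient_bilinear(B)   # both iterates shrink toward the equilibrium
```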


A Branch and Bound Algorithm for Multiobjective Optimization Problems Using General Ordering Cones

May 2024 · 20 Reads

Many existing branch and bound algorithms for multiobjective optimization problems require a significant computational cost to approximate the entire Pareto optimal solution set. In this paper, we propose a new branch and bound algorithm that approximates a part of the Pareto optimal solution set by introducing the additional preference information in the form of ordering cones. The basic idea is to replace the Pareto dominance induced by the nonnegative orthant with the cone dominance induced by a larger ordering cone in the discarding test. In particular, we consider both polyhedral and non-polyhedral cones, and propose the corresponding cone dominance-based discarding tests, respectively. In this way, the subboxes that do not contain efficient solutions with respect to the ordering cone will be removed, even though they may contain Pareto optimal solutions. We prove the global convergence of the proposed algorithm. Finally, the proposed algorithm is applied to a number of test instances as well as to 2- to 5-objective real-world constrained problems.
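The cone dominance underlying the discarding test can be sketched for a polyhedral ordering cone described by a matrix of inequalities. The specific matrix M below is an illustrative preference cone, not one taken from the paper:

```python
import numpy as np

def cone_dominates(a, b, M):
    """a dominates b w.r.t. the polyhedral ordering cone
    C = {d : M d >= 0}: holds iff b - a lies in C and a != b."""
    d = np.asarray(b) - np.asarray(a)
    return bool(np.any(d != 0) and np.all(M @ d >= 0))

# Pareto dominance is the special case M = identity
# (componentwise <= with a != b).
I = np.eye(2)
# A strictly larger polyhedral cone encoding extra preference information:
M = np.array([[1.0, 0.5],
              [0.5, 1.0]])

a, b = np.array([1.0, 2.0]), np.array([2.0, 1.5])
pareto = cone_dominates(a, b, I)   # False: a is not better in both objectives
cone = cone_dominates(a, b, M)     # True under the larger ordering cone
```

Because the larger cone dominates more points, a discarding test built on it removes more subboxes, including some that contain Pareto optimal but not cone-efficient solutions, which is exactly how the algorithm focuses on the preferred part of the Pareto set.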


Gradient-based algorithms for multi-objective bi-level optimization

May 2024 · 46 Reads · Science China Mathematics

Multi-objective bi-level optimization (MOBLO) addresses nested multi-objective optimization problems common in a range of applications. However, its multi-objective and hierarchical bi-level nature makes it notably complex. Gradient-based MOBLO algorithms have recently grown in popularity, as they effectively solve crucial machine learning problems like meta-learning, neural architecture search, and reinforcement learning. Unfortunately, these algorithms depend on solving a sequence of approximation subproblems with high accuracy, resulting in adverse time and memory complexity that lowers their numerical efficiency. To address this issue, we propose a gradient-based algorithm for MOBLO, called gMOBA, which has fewer hyperparameters to tune, making it both simple and efficient. Additionally, we demonstrate the theoretical validity by accomplishing the desirable Pareto stationarity. Numerical experiments confirm the practical efficiency of the proposed method and verify the theoretical results. To accelerate the convergence of gMOBA, we introduce a beneficial L2O (learning to optimize) neural network (called L2O-gMOBA) implemented as the initialization phase of our gMOBA algorithm. Comparative results of numerical experiments are presented to illustrate the performance of L2O-gMOBA.


Alleviating limit cycling in training GANs with an optimization technique

May 2024 · 11 Reads · Science China Mathematics

In this paper, we undertake further investigation to alleviate the issue of limit cycling behavior in training generative adversarial networks (GANs) through the proposed predictive centripetal acceleration algorithm (PCAA). Specifically, we first derive the upper and lower complexity bounds of PCAA for a general bilinear game, with the last-iterate convergence rate notably improving upon previous results. Then, we combine PCAA with the adaptive moment estimation algorithm (Adam) to propose PCAA-Adam, for practical training of GANs to enhance their generalization capability. Finally, we validate the effectiveness of the proposed algorithm through experiments conducted on bilinear games, multivariate Gaussian distributions, and the CelebA dataset, respectively.


[Figures: adaptive sampling stochastic multi-gradient algorithm; comparison of stochastic approximation with fixed sample sizes and ASSMG on problem 1 (a: ‖d_k‖ vs. iteration number; b: average sample size vs. iteration number; c: ‖d_k‖ vs. cumulative number of gradient evaluations); ‖d_k‖ vs. iteration number on two test problems; final solutions obtained by three algorithms with fixed iterations on the two problems]
Adaptive Sampling Stochastic Multigradient Algorithm for Stochastic Multiobjective Optimization

November 2023 · 58 Reads · 3 Citations · Journal of Optimization Theory and Applications

In this paper, we propose an adaptive sampling stochastic multigradient algorithm for solving stochastic multiobjective optimization problems. Instead of requiring additional storage or computation of full gradients, the proposed method reduces variance by adaptively controlling the sample size used. Without convexity assumptions on the objective functions, we show that the proposed algorithm converges almost surely to Pareto stationary points. We then analyze the convergence rates of the proposed algorithm. Numerical experiments are presented to demonstrate the effectiveness of the proposed algorithm.
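The two ingredients the abstract names, a common descent direction built from sampled gradients and an adaptively growing sample size, can be sketched for two objectives. The sampling schedule, step size, threshold, and test functions below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def common_descent_direction(g1, g2):
    """Min-norm element of the convex hull of two gradients:
    minimize ||lam*g1 + (1-lam)*g2|| over lam in [0,1]. Its norm
    tending to zero signals an approximate Pareto stationary point."""
    diff = g1 - g2
    denom = float(diff @ diff)
    lam = 0.5 if denom == 0.0 else float(np.clip(-(g2 @ diff) / denom, 0.0, 1.0))
    return -(lam * g1 + (1.0 - lam) * g2)

def assmg_sketch(grad1, grad2, x0, steps=100, step=0.1, n0=4):
    """Adaptive-sampling flavour: average n noisy gradient draws per
    objective, and double n once the direction norm falls below a
    threshold, i.e. demand tighter accuracy near stationarity."""
    rng = np.random.default_rng(0)
    x, n = np.asarray(x0, dtype=float), n0
    for _ in range(steps):
        g1 = np.mean([grad1(x) + 0.01 * rng.standard_normal(x.size) for _ in range(n)], axis=0)
        g2 = np.mean([grad2(x) + 0.01 * rng.standard_normal(x.size) for _ in range(n)], axis=0)
        d = common_descent_direction(g1, g2)
        x = x + step * d
        if np.linalg.norm(d) < 0.5:
            n = min(2 * n, 256)     # grow the sample size to cut variance
    return x

# Two smooth objectives f1(x) = ||x - e1||^2 and f2(x) = ||x + e1||^2,
# whose Pareto set is the segment between -e1 and e1 on the first axis.
e1 = np.array([1.0, 0.0])
x = assmg_sketch(lambda x: 2.0 * (x - e1), lambda x: 2.0 * (x + e1), [3.0, 3.0])
```

Growing the sample size only near stationarity is what avoids full-gradient computations early on while still driving the variance down where accuracy matters.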



Citations (10)


... The use of design optimization in electrical machines accomplished significant performance improvements and developments. Many researchers used deterministic methods [2][5] [15][16][17][18], stochastic [19] (e.g. genetic [20][21][22] and particle swarm [23][24][25] algorithms) methods, and machine learning (ML) methods [1] [26][27][28][29][30][31][32][33], such as Bayesian optimization (BO) algorithms [31] and [34], to improve different aspects of motor performance. ...

Reference:

A physics-informed Bayesian optimization method for rapid development of electrical machines
Adaptive Sampling Stochastic Multigradient Algorithm for Stochastic Multiobjective Optimization

Journal of Optimization Theory and Applications

... Memory gradient algorithms extend gradient descent by incorporating past gradient information and have demonstrated superior performance over the basic gradient descent method [5,10,14,[24][25][26]29]. Let f : R n → R be a smooth function, and Communicated by Sándor Zoltán Németh. ...

Memory gradient method for multiobjective optimization
  • Citing Article
  • April 2023

Applied Mathematics and Computation

... This metric facilitates more accurate modeling of distributional uncertainty, allowing for a richer representation of possible future states in scenarios [418,[428][429][430]. Researchers are also exploring discrete approximation methods to tackle the computational challenges inherent in large-scale problems. Moreover, convergence analysis is crucial to ensuring that the algorithms employed in two-stage optimization reliably reach optimal or near-optimal solutions [424,[431][432][433][434]. ...

Stochastic Approximation Methods for the Two-Stage Stochastic Linear Complementarity Problem
  • Citing Article
  • September 2022

SIAM Journal on Optimization

... For the equation system Ax = b, when the determinant of A is nonzero, the system has a unique solution: the vector decomposition x = k1x1 + k2x2 + … + knxn yields a determined set of values k1, k2, …, kn, which are the coordinates of the vector x. A nonzero determinant of A also means that the vectors x1, x2, …, xn are linearly independent (Zhao, 2021; Guo, Li, & Yang, 2023; Ma et al., 2022; Aparkin, 2021). ...

Global convergence of augmented Lagrangian method applied to mathematical program with switching constraints
  • Citing Article
  • January 2022

Journal of Industrial and Management Optimization

... In recent literature, there has been an emergence of a new theoretical framework for modeling frictionless contact in thermoelastic materials, as discussed in Liu et al. (2021). This model introduces two sets of unilateral constraints: one governing normal displacement through the Signorini condition on a specified boundary portion, and the other imposing a unilateral restriction on temperature within a defined domain. ...

Existence and convergence results for a nonlinear thermoelastic contact problem

Journal of Nonlinear and Variational Analysis

... To conclude this section, we investigate a related bilateral obstacle interface problem similar to [36]. First, for the strong formulation, we modify the nonlinear partial differential equation Eq. ...

Optimal Control and Approximation for Elliptic Bilateral Obstacle Problems
  • Citing Article
  • June 2021

Communications in Nonlinear Science and Numerical Simulation

... for all i, sufficient conditions were provided for efficient solutions of MOP (1), respectively. As it was shown in [27], these constraints are inconsistent. Theorem 3.1 in [27] investigated the relation between efficient solutions of MOP (1) and optimal solutions of the scalarized problem (19). ...

A modified direction approach for proper efficiency of multiobjective optimization
  • Citing Article
  • February 2021

Optimization Methods and Software

... A SVBO problem contains a single-objective function F : X × Y → R, but the LL is a MOP f : X × Y → R m as in Definition 1 with M = 1 and m ≥ 2 (see [31,32]). Note that, other works define MOBO as in Definition 1 but considering M ≥ 2 or m ≥ 2 [29], implying that SVBO is a special case of MOBO in other definitions [29]. ...

Stability for semivectorial bilevel programs
  • Citing Article
  • January 2017

Journal of Industrial and Management Optimization

... To circumvent these difficulties, we first introduce the concept of regret measure, which is closely related to risk measure, in Sect. 2. Building upon the dual relationship between risk and regret measures established in [29], we show that the stochastic optimal control problem (8)-(9) is equivalent to a problem involving nonanticipativity constraints and an expectation measure (the transformed problem for short). In Sect. ...

Risk minimization, regret minimization and progressive hedging algorithms

Mathematical Programming