Li Jiang’s research while affiliated with Xi’an University of Posts and Telecommunications and other places


Publications (7)


Figures: Location of simulation environment generation · Data sequence state-space · Potential-based reward: (a) obstacle-based potential plane, (b) arrival point-based potential plane, (c) boundary control-based potential plane · Superimposed potential plane based on obstacle, arrival point, and boundary control · Actor network structure · +10 more

An Improved Distributed Sampling PPO Algorithm Based on Beta Policy for Continuous Global Path Planning Scheme
  • Article
  • Full-text available

July 2023 · 45 Reads · 8 Citations

Qianhao Xiao · Li Jiang · Manman Wang · Xin Zhang

Traditional path planning is mainly applied in discrete action spaces, which results in incomplete ship navigation power propulsion strategies during the path search process. Moreover, reinforcement learning suffers from low success rates due to unbalanced sample collection and poorly designed reward functions. In this paper, an environment framework constructed with the Box2D physics engine is designed; it employs a reward function whose main component is the distance between the agent and the arrival point, supplemented by a potential field superimposed from boundary control, obstacles, and the arrival point. We also employ the state-of-the-art PPO (Proximal Policy Optimization) algorithm as a baseline for global path planning to address the issue of incomplete ship navigation power propulsion strategies. Additionally, a Beta policy-based distributed sample collection PPO algorithm is proposed to overcome unbalanced sample collection in path planning by dividing the environment into sub-regions for distributed sample collection. The experimental results show the following: (1) the distributed sample collection training policy exhibits stronger robustness in the PPO algorithm; (2) the introduced Beta policy for action sampling achieves a higher path planning success rate and reward accumulation than the Gaussian policy for the same training time; (3) when planning a path of the same length, the proposed Beta policy-based distributed sample collection PPO algorithm generates a smoother path than traditional path planning algorithms such as A*, IDA*, and Dijkstra.
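The reward design described in the abstract combines a main distance-to-goal term with a supplementary potential field superimposed from obstacle, arrival point, and boundary control components. Below is a minimal sketch of how such a superimposed potential-based reward might be composed; the weights, potential shapes, and function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def potential_reward(pos, goal, obstacles, bounds, w_dist=1.0, w_pot=0.1):
    """Illustrative reward: distance-to-goal term (main) plus a
    superimposed potential field (supplement). Shapes and weights
    are assumptions, not the paper's implementation."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)

    # Main term: negative Euclidean distance to the arrival point.
    dist_term = -np.linalg.norm(goal - pos)

    # Attractive potential of the arrival point (less negative when closer).
    phi_goal = -np.linalg.norm(goal - pos)

    # Repulsive potential of obstacles (penalize proximity).
    phi_obs = 0.0
    for obs in obstacles:
        d = np.linalg.norm(np.asarray(obs, float) - pos)
        phi_obs -= 1.0 / (d + 1e-3)

    # Boundary-control potential: penalize closeness to the map edges.
    (xmin, ymin), (xmax, ymax) = bounds
    d_edge = min(pos[0] - xmin, xmax - pos[0], pos[1] - ymin, ymax - pos[1])
    phi_bound = -1.0 / (max(d_edge, 0.0) + 1e-3)

    # Superimposed potential plane = goal + obstacle + boundary terms.
    return w_dist * dist_term + w_pot * (phi_goal + phi_obs + phi_bound)
```

For example, `potential_reward((2.0, 3.0), (10.0, 10.0), [(5.0, 5.0)], ((0.0, 0.0), (20.0, 20.0)))` returns a scalar that grows as the agent nears the arrival point and shrinks near obstacles or map edges.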







Citations (4)


... In this paper, we revisit this bias as "boundary action bias" and analyze its nature in PPO with illustrative explanations and formalization. A fundamental solution to mitigate the bias is using bounded distributions to avoid out-of-bound samples, such as Beta (Chou, Maturana, and Scherer 2017; Petrazzini and Antonelo 2021; Xiao et al. 2023) or logit-normal distribution (squashed Gaussian) (Haarnoja et al. 2018b; Ciosek and Whiteson 2020; Jang 2021) employed in prior studies. However, the basic shapes and properties of such distributions differ from Gaussian to some extent. ...

Reference:

Truncated Gaussian Policy for Debiased Continuous Control
An Improved Distributed Sampling PPO Algorithm Based on Beta Policy for Continuous Global Path Planning Scheme
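The excerpt above contrasts unbounded Gaussian policies, whose out-of-bound samples must be clipped, with naturally bounded distributions such as the Beta distribution used in the cited PPO scheme. The sketch below shows one common way to parameterize a Beta action head in PyTorch and rescale its (0, 1) samples to the action bounds; the layer sizes and class interface are assumptions, not the papers' implementations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Beta

class BetaPolicyHead(nn.Module):
    """Sketch of a bounded Beta action head for PPO (hypothetical
    layer sizes, not the paper's exact network). Samples lie in
    (0, 1) by construction, so no out-of-bound clipping is needed."""

    def __init__(self, hidden_dim, action_dim, low, high):
        super().__init__()
        self.alpha_layer = nn.Linear(hidden_dim, action_dim)
        self.beta_layer = nn.Linear(hidden_dim, action_dim)
        self.register_buffer("low", torch.as_tensor(low, dtype=torch.float32))
        self.register_buffer("high", torch.as_tensor(high, dtype=torch.float32))

    def forward(self, h):
        # softplus(.) + 1 keeps both concentrations > 1 (unimodal Beta).
        alpha = F.softplus(self.alpha_layer(h)) + 1.0
        beta = F.softplus(self.beta_layer(h)) + 1.0
        return Beta(alpha, beta)

    def act(self, h):
        dist = self.forward(h)
        u = dist.rsample()                       # u in (0, 1)
        log_prob = dist.log_prob(u).sum(-1)      # used in the PPO ratio
        action = self.low + (self.high - self.low) * u  # rescale to bounds
        return action, log_prob
```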

... By linking the "flow out" of the primary separator to the "flow in" of the secondary separator, a coupled model of the liquid level height for the two-stage series separators is developed using the previously established mathematical model of a single separator. The coupled model of the liquid level is illustrated in Figure 4. To improve the algorithm's convergence speed, it incorporates the experience replay mechanism from DQN and employs strategic sampling to minimize data point correlations [27][28][29]. ...

Vehicle Driving Longitudinal Control Based on Double Deep Q Network
  • Citing Conference Paper
  • January 2022
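The excerpt above credits the DQN-style experience replay mechanism with reducing correlations between data points. A minimal replay buffer illustrating that idea (uniform random sampling from a fixed-capacity store) is sketched below; the capacity and field names are assumptions.

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal sketch of DQN-style experience replay: store transitions
    and sample them uniformly at random to break temporal correlations."""

    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random sampling decorrelates consecutive transitions.
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)
```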

... DDQN enhances RL but faces challenges with training stability and slow convergence, particularly in complex environments. Techniques such as prioritized experience replay (Song et al., 2021), reward shaping (Xiao et al., 2022), dynamic ε-greedy strategies (Ding et al., 2023), dueling architectures (He et al., 2023), and proximal policy optimization (PPO) (Schulman et al., 2017) have proven effective in improving stability and convergence. These methods ensure the efficient exploration of all feasible states but involve trade-offs, including increased computational complexity and the need for careful hyperparameter optimization. ...

Design of Reward Functions Based on The DDQN Algorithm
  • Citing Conference Paper
  • January 2022
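The excerpt above lists stabilization techniques built on top of Double DQN (DDQN). The sketch below shows the core DDQN idea it presupposes, in which the online network selects the next action while the target network evaluates it, together with an illustrative decaying ε-greedy schedule; the network interfaces and schedule constants are assumptions.

```python
import torch

def ddqn_target(q_online, q_target, rewards, next_states, dones, gamma=0.99):
    """Sketch of the Double DQN target: decoupling action selection
    (online network) from evaluation (target network) reduces the
    overestimation of vanilla DQN. Network interfaces are assumed."""
    with torch.no_grad():
        next_actions = q_online(next_states).argmax(dim=1, keepdim=True)   # select
        next_q = q_target(next_states).gather(1, next_actions).squeeze(1)  # evaluate
        return rewards + gamma * (1.0 - dones) * next_q

def epsilon(step, eps_start=1.0, eps_end=0.05, decay_steps=50_000):
    """Illustrative linearly decaying epsilon-greedy schedule."""
    frac = min(step / decay_steps, 1.0)
    return eps_start + frac * (eps_end - eps_start)
```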