Jing Peng’s research while affiliated with The Hong Kong Polytechnic University and other places


Publications (6)


Figure 6: Three types of probability weighting functions
Relative Growth Rate Optimization Under Behavioral Criterion
  • Article
  • Full-text available

October 2023 · 64 Reads · 3 Citations

SIAM Journal on Financial Mathematics

Jing Peng


Relative growth rate optimization under behavioral criterion

November 2022 · 85 Reads

This paper studies a continuous-time optimal portfolio selection problem in a complete market for a behavioral investor whose preference is of the prospect type with probability distortion. The investor is concerned with the terminal relative growth rate (log-return) rather than the absolute capital value. This model can be regarded as an extension of the classical growth-optimal problem to the behavioral framework. It leads to a new type of M-shaped utility maximization problem under nonlinear Choquet expectation. Because of the probability distortion, classical stochastic control methods are not applicable. Using the martingale method together with concavification and quantile optimization techniques, we derive the optimal growth rate in closed form. We find that the benchmark growth rate has a significant impact on investment behavior. Compared to Zhang et al., where the same preference measure is applied to the terminal relative wealth, we find a new phenomenon when the investor's risk tolerance level is high and the market state is bad. In addition, our optimal wealth in every scenario is less sensitive to the pricing kernel and thus more stable than theirs.
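The probability distortion in this line of work is typically modeled by an inverse-S-shaped weighting function such as the Tversky–Kahneman form. The snippet below is an illustrative sketch, not the paper's code; the parameter value γ = 0.65 is a common choice in the literature, not a value taken from the paper. It shows the characteristic effect: small probabilities are overweighted and large probabilities underweighted.

```python
def tk_weight(p: float, gamma: float = 0.65) -> float:
    """Tversky-Kahneman probability weighting function.

    w(p) = p^gamma / (p^gamma + (1 - p)^gamma)^(1/gamma)

    For gamma < 1 the curve is inverse-S-shaped: it lies above the
    diagonal for small p and below it for large p.
    """
    num = p ** gamma
    return num / (num + (1.0 - p) ** gamma) ** (1.0 / gamma)

# Small probabilities are inflated, large ones deflated:
print(tk_weight(0.01))  # greater than 0.01
print(tk_weight(0.99))  # less than 0.99
```

Applying such a weight to the decumulative distribution of the terminal log-return is what makes the expectation a nonlinear Choquet integral, which is why quantile-based techniques replace classical dynamic programming.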


A free boundary problem arising from a multi-state regime-switching stock trading model

November 2022 · 85 Reads · 1 Citation

Journal of Differential Equations

In this paper, we study a free boundary problem arising from the optimal trading of a stock whose price is driven by unobservable market-status and noise processes. The free boundary problem is a variational inequality system of three functions with a degenerate operator. We prove, by an approximation method, that all four switching free boundaries are non-overlapping, monotone, and C∞-smooth. We also completely determine their relative locations and provide the optimal trading strategies for the stock trading problem.
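The structure of a variational inequality and its free boundaries can be seen in the simplest setting, the one-dimensional obstacle problem, which a projected Gauss–Seidel sweep solves directly. The sketch below is a generic toy example (the grid size, obstacle ψ, and iteration count are my own choices, not from the paper): the region where the solution detaches from the obstacle is separated from the contact region by free boundaries, a discrete analogue of the switching boundaries studied above.

```python
# Toy obstacle problem: find u >= psi on [0, 1] with u(0) = u(1) = 0,
# -u'' >= 0, and -u'' = 0 wherever u > psi (complementarity).
# Solved by projected Gauss-Seidel: an averaging sweep followed by
# projection onto the constraint u >= psi.
n = 51
x = [i / (n - 1) for i in range(n)]
psi = [0.5 - 4.0 * (xi - 0.5) ** 2 for xi in x]  # parabolic obstacle
u = [0.0] * n                                     # zero boundary data

for _ in range(5000):
    for i in range(1, n - 1):
        u[i] = max(psi[i], 0.5 * (u[i - 1] + u[i + 1]))

# In the contact region u equals the obstacle; outside it u is
# discretely harmonic (linear), and the two free boundaries are the
# points where the solution switches between the two regimes.
```

In the trading model the operator is degenerate and there are three value functions and four switching boundaries, so the analysis is far more delicate, but the detach/contact dichotomy is the same.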


Stochastic Linear Quadratic Optimal Control Problem: A Reinforcement Learning Method

September 2022 · 134 Reads · 46 Citations

IEEE Transactions on Automatic Control

This article adopts a reinforcement learning (RL) method to solve infinite-horizon continuous-time stochastic linear quadratic problems in which the drift and diffusion terms of the dynamics may depend on both the state and the control. Based on Bellman's dynamic programming principle, we present an online RL algorithm that attains the optimal control with only partial system information. The algorithm computes the optimal control directly, rather than estimating the system coefficients and solving the related Riccati equation. It requires only local trajectory information, which significantly simplifies the calculation process. We illustrate our theoretical findings with two numerical examples.
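For intuition about the equation the paper's algorithm avoids: in the scalar case, classical policy iteration converges to the solution of the stochastic algebraic Riccati equation (SARE). The sketch below is an illustrative model-based scalar example with made-up coefficients — not the paper's model-free algorithm, which needs neither the system coefficients nor the Riccati equation — alternating policy evaluation (a Lyapunov-type equation) and policy improvement.

```python
# Scalar stochastic LQ: dx = (a*x + b*u) dt + (c*x + d*u) dW,
# cost E int (q*x^2 + r*u^2) dt, value V(x) = p*x^2.
# The stationary p solves the stochastic algebraic Riccati equation:
#   2*a*p + c^2*p + q - (b*p + c*d*p)^2 / (r + d^2*p) = 0.
a, b, c, d, q, r = 1.0, 1.0, 0.2, 0.1, 1.0, 1.0

k = 3.0  # initial mean-square stabilizing feedback u = -k*x
for _ in range(50):
    # Policy evaluation: solve the Lyapunov-type equation
    # 2*(a - b*k)*p + (c - d*k)^2*p + q + r*k^2 = 0 for p.
    p = (q + r * k * k) / (-2.0 * (a - b * k) - (c - d * k) ** 2)
    # Policy improvement: minimize the Hamiltonian over u.
    k = (b * p + c * d * p) / (r + d * d * p)

# Residual of the SARE at the computed p (should be ~0 at convergence).
residual = 2 * a * p + c * c * p + q - (b * p + c * d * p) ** 2 / (r + d * d * p)
```

The denominator in the evaluation step is positive exactly when the closed loop is mean-square stable, which is why the initial feedback must be stabilizing.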


Stochastic Linear Quadratic Optimal Control Problem: A Reinforcement Learning Method

September 2021 · 227 Reads

This paper applies a reinforcement learning (RL) method to solve infinite-horizon continuous-time stochastic linear quadratic problems, where the drift and diffusion terms in the dynamics may depend on both the state and the control. Based on Bellman's dynamic programming principle, an online RL algorithm is presented to attain the optimal control with only partial system information. This algorithm directly computes the optimal control rather than estimating the system coefficients and solving the related Riccati equation. It requires only local trajectory information, which greatly simplifies the calculation process. Two numerical examples are carried out to shed light on our theoretical findings. Index Terms: Reinforcement learning, stochastic optimal control, linear quadratic problem.


A free boundary problem arising from a multi-state regime-switching stock trading model

August 2020 · 92 Reads

In this paper, we study a free boundary problem, which arises from an optimal trading problem for a stock driven by an uncertain market status process. The free boundary problem is a variational inequality system of three functions with a degenerate operator. The main contribution of this paper is that we not only prove that all four switching free boundaries are non-overlapping, monotone, and C^∞-smooth, but also completely determine their relative locations and provide the optimal trading strategies for the stock trading problem.

Citations (3)


... Dai et al. (2021) proposes a dynamic portfolio choice model with the mean-variance criterion for logarithmic returns, whose portfolio policies conform with conventional investment wisdom. Peng et al. (2023) considers the cumulative prospect theory and extends the classical growth optimal problem to the behavioral framework. Our research focuses on utilizing logarithmic returns to analyze the competitive dynamics among fund managers. ...

Reference:

N-player and mean field games among fund managers considering excess logarithmic returns
Relative Growth Rate Optimization Under Behavioral Criterion

SIAM Journal on Financial Mathematics

... It is noteworthy that if z ≤ 0, we only need to set z = −e^α. Employing the same Fourier transform and its inverse, it can be verified that the final result remains identical to the case when z > 0. Consequently, (19) holds for any z ∈ R. ...

A free boundary problem arising from a multi-state regime-switching stock trading model

Journal of Differential Equations

... Relevant results may be found in [38], where the stabilizing solution of an indefinite SARE is approximated through a sequence of SAREs with a negative semidefinite quadratic term, referred to as H₂-type SAREs. On the other hand, for solving an H₂-type SARE with a positive state weighting matrix, [39] proposed a PI method and further employed the IRL technique to partially eliminate the requirement for an exact dynamical model. Inspired by this work, [40] extended the approach to a fully model-free version and relaxed the state weighting matrix to be positive semidefinite. ...

Stochastic Linear Quadratic Optimal Control Problem: A Reinforcement Learning Method

IEEE Transactions on Automatic Control