Publications about 'policy optimization'
Articles in journals or book chapters
This article investigates the robustness of gradient descent algorithms under perturbations. The concept of small-disturbance input-to-state stability (ISS) for discrete-time nonlinear dynamical systems is introduced, along with its Lyapunov characterization. The conventional linear Polyak-Łojasiewicz (PL) condition is then extended to a nonlinear version, and it is shown that the gradient descent algorithm is small-disturbance ISS provided the objective function satisfies the generalized nonlinear PL condition. This small-disturbance ISS property guarantees that the gradient descent algorithm converges to a small neighborhood of the optimum under sufficiently small perturbations. As a direct application of the developed framework, we demonstrate that the LQR cost satisfies the generalized nonlinear PL condition, thereby establishing that the policy gradient algorithm for LQR is small-disturbance ISS. Additionally, other popular policy gradient algorithms, including the natural policy gradient and Gauss-Newton methods, are also proven to be small-disturbance ISS.
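For orientation (schematic only, not the paper's exact definitions): the classical PL condition bounds the squared gradient norm linearly in the optimality gap, the nonlinear generalization alluded to above can be thought of as replacing that linear gain by a comparison function $\alpha$, and the small-disturbance ISS property then takes the usual ISS-estimate shape for the perturbed iteration $x_{k+1}=x_k-\eta\,(\nabla f(x_k)+d_k)$:
\[
\|\nabla f(x)\|^2 \ \ge\ 2\mu\,\bigl(f(x)-f^*\bigr)
\qquad\longrightarrow\qquad
\|\nabla f(x)\| \ \ge\ \alpha\bigl(f(x)-f^*\bigr),
\]
\[
f(x_k)-f^* \ \le\ \beta\bigl(f(x_0)-f^*,\,k\bigr)\ +\ \gamma\Bigl(\sup_{j<k}\|d_j\|\Bigr)
\quad\text{whenever } \sup_{j}\|d_j\| \text{ is sufficiently small,}
\]
with $\beta\in\mathcal{KL}$ and $\gamma\in\mathcal{K}$; here $f^*$ is the optimal value, and $\alpha$, $\beta$, $\gamma$, $\eta$ are placeholders rather than the paper's notation.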
Solutions of optimization problems, including policy optimization in reinforcement learning, typically rely upon some variant of gradient descent. There has been much recent work in the machine learning, control, and optimization communities applying the Polyak-Łojasiewicz Inequality (PLI) to such problems in order to establish an exponential rate of convergence (a.k.a. "linear convergence" in the local-iteration language of numerical analysis) of loss functions to their minima under the gradient flow. Often, as is the case for policy iteration in the continuous-time LQR problem, this rate vanishes for large initial conditions, resulting in a mixed globally linear / locally exponential behavior. This is in sharp contrast with the discrete-time LQR problem, where there is global exponential convergence. That gap between CT and DT behaviors motivates the search for various generalized PLI-like conditions, and this paper addresses that topic. Moreover, these generalizations are key to understanding the transient and asymptotic effects of errors in the estimation of the gradient, errors that might arise from adversarial attacks, wrong evaluation by an oracle, early stopping of a simulation, inaccurate and very approximate digital twins, stochastic computations (algorithm "reproducibility"), or learning by sampling from limited data. We describe an "input-to-state stability" (ISS) analysis of this issue. We also discuss convergence and PLI-like properties of "linear feedforward neural networks" in feedback control. Much of the work described here was done in collaboration with Arthur Castello B. de Oliveira, Leilei Cui, Zhong-Ping Jiang, and Milad Siami. This is a short paper summarizing the slides presented at my keynote at the 2025 L4DC (Learning for Dynamics & Control Conference) in Ann Arbor, Michigan, 05 June 2025. A partial bibliography has been added.
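For readers unfamiliar with the PLI argument, the convergence claim rests on a one-line Lyapunov computation along the gradient flow $\dot x=-\nabla f(x)$: if $\|\nabla f(x)\|^2\ge 2\mu\,\bigl(f(x)-f^*\bigr)$ globally, then
\[
\frac{d}{dt}\bigl(f(x(t))-f^*\bigr) \;=\; -\|\nabla f(x(t))\|^2 \;\le\; -2\mu\,\bigl(f(x(t))-f^*\bigr)
\quad\Longrightarrow\quad
f(x(t))-f^* \;\le\; e^{-2\mu t}\,\bigl(f(x(0))-f^*\bigr).
\]
When the constant $\mu$ is valid only on bounded sublevel sets and degrades as the initial condition grows, as happens for CT-LQR policy iteration, the bound is exponential only locally, which is the mixed behavior referred to above.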
This paper studies the effect of perturbations on the gradient flow of a general constrained nonlinear programming problem, where the perturbation may arise from inaccurate gradient estimation in the setting of data-driven optimization. Under suitable conditions on the objective function, the perturbed gradient flow is shown to be small-disturbance input-to-state stable (ISS), which implies that, in the presence of a small-enough perturbation, the trajectory of the perturbed gradient flow must eventually enter a small neighborhood of the optimum. This work was motivated by the question of robustness of direct methods for the linear quadratic regulator problem, and specifically the analysis of the effect of perturbations caused by gradient estimation or round-off errors in policy optimization. Interestingly, we show small-disturbance ISS for three of the most common optimization algorithms: standard gradient flow, natural gradient flow, and Newton gradient flow.
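As a purely illustrative sketch (a two-dimensional quadratic standing in for the LQR cost, with a hypothetical helper `perturbed_gradient_flow`; nothing here is taken from the paper), one can watch the trajectory of a perturbed gradient flow settle into a neighborhood of the optimum whose size scales with the disturbance bound, which is the qualitative content of the small-disturbance ISS property:

```python
import numpy as np

def perturbed_gradient_flow(grad, x0, disturbance, dt=1e-3, T=20.0):
    """Forward-Euler simulation of dx/dt = -grad(x) + d(t) (illustration only)."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for k in range(int(T / dt)):
        x = x + dt * (-grad(x) + disturbance(k * dt))
        traj.append(x.copy())
    return np.array(traj)

# Toy strongly convex objective f(x) = 0.5 * x^T A x, with minimizer x* = 0.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
grad = lambda x: A @ x

for dbar in [0.0, 0.1, 0.5]:  # sup-norm bound on the disturbance
    d = lambda t, dbar=dbar: dbar * np.array([np.sin(5 * t), np.cos(7 * t)])
    traj = perturbed_gradient_flow(grad, x0=[4.0, -3.0], disturbance=d)
    # After the transient, the distance to the optimum is roughly proportional
    # to dbar -- the qualitative content of the small-disturbance ISS estimate.
    print(f"dbar = {dbar:.1f}   final |x - x*| = {np.linalg.norm(traj[-1]):.4f}")
```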
Conference articles
This paper studies gradient dynamics subject to additive stochastic noise, which may arise from sources such as stochastic gradient estimation, measurement noise, or stochastic sampling errors. To analyze the robustness of such stochastic gradient systems, the concept of small-covariance noise-to-state stability (NSS) is introduced, along with a Lyapunov-based characterization. Furthermore, the classical Polyak-Łojasiewicz (PL) condition on the objective function is generalized to the $\mathcal{K}$-PL condition via comparison functions, thereby extending its applicability to a broader class of optimization problems. It is shown that the stochastic gradient dynamics exhibit small-covariance NSS if the objective function satisfies the $\mathcal{K}$-PL condition and possesses a globally Lipschitz continuous gradient. This result implies that the trajectories of stochastic gradient dynamics converge to a neighborhood of the optimum with high probability, with the size of the neighborhood determined by the noise covariance. Moreover, if the $\mathcal{K}$-PL condition is strengthened to a $\mathcal{K}_\infty$-PL condition, the dynamics are NSS; whereas if it is weakened to a general positive-definite-PL condition, the dynamics exhibit integral NSS. The results extend to objectives without globally Lipschitz gradients through appropriate step-size tuning. The proposed framework is further applied to the robustness analysis of policy optimization for the linear quadratic regulator (LQR) and logistic regression.
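A rough numerical caricature of the noise-to-state picture (a toy quadratic satisfying the classical PL condition, with an arbitrarily chosen step size and Gaussian noise model; not the paper's LQR or logistic-regression experiments): the suboptimality plateau grows with the noise level, as the NSS-type estimate suggests.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_gradient_descent(grad, x0, sigma, eta=0.01, iters=20000):
    """x_{k+1} = x_k - eta * (grad(x_k) + w_k), with w_k ~ N(0, sigma^2 I)."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        w = sigma * rng.standard_normal(x.shape)
        x = x - eta * (grad(x) + w)
    return x

# Toy quadratic f(x) = 0.5 * ||x||^2, which satisfies the classical PL condition.
grad = lambda x: x

for sigma in [0.1, 1.0, 3.0]:
    finals = [np.linalg.norm(noisy_gradient_descent(grad, [5.0, 5.0], sigma))
              for _ in range(10)]
    # The residual distance to the optimum -- the neighborhood reached "with high
    # probability" -- grows with the noise level sigma.
    print(f"sigma = {sigma:.1f}   mean final |x - x*| = {np.mean(finals):.3f}")
```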
In this work we study the convergence of gradient methods for nonconvex optimization problems -- specifically the effect of the problem formulation on the convergence behavior of the solution of a gradient flow. We show through a simple example that, surprisingly, the gradient flow solution can be exponentially or asymptotically convergent, depending on how the problem is formulated. We then deepen the analysis and show that a policy optimization strategy for the continuous-time linear quadratic regulator (LQR) (which is known to exhibit only asymptotic convergence globally) presents almost global exponential convergence if the problem is overparameterized through a linear feed-forward neural network (LFFNN). We prove that this qualitative improvement always occurs for a simplified version of the LQR problem and derive explicit convergence rates for the gradient flow. Finally, we show that both the qualitative improvement and the quantitative rate gains persist in the general LQR problem through numerical simulations.
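To make "overparameterized through an LFFNN" concrete in the simplest two-factor case (and only as an assumed schematic of that construction, not the paper's exact parametrization): writing the gain as a product $K=K_2K_1$ and running gradient flow on the factors,
\[
\dot K_1 \;=\; -\,K_2^\top\,\nabla C(K_2K_1),
\qquad
\dot K_2 \;=\; -\,\nabla C(K_2K_1)\,K_1^\top,
\]
induces on the end-to-end gain the dynamics
\[
\dot K \;=\; \dot K_2 K_1 + K_2\dot K_1 \;=\; -\Bigl(\nabla C(K)\,K_1^\top K_1 \;+\; K_2K_2^\top\,\nabla C(K)\Bigr),
\]
i.e., the original gradient pre- and post-multiplied by positive semidefinite, trajectory-dependent factors; it is this reshaping of the descent direction that can account for an improved convergence profile.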
This work explores generalizations of the Polyak-Łojasiewicz inequality (PLI) and their implications for the convergence behavior of gradient flows in optimization problems. Motivated by the continuous-time linear quadratic regulator (CT-LQR) policy optimization problem -- where only a weaker version of the PLI is characterized in the literature -- this work shows that while weaker conditions are sufficient for global convergence to, and optimality of, the set of critical points of the cost function, the "profile" of the gradient flow solution can change significantly depending on which "flavor" of inequality the cost satisfies. After a general theoretical analysis, we focus on fitting the CT-LQR policy optimization problem to the proposed framework, showing that, in fact, it can never satisfy a PLI in its strongest form. We follow up our analysis with a brief discussion on the difference between continuous- and discrete-time LQR policy optimization, and end the paper with some intuition on the extension of this framework to optimization problems with L1 regularization solved through proximal gradient flows.
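One way to organize the "flavors" (schematic, using the comparison-function notation of the entries above rather than the paper's exact statements, and echoing the stochastic NSS entry for the robustness column): each condition takes the form
\[
\|\nabla f(x)\|^2 \ \ge\ \alpha\bigl(f(x)-f^*\bigr),
\qquad
\begin{cases}
\alpha(s)=2\mu s: & \text{classical PLI, exponential decay of the loss},\\
\alpha\in\mathcal{K}_\infty: & \text{global convergence, ISS/NSS-type robustness},\\
\alpha\ \text{positive definite}: & \text{convergence, but only integral-type robustness in general},
\end{cases}
\]
and which class the cost actually falls into is what shapes the convergence "profile" discussed above.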
This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders.
This document was translated from BibTeX by bibtex2html.