Publications of Eduardo D. Sontag jointly with M. Sznaier

Conference articles
The Łojasiewicz inequality characterizes objective-value convergence along gradient flows and, in special cases, yields exponential decay of the cost. However, such results do not directly imply convergence of the state. In this paper, we use contraction theory to derive state-space guarantees for gradient systems satisfying generalized Łojasiewicz inequalities. We first show that, when the objective has a unique strongly convex minimizer, the generalized Łojasiewicz inequality implies semi-global exponential stability, that is, exponential stability on arbitrary compact subsets. We then give two curvature-based sufficient conditions, together with constraints on the Łojasiewicz rate, under which the nonconvex gradient flow is globally incrementally exponentially stable, a property strictly stronger than global exponential stability. A few examples are presented at the end of the paper to validate the proposed theory.
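For orientation, the cost-decay statement can be made concrete in the standard Polyak-Łojasiewicz (PL) special case; the notation below (f, x, mu, f*) is illustrative and not taken from the paper:

```latex
% Gradient flow:           \dot{x} = -\nabla f(x)
% PL inequality (mu > 0):  \tfrac{1}{2}\|\nabla f(x)\|^{2} \ge \mu\,(f(x) - f^{*})
\begin{align*}
\frac{d}{dt}\bigl(f(x(t)) - f^{*}\bigr)
  &= \nabla f(x(t))^{\top}\dot{x}(t)
   = -\|\nabla f(x(t))\|^{2}
  \le -2\mu\,\bigl(f(x(t)) - f^{*}\bigr),\\
\text{so by Gronwall's lemma}\quad
f(x(t)) - f^{*} &\le e^{-2\mu t}\,\bigl(f(x(0)) - f^{*}\bigr).
\end{align*}
```

As the abstract emphasizes, this controls only the objective value; it says nothing by itself about convergence or stability of the state x(t), which is the gap the paper's contraction-based analysis addresses.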
Data-driven control (DDC), that is, the design of controllers directly from observed data, has attracted substantial attention in recent years due to its advantages over model-based control. DDC avoids a computationally expensive, potentially conservative model-identification step and bypasses practically difficult questions such as model order and class selection. This tutorial paper seeks to offer a sampling of the different approaches that have recently been used to synthesize data-driven controllers and filters, covering both analytic and learning-enabled approaches and indicating the relative strengths of each. A second objective is to provide a key to the rapidly expanding literature on the subject, to help researchers newly interested in this field come up to speed quickly.
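To give a flavor of the analytic, model-free style found in the DDC literature, the sketch below builds a block-Hankel matrix from a single trajectory; under persistency of excitation, its columns parametrize all trajectories of the underlying LTI system (Willems et al.'s fundamental lemma). This is one classic building block of the field, not necessarily one of the specific methods surveyed in this tutorial, and all names and dimensions are illustrative:

```python
import numpy as np

def hankel_blocks(w: np.ndarray, L: int) -> np.ndarray:
    """Block-Hankel matrix of depth L from a signal w of shape (T, q).

    Column j stacks the length-L window w[j], ..., w[j+L-1]; under
    persistency of excitation these columns span every length-L
    trajectory of the data-generating LTI system (fundamental lemma).
    """
    T, q = w.shape
    cols = T - L + 1
    H = np.empty((L * q, cols))
    for j in range(cols):
        H[:, j] = w[j:j + L].reshape(-1)
    return H

# Illustrative use: collect I/O data from a system that is unknown to
# the designer, then check persistency of excitation of the input.
rng = np.random.default_rng(0)
T, L = 200, 10
u = rng.standard_normal((T, 1))          # exciting input
y = np.zeros((T, 1))
for t in range(2, T):                    # toy stable 2nd-order plant
    y[t] = 1.5 * y[t-1] - 0.7 * y[t-2] + u[t-1]
H_u = hankel_blocks(u, L)
print("input Hankel rank:", np.linalg.matrix_rank(H_u), "(full =", L, ")")
```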
Systems theory can play an important role in unveiling fundamental limitations of learning algorithms and architectures when they are used to control a dynamical system, and in suggesting strategies for overcoming these limitations. As an example, a feedforward neural network cannot stabilize a double integrator using output feedback. Similarly, a recurrent NN with differentiable activation functions that stabilizes a non-strongly-stabilizable system must itself be open-loop unstable, a fact that has profound implications for training with noisy, finite data. A potential solution to this problem, motivated by results on stabilization with periodic control, is the use of neural nets with periodic resets, showing that systems-theoretic analysis is indeed instrumental in developing architectures capable of controlling certain classes of unstable systems. This short conference paper also argues that when the goal is to learn control-oriented models, the loss function should reflect closed-loop rather than open-loop model performance, something that can be accomplished by using gap-metric-motivated loss functions.
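A minimal numerical illustration of the classical fact behind the double-integrator claim, in the linear static-gain case (the matrices and gains below are illustrative assumptions, not code from the paper):

```python
import numpy as np

# Double integrator: x1 = position (the measured output), x2 = velocity.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])   # output feedback sees position only

# Static output feedback u = -k*y closes the loop as A - k*B@C.
for k in [0.5, 1.0, 4.0, -1.0]:
    eig = np.linalg.eigvals(A - k * (B @ C))
    print(f"k = {k:5.1f}: eigenvalues {eig}, max Re = {eig.real.max():+.3f}")
# For every k > 0 the eigenvalues are +/- i*sqrt(k): purely imaginary,
# so the loop is only marginally stable; for k <= 0 an eigenvalue lies
# in the right half plane. No static gain achieves asymptotic stability.
```

The same obstruction holds for any static nonlinear output map u = -g(y), since the closed loop \ddot{y} = -g(y) conserves the energy (1/2)\dot{y}^2 + \int_0^y g(s) ds, so trajectories cannot decay to the origin; stabilization requires dynamics (e.g., velocity estimation) in the controller.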
This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders.
This document was translated from BibTeX by bibtex2html