
Publications about 'overparameterization'
Articles in journal or book chapters
  1. A.C.B. de Oliveira, D.D. Jatkar, and E.D. Sontag. On the convergence of overparameterized problems: Inherent properties of the compositional structure of neural networks. 2025. Note: Submitted. Also arXiv:2511.09810 [cs.LG]. [WWW] Keyword(s): neural networks, optimization, gradient methods, overparameterization.
    Abstract:
    This paper investigates how the compositional structure of neural networks shapes their optimization landscape and training dynamics. We analyze the gradient flow associated with overparameterized optimization problems, which can be interpreted as training a neural network with linear activations. Remarkably, we show that the global convergence properties can be derived for any cost function that is proper and real analytic. We then specialize the analysis to scalar-valued cost functions, where the geometry of the landscape can be fully characterized. In this setting, we demonstrate that key structural features -- such as the location and stability of saddle points -- are universal across all admissible costs, depending solely on the overparameterized representation rather than on problem-specific details. Moreover, we show that convergence can be arbitrarily accelerated depending on the initialization, as measured by an imbalance metric introduced in this work. Finally, we discuss how these insights may generalize to neural networks with sigmoidal activations, showing through a simple example which geometric and dynamical properties persist beyond the linear case.
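
    Illustrative sketch (not from the paper): the scalar overparameterized setting described above can be simulated directly. The Python snippet below assumes the simplest factorization w = u*v with the quadratic cost f(w) = (w - 1)^2/2, runs explicit Euler steps on the associated gradient flow, and uses u^2 - v^2 as the imbalance quantity; the cost, target, and step size are choices made here for illustration only. The continuous-time flow conserves u^2 - v^2, and a larger initial imbalance gives faster convergence, which is the kind of acceleration-by-initialization effect the abstract refers to.

    # Gradient flow for: minimize f(u*v), f(w) = (w - 1)^2 / 2, discretized by
    # explicit Euler. The imbalance u^2 - v^2 is conserved by the continuous
    # flow (and approximately preserved by the Euler steps used here).
    def flow(u, v, lr=1e-3, steps=20000, target=1.0):
        for k in range(steps):
            g = u * v - target                     # f'(u*v)
            u, v = u - lr * v * g, v - lr * u * g  # one Euler step on the flow
            if abs(u * v - target) < 1e-6:         # stop once essentially converged
                break
        return u, v, k

    for u0, v0 in [(0.5, 0.5), (2.0, 0.1)]:        # balanced vs. imbalanced start
        u, v, k = flow(u0, v0)
        print(f"initial imbalance {u0**2 - v0**2:+.2f}: {k} steps, "
              f"final imbalance {u**2 - v**2:+.2f}")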


Conference articles
  1. A.C.B. de Oliveira, M. Siami, and E.D. Sontag. Dynamics and perturbations of overparameterized linear neural networks. In Proc. 2023 62nd IEEE Conference on Decision and Control (CDC), pages 7356-7361, 2023. Note: Extended version is On the ISS property of the gradient flow for single hidden-layer neural networks with linear activations, arXiv https://arxiv.org/abs/2305.09904. [PDF] [doi:10.1109/CDC49753.2023.10383478] Keyword(s): neural networks, overparametrization, gradient descent, input to state stability, gradient systems.
    Abstract:
    Recent research in neural networks and machine learning suggests that using many more parameters than strictly required by the initial complexity of a regression problem can result in more accurate or faster-converging models -- contrary to classical statistical belief. This phenomenon, sometimes known as ``benign overfitting'', raises the question of how else overparameterization might affect the properties of a learning problem. In this work, we investigate the effects of overfitting on the robustness of gradient-descent training when subject to uncertainty in the gradient estimate. This uncertainty arises naturally if the gradient is estimated from noisy data or measured directly. Our object of study is a linear neural network with a single, arbitrarily wide, hidden layer and an arbitrary number of inputs and outputs. In this paper we solve the problem for the case where the input and output of the neural network are one-dimensional, deriving sufficient conditions for robustness of our system based on necessary and sufficient conditions for convergence in the undisturbed case. We then show that the general overparametrized formulation introduces a set of spurious equilibria which lie outside the set where the loss function is minimized, and discuss directions of future work that might extend our current results to more general formulations.
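
    Illustrative sketch (not from the paper): the setting of the abstract, a single-hidden-layer linear network with scalar input and output, can be simulated with gradient descent on L(a, b) = (b^T a - theta)^2 / 2 when the gradient is corrupted by additive noise standing in for the estimation uncertainty; the hidden width, target, step size, and noise level below are arbitrary choices made for illustration. The residual b^T a - theta settles into a small neighborhood of zero despite the persistent disturbance, the qualitative behavior an ISS property guarantees, and the origin a = b = 0 illustrates a spurious equilibrium: the undisturbed gradient vanishes there even though the loss is not minimized.

    # Gradient descent with a disturbed gradient for a width-n linear network
    # with scalar input/output: yhat = (b^T a) x, loss L(a,b) = (b^T a - theta)^2 / 2.
    # Additive noise on the gradient models the estimation uncertainty; the origin
    # a = b = 0 is an equilibrium of the undisturbed dynamics but not a minimizer.
    import numpy as np

    rng = np.random.default_rng(0)
    n, theta, lr, noise = 8, 1.0, 1e-2, 0.05  # width, target, step size, disturbance level

    a = rng.normal(scale=0.5, size=n)
    b = rng.normal(scale=0.5, size=n)
    for _ in range(5000):
        e = b @ a - theta                     # residual of the end-to-end map
        a, b = (a - lr * (b * e + noise * rng.normal(size=n)),   # disturbed gradient in a
                b - lr * (a * e + noise * rng.normal(size=n)))   # disturbed gradient in b
    print(f"|b^T a - theta| after training: {abs(b @ a - theta):.4f} (bounded despite persistent noise)")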







Disclaimer:

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders.




Last modified: Thu Nov 27 22:38:40 2025
Author: sontag.


This document was translated from BibTeX by bibtex2html