AUGMENTED LAGRANGIAN BASED ALGORITHMS FOR NONCONVEX OPTIMIZATION WITH APPLICATIONS IN SUBSPACE CLUSTERING

Date
2016-06-03
Publisher
Johns Hopkins University
Abstract
This thesis considers the design, analysis, and implementation of algorithms for nonconvex optimization that utilize the augmented Lagrangian function. In the first part of the thesis, we address general nonconvex optimization problems with smooth objective and constraint functions. Motivated by a potential drawback of traditional augmented Lagrangian (AL) methods, we propose adaptive trust-region and linesearch AL algorithms that share the same novel feature: an adaptive update for the penalty parameter. As with a traditional AL algorithm, the adaptive methods are matrix-free (i.e., they do not need to form or factorize problem matrices) and thus represent a viable option for solving large-scale problems. We prove global convergence for our adaptive AL algorithms and illustrate through extensive numerical experiments that they outperform traditional AL methods in terms of efficiency and reliability. In the second part of the thesis, we focus on a structured nonconvex nonsmooth problem arising from a machine learning application called subspace clustering. We show that the alternating direction method of multipliers (ADMM), which may roughly be described as block-wise coordinate minimization of the AL function, is well suited to this task. Moreover, we establish a global convergence result for the algorithm, which was previously unknown. Numerical results show that the chosen optimization formulation, together with ADMM, achieves subspace clustering accuracy on par with other state-of-the-art methods.
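For reference, for an equality-constrained problem min_x f(x) subject to c(x) = 0, the augmented Lagrangian function takes the standard form below, where \lambda is the Lagrange multiplier estimate and \mu > 0 is the penalty parameter that the adaptive algorithms update. Sign conventions vary by author; this is a common textbook form, not necessarily the exact one used in the thesis:

    \mathcal{L}_\mu(x, \lambda) = f(x) - \lambda^{\top} c(x) + \frac{\mu}{2} \| c(x) \|_2^2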
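To make the block-wise interpretation of ADMM concrete, the following is a minimal sketch of generic scaled-form ADMM applied to a simple convex lasso problem, min 0.5*||Ax - b||^2 + gamma*||z||_1 subject to x = z. It illustrates only the mechanics described in the abstract (alternating minimization of the AL function in each block, followed by a multiplier update); the function names and parameters are illustrative, and the thesis's nonconvex subspace clustering formulation differs from this toy example:

    import numpy as np

    def soft_threshold(v, kappa):
        # Elementwise soft-thresholding: the proximal operator of the l1 norm.
        return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

    def admm_lasso(A, b, gamma=0.1, rho=1.0, iters=200):
        # ADMM for min_x 0.5*||Ax - b||^2 + gamma*||z||_1  s.t.  x = z.
        # Each iteration minimizes the augmented Lagrangian block-wise in x
        # and z, then takes a dual ascent step on the scaled multiplier u.
        m, n = A.shape
        x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
        # Factor once: the x-update solves (A^T A + rho I) x = A^T b + rho (z - u).
        L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
        Atb = A.T @ b
        for _ in range(iters):
            y = np.linalg.solve(L, Atb + rho * (z - u))
            x = np.linalg.solve(L.T, y)                  # x-minimization
            z = soft_threshold(x + u, gamma / rho)       # z-minimization
            u = u + x - z                                # multiplier update
        return z

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        A = rng.standard_normal((50, 20))
        x_true = np.zeros(20); x_true[:3] = [1.0, -2.0, 0.5]
        b = A @ x_true + 0.01 * rng.standard_normal(50)
        print(np.round(admm_lasso(A, b), 2))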
Keywords
nonlinear optimization, augmented Lagrangian, ADMM, subspace clustering