A Fast Reduced-Space Algorithmic Framework for Sparse Optimization

Date
2018-08-20
Publisher
Johns Hopkins University
Abstract
Optimization is a crucial scientific tool used throughout applied mathematics. In optimization, one typically seeks the lowest value of a chosen objective function over a set of allowable inputs, i.e., one computes a minimizer. Once an optimization problem has been formulated for a particular task of interest, an algorithm for locating such a minimizer is employed. For many applications, the optimization problem may possess multiple solutions, some of which may be undesirable from the perspective of the application. In such settings, a popular approach is to augment the objective function through regularization, which should be chosen carefully to ensure that solutions of the regularized problem are useful to the application of interest. Perhaps the most popular type of regularization is l1-regularization, which has received special attention over the last two decades. The motivation for incorporating l1-regularization is its sparsity-inducing and shrinkage properties, which have proved useful, both in theory and in practice, for improving the interpretability and accuracy of model estimation.

Many methods have been proposed for solving l1-regularized problems. Roughly, they divide into first-order methods (those that use only first derivatives), which have low per-iteration computational cost but are often inefficient on realistic applications, and second-order methods (those that use first and second derivatives), which have high per-iteration computational cost but are robust and typically require few iterations.

In this thesis we present a new second-order framework that aims to combine the strengths of first- and second-order methods. Specifically, our framework uses a limited amount of second-derivative information by employing a mechanism that predicts which variables will be zero at a solution; in this manner, the per-iteration computational cost can be controlled. Moreover, by using second-derivative information within certain computed subspaces, our framework is highly efficient and robust in terms of the overall number of iterations required. We present numerical comparisons with other state-of-the-art first- and second-order methods that validate our approach, and we further investigate an implementation of our approach that exploits parallel computation.
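The abstract does not spell out the algorithm, but the mechanism it describes — predicting which variables will be zero at a solution and applying second-derivative information only on the remaining subspace — can be illustrated on the l1-regularized least-squares problem min_x 0.5*||Ax - b||^2 + lam*||x||_1. The sketch below is our own illustration, not the method developed in the thesis: the function names (reduced_space_newton, soft_threshold) and the specific prediction rule (a single proximal-gradient step) are assumptions chosen for clarity.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1 (elementwise shrinkage)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def reduced_space_newton(A, b, lam, max_iter=100, tol=1e-8):
    """Illustrative reduced-space second-order sketch for
    min_x 0.5*||A x - b||^2 + lam*||x||_1 (not the thesis algorithm)."""
    m, n = A.shape
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth gradient
    obj = lambda v: 0.5 * np.sum((A @ v - b) ** 2) + lam * np.abs(v).sum()
    for _ in range(max_iter):
        g = A.T @ (A @ x - b)  # gradient of the smooth part
        # First-order prediction: one proximal-gradient step marks the
        # variables expected to be zero at the solution.
        x_pred = soft_threshold(x - g / L, lam / L)
        free = np.flatnonzero(x_pred)  # predicted nonzero ("free") variables
        if free.size == 0:
            return x_pred
        # Second-order step restricted to the free subspace: with the signs
        # fixed, the subproblem is smooth and its Newton system is
        #   (A_f^T A_f) x_f = A_f^T b - lam * sign(x_f).
        Af = A[:, free]
        H = Af.T @ Af + 1e-10 * np.eye(free.size)  # reduced Hessian; the tiny
        # ridge guards against ill-conditioning (we assume it is essentially
        # nonsingular on the predicted subspace).
        x_new = np.zeros(n)
        x_new[free] = np.linalg.solve(H, Af.T @ b - lam * np.sign(x_pred[free]))
        # Simple safeguard: fall back to the prediction step if a sign flips
        # or the Newton step fails to decrease the objective.
        if (np.any(np.sign(x_new[free]) != np.sign(x_pred[free]))
                or obj(x_new) > obj(x_pred)):
            x_new = x_pred
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new
        x = x_new
    return x

if __name__ == "__main__":
    # Small synthetic demo: recover a 5-sparse signal from noisy measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 30))
    x_true = np.zeros(30)
    x_true[:5] = rng.standard_normal(5)
    b = A @ x_true + 0.01 * rng.standard_normal(100)
    x_hat = reduced_space_newton(A, b, lam=1.0)
    print("recovered support:", np.flatnonzero(x_hat))
```

A practical implementation would replace the direct Newton solve with an inexact (e.g., conjugate-gradient) solve and a more sophisticated globalization strategy; the point of the sketch is only the split between a cheap first-order prediction of the zero set and a second-order step confined to the predicted free subspace, which is what keeps the per-iteration cost controlled.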
Keywords
optimization, sparsity-inducing regularization