Optimization for Machine Learning

Suvrit Sra, Sebastian Nowozin, Stephen J. Wright
MIT Press, 2012 - Computers - 494 pages

An up-to-date account of the interplay between optimization and machine learning, accessible to students and researchers in both communities.

The interplay between optimization and machine learning is one of the most important developments in modern computational science. Optimization formulations and methods are proving to be vital in designing algorithms to extract essential knowledge from huge volumes of data. Machine learning, however, is not simply a consumer of optimization technology but a rapidly evolving field that is itself generating new optimization ideas. This book captures the state of the art of the interaction between optimization and machine learning in a way that is accessible to researchers in both fields.
Optimization approaches have enjoyed prominence in machine learning because of their wide applicability and attractive theoretical properties. The increasing complexity, size, and variety of today's machine learning models call for the reassessment of existing assumptions. This book starts the process of reassessment. It describes the resurgence in novel contexts of established frameworks such as first-order methods, stochastic approximations, convex relaxations, interior-point methods, and proximal methods. It also devotes attention to newer themes such as regularized optimization, robust optimization, gradient and subgradient methods, splitting techniques, and second-order methods. Many of these techniques draw inspiration from other fields, including operations research, theoretical computer science, and subfields of optimization. The book will enrich the ongoing cross-fertilization between the machine learning community and these other fields, and within the broader optimization community.


Contents

Chapter 1: Optimization and Machine Learning (page 1)
Chapter 2: Convex Optimization with Sparsity-Inducing Norms (page 19)
Chapter 3: Interior-Point Methods for Large-Scale Cone Programming (page 55)
Chapter 4: Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Survey (page 85)
Chapter 5: First-Order Methods for Nonsmooth Convex Large-Scale Optimization, I: General Purpose Methods (page 121)
Chapter 6: First-Order Methods for Nonsmooth Convex Large-Scale Optimization, II: Utilizing Problem's Structure (page 149)
Chapter 7: Cutting-Plane Methods in Machine Learning (page 185)
Chapter 8: Introduction to Dual Decomposition for Inference (page 219)
Chapter 9: Augmented Lagrangian Methods for Learning, Selecting, and Combining Features (page 255)
Chapter 10: The Convex Optimization Approach to Regret Minimization (page 287)
Chapter 11: Projected Newton-type Methods in Machine Learning (page 305)
Chapter 12: Interior-Point Methods in Machine Learning (page 331)
Chapter 13: The Tradeoffs of Large-Scale Learning (page 351)
Chapter 14: Robust Optimization in Machine Learning (page 369)
Chapter 15: Improving First and Second-Order Methods by Modeling Uncertainty (page 403)
Chapter 16: Bandit View on Noisy Optimization (page 431)
Chapter 17: Optimization Methods for Sparse Inverse Covariance Selection (page 455)
Chapter 18: A Pathwise Algorithm for Covariance Selection (page 479)


About the authors (2012)

Suvrit Sra is a Research Scientist at the Max Planck Institute for Biological Cybernetics, Tübingen, Germany. Sebastian Nowozin is a Researcher in the Machine Learning and Perception group (MLP) at Microsoft Research, Cambridge, England. Stephen J. Wright is Professor in the Computer Sciences Department at the University of Wisconsin, Madison.