Introductory Lectures on Convex Optimization: A Basic Course
Springer Science & Business Media, 1 Dec 2013 - 236 pages
It was in the middle of the 1980s, when the seminal paper by Karmarkar opened a new epoch in nonlinear optimization. The importance of this paper, containing a new polynomial-time algorithm for linear optimization problems, was not only in its complexity bound. At that time, the most surprising feature of this algorithm was that the theoretical prediction of its high efficiency was supported by excellent computational results. This unusual fact dramatically changed the style and directions of the research in nonlinear optimization. Thereafter it became more and more common that the new methods were provided with a complexity analysis, which was considered a better justification of their efficiency than computational experiments. In a new rapidly developing field, which got the name "polynomial-time interior-point methods", such a justification was obligatory. After almost fifteen years of intensive research, the main results of this development started to appear in monographs [12, 14, 16, 17, 18, 19]. Approximately at that time the author was asked to prepare a new course on nonlinear optimization for graduate students. The idea was to create a course which would reflect the new developments in the field. Actually, this was a major challenge. At the time only the theory of interior-point methods for linear optimization was polished enough to be explained to students. The general theory of self-concordant functions had appeared in print only once in the form of a research monograph.
SMOOTH CONVEX OPTIMIZATION
NONSMOOTH CONVEX OPTIMIZATION