## The Linearization Method for Constrained Optimization

Optimization techniques are applied to many problems in economics, automatic control, and other fields, and a wealth of literature is devoted to the subject. The first computer applications involved linear programming problems of simple structure and comparatively uncomplicated nonlinear problems, which could be solved readily with the computational power of the machines then available. Problems of increasing size and nonlinear complexity made it necessary to develop a completely new arsenal of methods for obtaining numerical results in reasonable time. The linearization method is one of the fruits of this research over the last twenty years. It is closely related to Newton's method for solving systems of nonlinear equations, to penalty function methods (and hence to methods of nondifferentiable optimization), and to variable metric methods. The author of this book is one of the pioneers of the approach, a fact not widely known even among specialists. The book provides, for a wide readership including engineers, economists, and optimization specialists from the graduate-student level on, a brief yet quite complete exposition of one of the most effective methods for solving constrained optimization problems.
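To illustrate the idea behind the method, here is a minimal sketch, not taken from the book: at each iterate the goal function and constraint are linearized, a small auxiliary subproblem with a quadratic regularizing term is solved for a direction, and a step size is chosen by an Armijo-type rule on an exact penalty function. The example problem, the identity "metric" in the subproblem, and the fixed penalty coefficient `N` are all assumptions made for the sake of a short, self-contained demonstration.

```python
import numpy as np

# Toy problem (assumed for illustration):
#   minimize  f(x) = x0^2 + x1^2   subject to   g(x) = 1 - x0 - x1 <= 0
# The exact solution is x* = (0.5, 0.5).

def f(x):
    return x[0]**2 + x[1]**2

def grad_f(x):
    return np.array([2.0 * x[0], 2.0 * x[1]])

def g(x):
    return 1.0 - x[0] - x[1]

def grad_g(x):
    return np.array([-1.0, -1.0])

def subproblem(x):
    """Direction-finding subproblem:
         min_p  grad_f(x)·p + 0.5*||p||^2
         s.t.   g(x) + grad_g(x)·p <= 0
    With a single linear constraint this is just the projection of
    -grad_f(x) onto a halfspace, so it has a closed form."""
    p = -grad_f(x)
    a, b = grad_g(x), -g(x)
    if a @ p > b:                       # linearized constraint is violated
        p = p + ((b - a @ p) / (a @ a)) * a
    return p

def linearization_method(x, N=10.0, tol=1e-8, max_iter=50):
    # Exact penalty function used to measure progress of each step.
    penalty = lambda y: f(y) + N * max(0.0, g(y))
    for _ in range(max_iter):
        p = subproblem(x)
        if p @ p < tol:                 # stationary point of the subproblem
            break
        alpha = 1.0                     # Armijo-type step-size halving
        while penalty(x + alpha * p) > penalty(x) - 0.25 * alpha * (p @ p):
            alpha *= 0.5
        x = x + alpha * p
    return x

x_star = linearization_method(np.array([0.0, 0.0]))
print(x_star)   # approximately [0.5, 0.5]
```

On this example the iteration reaches the constrained minimizer in a couple of steps; in general the subproblem is a quadratic program rather than a single projection.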


### Contents

| Section | Page |
| --- | --- |
| Convex and Quadratic Programming | 1 |
| The Linearization Method | 43 |
| The Discrete Minimax Problem and Algorithms | 99 |
| Copyright | |

One other section is not shown.

### Other editions - View all

- The Linearization Method for Constrained Optimization, Boris N. Pshenichnyj, No preview available - 2011
- The Linearization Method for Constrained Optimization, Boris N. Pshenichnyj, No preview available - 1994

### Common terms and phrases

akpk, assumptions, auxiliary problem, bound, calculations, chosen, compute, conditions for extrema, conditions of Theorem, conjugate gradient method, consider, constraints of problem, convex function, convex programming, convex programming problem, convex set, coordinates, denote, dual problem, eigenvalues, exist numbers, extremum, F(xk, fact, fi(x, fi(xk, finite number, fo(x, fo(xk, follows, formula, function f(x, geometric progression, goal function, inequality, initial point, Is(x, iteration, Kuhn-Tucker vector, Lagrange function, Lagrange multipliers, Lemma, limit point, linear programming, linear programming problem, linearization algorithm, linearization method, linearly independent, Lipschitz condition, minimization, minimum, necessary conditions, neighbourhood, NF(xk, nonlinear, nonsingular, obtain, original problem, p(xk, pA(x, penalty function, point xk, positive-definite matrix, problem 2.4, problem P(0, Proof, quadratic function, quadratic programming problem, rate of convergence, satisfies the constraints, satisfy a Lipschitz, Section 1.2, sequence xk, solution of problem, solvable, sufficiently large, Suppose, Theorem 2.1, vector with components, virtue, whence, xk+1