Methods of optimization
Nonlinear programming. Kuhn-Tucker necessary conditions. Saddle-point property of the Lagrangian function. The constraint qualification. Search methods for unconstrained optimization. Grid search. Hooke and Jeeves' method. Spendley, Hext and Himsworth's method. Nelder and Mead's method. Gradient methods for unconstrained optimization. The method of steepest descent. The Newton-Raphson method. The Davidon-Fletcher-Powell method. Constrained optimization. Hemstitching. The gradient projection method. Penalty functions. Dynamic programming. The allocation problem. Oriented networks. The farmer's problem.
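Among the gradient methods listed above, the method of steepest descent is the simplest: from the current point, step in the direction of the negative gradient. A minimal sketch (not taken from the book; the test function and fixed step length are assumptions for illustration):

```python
# Method of steepest descent, sketched on an assumed quadratic
# f(x) = (x0 - 1)^2 + 2*(x1 + 2)^2, whose minimum is at (1, -2).

def grad(x):
    # Gradient of the assumed test function f.
    return [2.0 * (x[0] - 1.0), 4.0 * (x[1] + 2.0)]

def steepest_descent(x, step=0.1, iters=200):
    # Repeatedly move a fixed step length along the negative gradient.
    # (The book's full method chooses the step by a linear search;
    # a constant step is used here only to keep the sketch short.)
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

x_min = steepest_descent([0.0, 0.0])
print(x_min)  # converges toward the minimum at (1, -2)
```

For this quadratic the iteration contracts each coordinate geometrically, so a few hundred fixed-length steps already land very close to the minimum; on ill-conditioned functions, steepest descent is known to zigzag, which motivates the conjugate-direction and DFP methods also covered in the book.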