Numerical Methods for Constrained Optimization. Philip E. Gill, William Allan Murray (editors). Institute of Mathematics and Its Applications / National Physical Laboratory (Great Britain).
From inside the book
Page 6
... implies that g^T p is non-negative, so it is not possible to satisfy conditions (1.2.4) and (1.2.5) when g is in the cone. Conversely, when g is not in the cone, we let h be the point of the cone that is closest to g in the ...
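The cone argument above can be checked numerically. In the following sketch (my own illustration, not code from the book), the cone is assumed to be generated by the active constraint normals a_i with non-negative multipliers; the closest cone point h is found by non-negative least squares, and when g lies outside the cone the direction p = h - g satisfies g^T p < 0, i.e. it is a descent direction:

```python
import numpy as np
from scipy.optimize import nnls

# Columns a_1, a_2 generate the cone {A @ lam : lam >= 0} (illustrative data).
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])
g = np.array([1.0, -1.0])   # gradient; deliberately chosen outside the cone

# Closest point of the cone to g: minimize ||A @ lam - g|| subject to lam >= 0.
lam, _ = nnls(A, g)
h = A @ lam                  # h = sum_i lam_i a_i, the closest cone point
p = h - g                    # candidate search direction

print(h, g @ p)              # g^T p is negative, so p is a descent direction
```

With this data the closest cone point is h = (1, 0), and g^T p = -1 < 0, matching the claim that a descent direction exists exactly when g is not in the cone.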
Page 18
... imply that g^(k) is not a linear combination of the vectors a_i (i = 1, 2, ..., t - 1). Therefore Theorem 1.4 implies that F(x^(k) + p^(k)) is less than F(x^(k)). Since F(x) is a positive-definite quadratic function ...
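The descent property in this excerpt can be illustrated with a small null-space step for a linearly constrained positive-definite quadratic. This is a hedged sketch of the standard technique (the specific data, and the use of `scipy.linalg.null_space` for the basis Z, are my assumptions, not the book's): when the gradient is not a combination of the active constraint normals, the step p computed in the null space of those normals strictly reduces F while keeping the constraints satisfied.

```python
import numpy as np
from scipy.linalg import null_space

# Positive-definite quadratic F(x) = 1/2 x^T G x + c^T x (illustrative data).
G = np.array([[2.0, 0.0],
              [0.0, 2.0]])
c = np.array([-2.0, -4.0])
F = lambda x: 0.5 * x @ G @ x + c @ x

a = np.array([[1.0, 1.0]])   # one active constraint: a x = 0
x = np.zeros(2)              # feasible starting point
gk = G @ x + c               # gradient at x; not a multiple of the normal

Z = null_space(a)            # basis for the null space of the active normals
# Newton step within the null space; F is quadratic, so one step suffices.
p = -Z @ np.linalg.solve(Z.T @ G @ Z, Z.T @ gk)
x_new = x + p

print(F(x_new) < F(x), a @ x_new)   # F decreases, constraint still holds
```

The step stays on the constraint (a x_new = 0) and F(x_new) < F(x), as Theorem 1.4 asserts for this situation.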
Page 116
... implies that the sub-diagonal elements of the rth column of N1 are given by -v_j / v_r, j = r + 1, ..., n. We can rewrite equation (4.5.3) as L̄A = (I + w v^T) N1 N1^{-1} U = N1 (I + w̄ e_r^T) N1^{-1} U, where w̄ = N1^{-1} w. The matrix ...
Common terms and phrases
active constraints, active set, algorithm, barrier function, basis, Chapter, Cholesky factors, compute, condition number, d̄, defined, deleted, descent direction, diagonal, direct-search, direction of search, efficient, equality constraints, equations, estimates, evaluations, feasible point, feasible region, fill-in, Fletcher, formula, Gill and Murray, given, Goldfarb, gradient, Hessian matrix, implies, inactive constraints, inverse, iteration, jth column, Lagrange multipliers, Lagrangian function, linear constraints, linear programming, linearly constrained problem, linearly independent, LQ factorization, matrix G, minimize F(x), modified Newton method, non-singular, non-zero elements, nonlinear constraints, nonlinear programming, objective function, obtained, orthogonal, parameter, penalty function, positive definite, procedure, projection, quadratic approximation, quadratic function, quadratic programming, quasi-Newton methods, rate of convergence, reduced, rows, scalar, search direction, second derivatives, second-order, Section, sequence, simplex, solution, solving, sparse, sparse matrix, stationary point, step, steplength, storage, strategy, strong local minimum, techniques, Theorem, unconstrained minimization, updating, variables, vector, vertex, zero