Dynamic Programming: Models and Applications
This introduction to sequential decision processes covers the use of dynamic programming in studying models of resource allocation, methods for approximating solutions of control problems in continuous time, production control, decision making in the face of an uncertain future, and inventory control models. A prior course in operations research is a prerequisite. 1982 edition.
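As an illustrative sketch (not taken from the book itself), the principle of optimality behind such models can be written as a functional equation and solved recursively. A minimal Python example for the classic 0/1 knapsack problem, one of the resource-allocation models the book treats:

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack via the dynamic-programming functional equation
    f(i, c) = max(f(i-1, c), values[i] + f(i-1, c - weights[i]))."""
    # f[c] holds the best achievable value with remaining capacity c
    f = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # iterate capacities downward so each item is used at most once
        for c in range(capacity, w - 1, -1):
            f[c] = max(f[c], f[c - w] + v)
    return f[capacity]

# Hypothetical data: items with (value, weight) = (60,1), (100,2), (120,3)
# and capacity 5; the best choice is the last two items, value 220.
print(knapsack([60, 100, 120], [1, 2, 3], 5))  # -> 220
```

The downward sweep over capacities is the standard space-saving trick: it reuses a single one-dimensional table in place of the full two-dimensional stage/state table that the functional equation suggests.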