Dynamic programming: models and applications
This introduction to sequential decision processes covers the use of dynamic programming in studying models of resource allocation, methods for approximating solutions of control problems in continuous time, production control, decision-making in the face of an uncertain future, and inventory control models. A prior course in operations research is a prerequisite. 1982 edition.
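To give a flavor of the resource-allocation models described above, here is a minimal sketch of a stagewise dynamic program that splits a budget of discrete units across activities. It is illustrative only and not taken from the book; the reward data and function name are made up.

```python
# Illustrative sketch: a tiny resource-allocation dynamic program of the kind
# this text studies. Activity rewards and the budget are made-up data.

def allocate(rewards, budget):
    """Split `budget` integer units across activities to maximize total reward.

    `rewards[i][x]` is the reward from granting x units to activity i
    (x ranges over 0..budget). Returns (best value, allocation list).
    """
    n = len(rewards)
    # best[y] = max reward achievable with y units over the activities seen so far
    best = [0.0] * (budget + 1)
    choice = [[0] * (budget + 1) for _ in range(n)]
    for i, r in enumerate(rewards):
        new = [0.0] * (budget + 1)
        for y in range(budget + 1):
            # functional equation: optimize this stage's grant x, recurse on the rest
            val, x_star = max((r[x] + best[y - x], x) for x in range(y + 1))
            new[y] = val
            choice[i][y] = x_star
        best = new
    # recover an optimal allocation by backtracking the recorded choices
    alloc, y = [], budget
    for i in range(n - 1, -1, -1):
        x = choice[i][y]
        alloc.append(x)
        y -= x
    alloc.reverse()
    return best[budget], alloc

# Example: two activities, 3 units to allocate; rewards indexed by units granted
rewards = [[0, 4, 6, 7], [0, 3, 5, 6]]
value, alloc = allocate(rewards, 3)  # value is 9, e.g. 1 unit + 2 units
```

The inner `max` is one instance of the functional-equation / principle-of-optimality viewpoint the book develops: the value of a budget at one stage is defined recursively in terms of the optimal values of the remaining budget at later stages.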
Contents
Allocation: Marginal Analysis
A Markov Decision Model
(5 other sections not shown)