## Dynamic Programming: Sequential Scientific Management

### Contents

| Chapter | Title | Page |
| --- | --- | --- |
| 1 | Discrete Dynamic Programs With a Certain Future and a Limited Horizon | 1 |
| 2 | Discrete Dynamic Programs With a Certain Future and an Unlimited Horizon | 80 |
| 3 | Discrete Dynamic Programs With a Random Future and a Limited Horizon | 102 |
| 4 | Discrete Dynamic Programs With a Random Future and an Unlimited Horizon: General Case | 136 |
| 5 | Discrete D.H. Dynamic Programs With Finite Markovian Chains | 162 |
| 6 | Various Generalizations | 255 |
| | Bibliography | 269 |
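Chapters 1 and 3 concern finite-horizon discrete dynamic programs, which are solved by Bellman's backward recursion: starting from the final period, the optimal value of each state is computed from the optimal values of the following period. The sketch below is illustrative only and not taken from the book; the states, decisions, rewards, and transitions in the usage example are hypothetical.

```python
def backward_induction(states, decisions, reward, transition, horizon):
    """Deterministic finite-horizon DP via Bellman's backward recursion.

    decisions(t, x)      -> iterable of decisions available at period t, state x
    reward(t, x, d)      -> immediate gain for decision d at period t, state x
    transition(t, x, d)  -> resulting state at period t + 1

    Returns the optimal total value from each initial state, and the
    optimal policy as a dict mapping (period, state) -> decision.
    """
    value = {x: 0.0 for x in states}  # terminal value: nothing earned after the horizon
    policy = {}
    for t in reversed(range(horizon)):
        new_value = {}
        for x in states:
            # Optimality equation: best immediate gain plus best future value.
            candidates = [(reward(t, x, d) + value[transition(t, x, d)], d)
                          for d in decisions(t, x)]
            best_val, best_d = max(candidates)
            new_value[x] = best_val
            policy[(t, x)] = best_d
        value = new_value
    return value, policy


# Hypothetical two-state example: being in state 1 earns 1 per period if you
# "stay"; "move" flips the state and earns nothing that period.
states = [0, 1]
decisions = lambda t, x: ["stay", "move"]
reward = lambda t, x, d: x if d == "stay" else 0
transition = lambda t, x, d: x if d == "stay" else 1 - x

value, policy = backward_induction(states, decisions, reward, transition, horizon=3)
# value == {0: 2.0, 1: 3.0}: from state 0 it pays to move first, then stay.
```

With a random future (Chapter 3), the same recursion applies with the successor value replaced by its expected value over the transition probabilities.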
