## Optimal Control: An Introduction to the Theory with Applications

Systems that evolve with time occur frequently in nature, and modelling the behavior of such systems provides an important application of mathematics. These systems can be completely deterministic, but it may also be possible to control their behavior by intervention through "controls." The theory of optimal control is concerned with determining such controls which, at minimum cost, either direct the system along a given trajectory or enable it to reach a given point in its state space. This textbook is a straightforward introduction to the theory of optimal control, with an emphasis on presenting many different applications. Professor Hocking has taken pains to ensure that the theory is developed to display the main themes of the arguments, but without using sophisticated mathematical tools. Problems in this setting arise across a wide range of subjects, and there are illustrative examples of systems from fields as diverse as dynamics, economics, population control, and medicine. Throughout there are many worked examples, and numerous exercises (with solutions) are provided.


### Contents

| Section | Page |
| --- | --- |
| OPTIMAL CONTROL PROBLEMS | 1 |
| SYSTEMS OF DIFFERENTIAL EQUATIONS | 14 |
| PART A: TIME-OPTIMAL CONTROL OF LINEAR | 29 |
| TIME-OPTIMAL CONTROL | 46 |
| Exercises 4 | 64 |
| Exercises 5 | 79 |
| Exercises 6 | 97 |
| Exercises 7 | 111 |
| OUTLINE SOLUTIONS TO THE EXERCISES: Exercises 1 | 217 |
| Exercises 3 | 219 |
| Exercises 4 | 222 |
| Exercises 5 | 224 |
| Exercises 6 | 226 |
| Exercises 7 | 229 |
| Exercises 8 | 233 |
| Exercises 9 | 236 |
