
Sethi / Thompson Optimal Control Theory

Applications to Management Science and Economics

E-Book, English, 504 pages

ISBN: 978-0-387-29903-7
Publisher: Springer US
Format: PDF
Copy protection: Adobe DRM



Optimal control methods are used to determine optimal ways to control a dynamic system. The theoretical work in this field serves as a foundation for the book, which the authors have applied to business management problems developed from their research and classroom instruction.

Sethi and Thompson have provided the management science and economics communities with a thoroughly revised edition of their classic text on optimal control theory. The new edition has been carefully refined, with particular attention to the presentation of the text and graphic material. Chapters cover a range of topics including finance, production and inventory problems, marketing problems, machine maintenance and replacement, problems of optimal consumption of natural resources, and applications of control theory to economics. The book contains new results that were not available when the first edition was published, as well as an expansion of the material on stochastic optimal control theory.

Target audience


Professional/practitioner

Further information & material


Contents:
Preface to First Edition
Preface to Second Edition
What is Optimal Control Theory?
The Maximum Principle: Continuous Time
The Maximum Principle: Mixed Inequality Constraints
The Maximum Principle: General Inequality Constraints
Applications to Finance
Applications to Production and Inventory
Applications to Marketing
The Maximum Principle: Discrete Time
Maintenance and Replacement
Applications to Natural Resources
Economic Applications
Differential Games, Distributed Systems, and Impulse Control
Stochastic Optimal Control
Solutions of Linear Differential Equations
Calculus of Variations and Optimal Control Theory
An Alternative Derivation of the Maximum Principle
Special Topics in Optimal Control
Answers to Selected Exercises
Bibliography
Index
List of Figures
List of Tables


Chapter 13: Stochastic Optimal Control (p. 341)

In previous chapters we assumed that the state variables of the system were known with certainty. If this were not the case, the state of the system over time would be a stochastic process. In addition, it might not be possible to measure the value of the state variables at time t. In this case, one would have to measure functions of the state variables. Moreover, the measurements are usually noisy, i.e., they are subject to errors. Thus, a decision maker is faced with the problem of making good estimates of these state variables from noisy measurements on functions of them.

The process of estimating the values of the state variables is called optimal filtering. In Section 13.1, we will discuss one particular filter, called the Kalman filter, and its continuous-time analogue, called the Kalman-Bucy filter. It should be noted that while optimal filtering provides optimal estimates of the values of the state variables from noisy measurements of related quantities, no control is involved.
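As a rough illustration of the filtering idea (not code from the book), the discrete-time Kalman filter alternates a prediction step, which propagates the current estimate through the system dynamics, with an update step, which corrects that prediction using the latest noisy measurement. The sketch below assumes a linear-Gaussian system; the matrices A, C, Q, R and the function name are illustrative assumptions.

```python
import numpy as np

def kalman_filter_step(x_est, P, z, A, C, Q, R):
    """One predict/update cycle of a discrete-time Kalman filter (illustrative).

    x_est : prior state estimate, shape (n,)
    P     : prior estimate covariance, shape (n, n)
    z     : new noisy measurement, shape (m,)
    A     : state transition matrix, shape (n, n)
    C     : measurement matrix, shape (m, n)
    Q, R  : process and measurement noise covariances
    """
    # Predict: propagate the estimate and its covariance through the dynamics.
    x_pred = A @ x_est
    P_pred = A @ P @ A.T + Q

    # Update: blend the prediction with the measurement via the Kalman gain.
    S = C @ P_pred @ C.T + R                        # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x_pred + K @ (z - C @ x_pred)           # corrected state estimate
    P_new = (np.eye(len(x_est)) - K @ C) @ P_pred   # corrected covariance
    return x_new, P_new
```

Running this step repeatedly over a measurement sequence yields the optimal (minimum mean-square error) state estimates for the linear-Gaussian case; the Kalman-Bucy filter is the continuous-time counterpart.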

When a control is involved, we are faced with a stochastic optimal control problem. Here, the state of the system is represented by a controlled stochastic process. In Section 13.2, we shall formulate a stochastic optimal control problem which is governed by stochastic differential equations. We shall only consider stochastic differential equations of a type known as Ito equations. These equations arise when the state equations, such as those we have seen in the previous chapters, are perturbed by Markov diffusion processes. Our goal in Section 13.2 will be to synthesize optimal feedback controls for systems subject to Ito equations in a way that maximizes the expected value of a given objective function. In Section 13.3, we shall extend the production planning model of Chapter 6 to allow for some uncertain disturbances. We shall obtain an optimal production policy for the stochastic production planning problem thus formulated.
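For orientation, a one-dimensional problem of this kind typically takes the following generic form: a controlled Ito equation, an expected-value objective, and a Hamilton-Jacobi-Bellman (HJB) equation from which the feedback control is synthesized. The notation below (f, sigma, F, S, V) is illustrative and need not match the book's own symbols.

```latex
% Controlled Ito state equation driven by a standard Wiener process W_t
dX_t = f(X_t, U_t, t)\,dt + \sigma(X_t, U_t, t)\,dW_t, \qquad X_0 = x_0.

% Objective: maximize the expected accumulated reward plus salvage value
V(x,t) = \max_{U}\; \mathbb{E}\!\left[\int_t^T F(X_s, U_s, s)\,ds + S(X_T, T)\;\Big|\; X_t = x\right].

% Hamilton-Jacobi-Bellman equation characterizing the value function
0 = \max_{u}\left\{ F(x,u,t) + V_t(x,t) + V_x(x,t)\,f(x,u,t)
      + \tfrac{1}{2}\,\sigma^2(x,u,t)\,V_{xx}(x,t) \right\},
\qquad V(x,T) = S(x,T).
```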

In Section 13.4, we solve an optimal stochastic advertising problem explicitly. The problem is a modification as well as a stochastic extension of the optimal control problem of the Vidale-Wolfe advertising model treated in Section 7.2.4.
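As a rough numerical illustration only (not the model actually solved in Section 13.4), market-share dynamics of the Vidale-Wolfe type perturbed by Brownian noise can be simulated with an Euler-Maruyama scheme. The drift and diffusion forms and all parameter values below are assumptions made for the sketch.

```python
import numpy as np

def simulate_market_share(x0=0.1, u=1.0, r=0.5, delta=0.1, sigma=0.05,
                          T=10.0, n_steps=1000, seed=0):
    """Euler-Maruyama simulation of dX = (r*u*(1 - X) - delta*X) dt + sigma dW.

    A stochastic perturbation of Vidale-Wolfe-type advertising dynamics;
    the drift/diffusion form and parameters are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        drift = r * u * (1.0 - x[k]) - delta * x[k]
        x[k + 1] = x[k] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        x[k + 1] = min(max(x[k + 1], 0.0), 1.0)  # keep the market share in [0, 1]
    return x
```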

In Section 13.5, we will introduce investment decisions in the consumption model of Example 1.3. We will consider both risk-free and risky investments. Our goal will be to find optimal consumption and investment policies in order to maximize the discounted value of the utility of consumption over time. In Section 13.6, we shall conclude the chapter by mentioning other types of stochastic optimal control problems that arise in practice. In particular, production planning problems where production is done by machines that are unreliable or failure-prone can be formulated as stochastic optimal control problems involving jump Markov processes. Such problems are treated in Sethi and Zhang (1994a, 1994c). Karatzas and Shreve (1998) address stochastic optimal control problems in finance involving more general stochastic processes including jump processes.
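For context, in the classical consumption-investment setting with one risk-free and one risky asset and constant relative risk aversion (CRRA) utility, the optimal fraction of wealth held in the risky asset has a well-known closed form (the Merton rule). The snippet below evaluates it for illustrative parameter values; it is stated here as an indication of the flavor of Section 13.5, not as the book's exact formulation.

```python
def merton_risky_fraction(mu, r, sigma, gamma):
    """Optimal constant fraction of wealth in the risky asset (Merton rule).

    mu    : expected return of the risky asset
    r     : risk-free rate
    sigma : volatility of the risky asset
    gamma : coefficient of relative risk aversion (gamma > 0)
    """
    return (mu - r) / (gamma * sigma ** 2)

# Example: mu = 8%, r = 3%, sigma = 20%, gamma = 2  ->  62.5% in the risky asset
print(merton_risky_fraction(0.08, 0.03, 0.20, 2.0))
```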

