Quantity/Price
Product

Controlled Diffusion Processes (Stochastic Modelling and Applied Probability, no. 14)

Translated by A.B. Aries
English, Hardback – 12 Nov 1980
Stochastic control theory is a relatively young branch of mathematics. The beginning of its intensive development falls in the late 1950s and early 1960s. During that period an extensive literature appeared on optimal stochastic control using the quadratic performance criterion (see references in Wonham [76]). At the same time, Girsanov [25] and Howard [26] made the first steps in constructing a general theory, based on Bellman's technique of dynamic programming, developed by him somewhat earlier [4]. Two types of engineering problems engendered two different parts of stochastic control theory. Problems of the first type are associated with multistep decision making in discrete time, and are treated in the theory of discrete stochastic dynamic programming. For more on this theory, we note in addition to the work of Howard and Bellman, mentioned above, the books by Derman [8], Mine and Osaki [55], and Dynkin and Yushkevich [12]. Another class of engineering problems which encouraged the development of the theory of stochastic control involves time continuous control of a dynamic system in the presence of random noise. The case where the system is described by a differential equation and the noise is modeled as a time continuous random process is the core of the optimal control theory of diffusion processes. This book deals with this latter theory.
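To fix ideas, here is a standard sketch of the continuous-time model described above; the notation (drift b, diffusion σ, running payoff f, terminal payoff g, control α) is generic and chosen for illustration, not quoted from the book:

\[
dx_t = b(\alpha_t, t, x_t)\,dt + \sigma(\alpha_t, t, x_t)\,dw_t,
\qquad
v(s,x) = \sup_{\alpha}\, \mathbb{E}\!\left[\int_s^T f^{\alpha_t}(t, x_t)\,dt + g(x_T)\right],
\]

where $w_t$ is a Wiener process, $\alpha_t$ is the control strategy, and $v$ is the payoff (value) function studied in the chapters that follow.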
All formats and editions

Format / Edition                             Price                    Express
Paperback (2)                                40766 lei  7-9 weeks     +22192 lei  10-20 days
  Springer – 12 Oct 2011                     40766 lei  7-9 weeks     +22192 lei  10-20 days
  Springer Berlin, Heidelberg – 15 Oct 2008  45723 lei  7-9 weeks     +23489 lei  10-20 days
Hardback (1)                                 73757 lei  7-9 weeks     +48753 lei  10-20 days
  Springer – 12 Nov 1980                     73757 lei  7-9 weeks     +48753 lei  10-20 days

From the series Stochastic Modelling and Applied Probability

Price: 73757 lei

Old price: 79309 lei
-7%

Express points: 1106

Estimated price in foreign currency:
14362€ 14804$ 12175£

Print-on-demand title

Economy delivery 05-19 October
Express delivery 27 August-06 September for 49752 lei

Phone orders: 021 569.72.76

Specifications

ISBN-13: 9780387904610
ISBN-10: 0387904611
Pages: 308
Illustrations: XII, 308 p.
Dimensions: 155 x 235 x 19 mm
Weight: 0.63 kg
Edition: 1980
Publisher: Springer
Collection: Springer
Series: Stochastic Modelling and Applied Probability

Place of publication: New York, NY, United States

Target audience

Research

Contents

1 Introduction to the Theory of Controlled Diffusion Processes
  1. The Statement of Problems—Bellman’s Principle—Bellman’s Equation
  2. Examples of the Bellman Equations—The Normed Bellman Equation
  3. Application of Optimal Control Theory—Techniques for Obtaining Some Estimates
  4. One-Dimensional Controlled Processes
  5. Optimal Stopping of a One-Dimensional Controlled Process
  Notes
2 Auxiliary Propositions
  1. Notation and Definitions
  2. Estimates of the Distribution of a Stochastic Integral in a Bounded Region
  3. Estimates of the Distribution of a Stochastic Integral in the Whole Space
  4. Limit Behavior of Some Functions
  5. Solutions of Stochastic Integral Equations and Estimates of the Moments
  6. Existence of a Solution of a Stochastic Equation with Measurable Coefficients
  7. Some Properties of a Random Process Depending on a Parameter
  8. The Dependence of Solutions of a Stochastic Equation on a Parameter
  9. The Markov Property of Solutions of Stochastic Equations
  10. Ito’s Formula with Generalized Derivatives
  Notes
3 General Properties of a Payoff Function
  1. Basic Results
  2. Some Preliminary Considerations
  3. The Proof of Theorems 1.5–1.7
  4. The Proof of Theorems 1.8–1.11 for the Optimal Stopping Problem
  Notes
4 The Bellman Equation
  1. Estimation of First Derivatives of Payoff Functions
  2. Estimation from Below of Second Derivatives of a Payoff Function
  3. Estimation from Above of Second Derivatives of a Payoff Function
  4. Estimation of a Derivative of a Payoff Function with Respect to t
  5. Passage to the Limit in the Bellman Equation
  6. The Approximation of Degenerate Controlled Processes by Nondegenerate Ones
  7. The Bellman Equation
  Notes
5 The Construction of ε-Optimal Strategies
  1. ε-Optimal Markov Strategies and the Bellman Equation
  2. ε-Optimal Markov Strategies. The Bellman Equation in the Presence of Degeneracy
  3. The Payoff Function and Solution of the Bellman Equation: The Uniqueness of the Solution of the Bellman Equation
  Notes
6 Controlled Processes with Unbounded Coefficients: The Normed Bellman Equation
  1. Generalization of the Results Obtained in Section 3.1
  2. General Methods for Estimating Derivatives of Payoff Functions
  3. The Normed Bellman Equation
  4. The Optimal Stopping of a Controlled Process on an Infinite Interval of Time
  5. Control on an Infinite Interval of Time
  Notes
Appendices
  1. Some Properties of Stochastic Integrals
  2. Some Properties of Submartingales

Reviews

From the reviews:
“The book treats a large class of fully nonlinear parabolic PDEs via probabilistic methods. … The monograph may be strongly recommended as an excellent reading to PhD students, postdocs et al working in the area of controlled stochastic processes and/or nonlinear partial differential equations of the second order. … recommended to a wider audience of all students specializing in stochastic analysis or stochastic finance starting from MSc level.” (Alexander Yu Veretennikov, Zentralblatt MATH, Vol. 1171, 2009)

Back cover text

This book deals with the optimal control of solutions of fully observable Itô-type stochastic differential equations. The validity of the Bellman differential equation for payoff functions is proved and rules for optimal control strategies are developed.
Topics include optimal stopping; one-dimensional controlled diffusion; the Lp-estimates of stochastic integral distributions; the existence theorem for stochastic equations; the Itô formula for functions; and the Bellman principle, equation, and normalized equation.
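For orientation, a standard form of the Bellman (dynamic programming) equation for the payoff function under the usual nondegeneracy and smoothness assumptions is sketched below; the symbols (a, b, f, g, α) are generic coefficients and payoffs used for illustration, not a quotation from the text:

\[
\frac{\partial v}{\partial t}(t,x)
+ \sup_{\alpha}\Big[\, a^{ij}(\alpha,t,x)\, v_{x^i x^j}(t,x)
+ b^i(\alpha,t,x)\, v_{x^i}(t,x)
+ f^{\alpha}(t,x) \Big] = 0,
\qquad v(T,x) = g(x),
\]

with $a = \tfrac{1}{2}\sigma\sigma^{*}$ and summation over repeated indices understood; in optimal stopping problems the payoff additionally satisfies $v \ge g$, with the equation holding wherever $v > g$.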

Features

Includes supplementary material: sn.pub/extras