
Simulation-based Algorithms for Markov Decision Processes: Communications and Control Engineering

Authors: Hyeong Soo Chang, Michael C. Fu, Jiaqiao Hu, Steven I. Marcus
Language: English | Paperback – 19 Oct 2010
Real-world problems modeled by Markov decision processes (MDPs) are often difficult to solve in practice because of the curse of dimensionality. In other cases, explicit specification of the MDP model parameters is not feasible, but simulation samples are available. For these settings, various sampling and population-based numerical algorithms for computing an optimal solution, in terms of a policy and/or value function, have been developed recently.
Here, this state-of-the-art research is brought together in a way that makes it accessible to researchers with varying interests and backgrounds. Many specific algorithms, illustrative numerical examples and rigorous theoretical convergence results are provided. The algorithms differ from the successful computational methods for solving MDPs based on neuro-dynamic programming or reinforcement learning. The algorithms can be combined with approximate dynamic programming methods that reduce the size of the state space and ameliorate the effects of dimensionality.
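The setting sketched above, an MDP whose transition and cost parameters are never written down but can only be sampled, can be illustrated with a short Monte Carlo rollout. Everything below (the two-state toy model, the costs, the parameter values) is an invented illustration, not an algorithm taken from the book.

```python
import random

# Toy 2-state MDP available only through a simulator: the transition
# probabilities and cost distributions below stand in for "unknown"
# model parameters that we can sample but not enumerate.
def simulate(state, action):
    """Draw (next_state, cost) from the toy dynamics."""
    if action == "move":
        next_state = 1 - state if random.random() < 0.9 else state
    else:
        next_state = state if random.random() < 0.8 else 1 - state
    cost = random.gauss(1.0 if state == 0 else 0.2, 0.1)
    return next_state, cost

def rollout_value(policy, state, horizon=50, gamma=0.95, num_runs=200):
    """Monte Carlo estimate of the expected discounted cost of `policy`."""
    total = 0.0
    for _ in range(num_runs):
        s, discount, acc = state, 1.0, 0.0
        for _ in range(horizon):
            s, c = simulate(s, policy(s))
            acc += discount * c
            discount *= gamma
        total += acc
    return total / num_runs

random.seed(0)
always_move = lambda s: "move"
print(round(rollout_value(always_move, state=0), 2))
```

Comparing such rollout estimates across candidate policies is the basic primitive that sampling-based methods refine.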

All formats and editions

Format                                  Price       Delivery
Paperback (2)                           613.49 lei  6-8 weeks
  SPRINGER LONDON – 7 Mar 2015          613.49 lei  6-8 weeks
  SPRINGER LONDON – 19 Oct 2010         794.07 lei  6-8 weeks
Hardback (1)                            623.22 lei  6-8 weeks
  SPRINGER LONDON – 20 Mar 2013         623.22 lei  6-8 weeks

From the series Communications and Control Engineering

Price: 794.07 lei

Old price: 968.39 lei

-18% New

Express points: 1191

Estimated price in foreign currency:
140.52€  164.77$  123.40£

Print-on-demand title

Economy delivery: 30 January - 13 February 2026

Phone orders: 021 569.72.76

Specifications

ISBN-13: 9781849966436
ISBN-10: 1849966435
Pages: 208
Illustrations: XVIII, 189 p., 38 illus.
Dimensions: 155 × 235 × 11 mm
Weight: 0.3 kg
Edition: Softcover reprint of hardcover 1st ed. 2007
Publisher: SPRINGER LONDON
Collection: Springer
Series: Communications and Control Engineering

Place of publication: London, United Kingdom

Target audience

Research

Contents

Markov Decision Processes
Multi-stage Adaptive Sampling Algorithms
Population-based Evolutionary Approaches
Model Reference Adaptive Search
On-line Control Methods via Simulation
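To give a feel for the adaptive-sampling theme named in the chapter titles, here is a deliberately simplified single-stage sketch: a UCB-style rule allocates a fixed simulation budget across actions and returns the empirically best one. The function names and the Gaussian toy rewards are assumptions made up for this example, not the book's algorithm.

```python
import math
import random

def adaptive_sample(reward_samplers, budget=500, c=2.0):
    """Spend `budget` simulations across actions using a UCB index and
    return the index of the action with the best empirical mean reward."""
    n = len(reward_samplers)
    counts = [0] * n
    means = [0.0] * n
    for t in range(1, budget + 1):
        if t <= n:
            a = t - 1  # sample each action once first
        else:
            a = max(range(n),
                    key=lambda i: means[i] + c * math.sqrt(math.log(t) / counts[i]))
        r = reward_samplers[a]()               # one simulation of action a
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]  # running mean update
    return max(range(n), key=lambda i: means[i])

random.seed(1)
actions = [lambda: random.gauss(0.0, 1.0),   # worse action
           lambda: random.gauss(1.0, 1.0)]   # better action
print(adaptive_sample(actions))
```

The point of the adaptive allocation is that simulation effort concentrates on promising actions instead of being spread uniformly.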

Biographical note

Steven I. Marcus received his Ph.D. and S.M. from the Massachusetts Institute of Technology in 1975 and 1972, respectively. He received a B.A. from Rice University in 1971. From 1975 to 1991, he was with the Department of Electrical and Computer Engineering at the University of Texas at Austin, where he was the L.B. (Preach) Meaders Professor in Engineering. He was Associate Chairman of the Department during the period 1984-89. In 1991, he joined the University of Maryland, College Park, where he was Director of the Institute for Systems Research until 1996. He is currently a Professor in the Electrical Engineering Department and the Institute for Systems Research.
Steven Marcus is a Fellow of IEEE, and a member of SIAM, AMS, and the Operations Research Society of America. He is an Editor of the SIAM Journal on Control and Optimization, and Associate Editor of Mathematics of Control, Signals, and Systems, Journal on Discrete Event Dynamic Systems, and Acta Applicandae Mathematicae. He has authored or co-authored more than 100 articles, conference proceedings, and book chapters.
Dr. Marcus's research interests lie in the areas of control and systems engineering, analysis and control of stochastic systems, Markov decision processes, stochastic and adaptive control, learning, fault detection, and discrete event systems, with applications in manufacturing, acoustics, and communication networks.
 
Dr. Fu received his Ph.D. and M.S. degrees in applied mathematics from Harvard University in 1989 and 1986, respectively. He received S.B. and S.M. degrees in electrical engineering and an S.B. degree in mathematics from the Massachusetts Institute of Technology in 1985. Since 1989, he has been at the University of Maryland, College Park, in the College of Business and Management.
Dr. Fu is a member of IEEE and the Institute for Operations Research and the Management Sciences (INFORMS). He is the Simulation Area Editor for Operations Research, an Associate Editor for Management Science, and has served on the Editorial Boards of the INFORMS Journal on Computing, Production and Operations Management, and IIE Transactions. He was on the program committee for the Spring 1996 INFORMS National Meeting, in charge of contributed papers. In 1995, he received the Maryland Business School's annual Allen J. Krowe Award for Teaching Excellence. He is the co-author (with Jian-Qiang Hu) of the book Conditional Monte Carlo: Gradient Estimation and Optimization Applications (0-7923-9873-4, 1997), which received the 1998 INFORMS College on Simulation Outstanding Publication Award. Other awards include the 1999 IIE Operations Research Division Award and a 1998 IIE Transactions Best Paper Award. In 2002, he received ISR's Outstanding Systems Engineering Faculty Award.
Dr. Fu's research interests lie in the areas of stochastic derivative estimation and simulation optimization of discrete-event systems, particularly with applications towards manufacturing systems, inventory control, and the pricing of financial derivatives.

Back cover text

Markov decision process (MDP) models are widely used for modeling sequential decision-making problems that arise in engineering, economics, computer science, and the social sciences. It is well-known that many real-world problems modeled by MDPs have huge state and/or action spaces, leading to the notorious curse of dimensionality that makes practical solution of the resulting models intractable. In other cases, the system of interest is complex enough that it is not feasible to specify some of the MDP model parameters explicitly, but simulation samples are readily available (e.g., for random transitions and costs). For these settings, various sampling and population-based numerical algorithms have been developed recently to overcome the difficulties of computing an optimal solution in terms of a policy and/or value function. Specific approaches include:
• multi-stage adaptive sampling;
• evolutionary policy iteration;
• evolutionary random policy search; and
• model reference adaptive search.
Simulation-based Algorithms for Markov Decision Processes brings this state-of-the-art research together for the first time and presents it in a manner that makes it accessible to researchers with varying interests and backgrounds. In addition to providing numerous specific algorithms, the exposition includes both illustrative numerical examples and rigorous theoretical convergence results. The algorithms developed and analyzed differ from the successful computational methods for solving MDPs based on neuro-dynamic programming or reinforcement learning and will complement work in those areas. Furthermore, the authors show how to combine the various algorithms introduced with approximate dynamic programming methods that reduce the size of the state space and ameliorate the effects of dimensionality.
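As a flavor of the last approach listed above, here is a compressed sketch in the spirit of model reference adaptive search: maintain a parameterized sampling distribution over candidate solutions and repeatedly re-fit it to an elite subset. This stripped-down version is essentially a smoothed cross-entropy update rather than the book's full algorithm, and the bit-string objective and all parameter values are invented for illustration.

```python
import random

def model_based_search(objective, n_bits=10, pop=100, elite_frac=0.1,
                       iters=30, smooth=0.7):
    """Maximize `objective` over bit strings by re-fitting a product-of-
    Bernoullis sampling distribution to the elite samples each iteration."""
    p = [0.5] * n_bits  # Bernoulli parameter per bit
    for _ in range(iters):
        samples = [[1 if random.random() < p[i] else 0 for i in range(n_bits)]
                   for _ in range(pop)]
        samples.sort(key=objective, reverse=True)
        elites = samples[:max(1, int(elite_frac * pop))]
        for i in range(n_bits):
            freq = sum(e[i] for e in elites) / len(elites)
            p[i] = smooth * freq + (1 - smooth) * p[i]  # smoothed re-fit
    return [round(pi) for pi in p]

random.seed(0)
best = model_based_search(sum)  # objective: number of ones in the string
print(best)
```

The sampling distribution, not any single candidate, is the object being optimized, which is what distinguishes this family from pointwise local search.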
The self-contained approach of this book will appeal not only to researchers in MDPs, stochastic modeling and control, and simulation, but will also be a valuable source of instruction and reference for students of control and operations research.

Features

• Provides practical modeling methods for many real-world problems with high dimensionality or complexity that have not hitherto been treatable with Markov decision processes
• Rigorous theoretical derivation of sampling and population-based algorithms enables the reader to expand on the work presented, in the certainty that new results will have a sound foundation
• First-time assimilation of many recently developed techniques and results in a form suitable for a broad readership of researchers and students
• Includes supplementary material: sn.pub/extras

Reviews

From the book reviews:
“The book consists of five chapters. … This well-written book is addressed to researchers in MDPs and applied modeling with an interest in numerical computations, but the book is also accessible to graduate students in operations research, computer science, and economics. The authors give many pseudocodes of algorithms, numerical examples, convergence analyses of algorithms, and bibliographical notes that can be very helpful for readers to understand the ideas presented in the book and to perform experiments on their own.” (Wiesław Kotarski, zbMATH, Vol. 1293, 2014)