A Tutorial on Linear Function Approximators for Dynamic Programming and Reinforcement Learning

Alborz Geramifard

ISBN 10: 1601987609 ISBN 13: 9781601987600
Publisher: Now Publishers Inc, 2013
New Paperback

Seller: AHA-BUCH GmbH, Einbeck, Germany. Seller rating: 5 out of 5 stars.

AbeBooks seller since August 14, 2006


Description:

Printed after ordering; new. A Markov Decision Process (MDP) is a natural framework for formulating sequential decision-making problems under uncertainty. In recent years, researchers have greatly advanced algorithms for learning and acting in MDPs. This book reviews such algorithms, beginning with well-known dynamic programming methods for solving MDPs, such as policy iteration and value iteration; it then describes approximate dynamic programming methods, such as trajectory-based value iteration, and finally moves to reinforcement learning methods such as Q-Learning, SARSA, and least-squares policy iteration. It presents the algorithms in a unified framework, giving pseudocode together with memory and iteration complexity analysis for each. Empirical evaluations of these techniques, with four representations across four domains, provide insight into how these algorithms perform with various feature sets in terms of running time and performance. This tutorial provides practical guidance for researchers seeking to extend DP and RL techniques to larger domains through linear value function approximation. The practical algorithms and empirical successes outlined also form a guide for practitioners trying to weigh computational costs, accuracy requirements, and representational concerns. Decision making in large domains will always be challenging, but with the tools presented here this challenge is not insurmountable. Seller inventory number 9781601987600
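
The abstract names SARSA and linear value function approximation, in which the action-value function is represented as Q(s, a) ≈ θᵀφ(s, a) for a fixed feature map φ and a learned weight vector θ. As a rough illustration of how the two combine, here is a minimal Python sketch of SARSA with linear features; the five-state chain MDP, the one-hot feature map, and all hyperparameters are illustrative assumptions, not taken from the book:

```python
# Minimal sketch (not from the book): SARSA with a linear value function
# approximator, Q(s, a) = theta . phi(s, a), on an invented 5-state chain.
import numpy as np

n_states, n_actions = 5, 2              # toy chain; actions: 0 = left, 1 = right
gamma, alpha, epsilon = 0.95, 0.1, 0.1  # illustrative hyperparameters

def phi(s, a):
    """One-hot features; any fixed feature map of (s, a) would work here."""
    f = np.zeros(n_states * n_actions)
    f[s * n_actions + a] = 1.0
    return f

def step(s, a):
    """Deterministic transitions; reward 1 only for reaching the right end."""
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == n_states - 1)

theta = np.zeros(n_states * n_actions)  # linear weights, one per feature

def policy(s):
    """Epsilon-greedy with respect to the current linear Q estimate."""
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax([theta @ phi(s, a) for a in range(n_actions)]))

for _ in range(500):                    # episodes
    s = 0
    a = policy(s)
    for _ in range(20):                 # step limit per episode
        s2, r = step(s, a)
        a2 = policy(s2)
        # SARSA temporal-difference update applied to the weight vector
        td_error = r + gamma * (theta @ phi(s2, a2)) - theta @ phi(s, a)
        theta += alpha * td_error * phi(s, a)
        if s2 == n_states - 1:          # goal reached; end the episode
            break
        s, a = s2, a2

print(np.round(theta.reshape(n_states, n_actions), 2))
```

With one-hot features the update reduces to tabular SARSA; the same loop supports any fixed feature map φ, which is where linear approximation pays off in domains too large for a table.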


Bibliographic details

Title: A Tutorial on Linear Function Approximators for Dynamic Programming and Reinforcement Learning
Publisher: Now Publishers Inc
Publication date: 2013
Binding: Paperback
Condition: New

Best search results on AbeBooks


Geramifard, Alborz; Walsh, Thomas J.; Tellex, Stefanie; Chowdhary, Girish; Roy, Nicholas; How, Jonathan P.
Publisher: Now Publishers, 2013
ISBN 10: 1601987609 ISBN 13: 9781601987600
Used Softcover

Seller: Hay-on-Wye Booksellers, Hay-on-Wye, HEREF, United Kingdom

Seller rating: 5 out of 5 stars

Condition: Very Good. Unused; some outer edges have minor scuffs, the cover has light scratches, and some outer pages show marks from shelf wear, but the book content is in like-new condition. Seller inventory number 101703-7


Buy used

EUR 29.61
EUR 73.99 shipping
Ships from the United Kingdom to the USA

Quantity: 1 available
