A Tutorial on Meta-Reinforcement Learning

  • Author: Jacob Beck
  • Publisher:
  • ISBN-10: 1638285403
  • ISBN-13: 9781638285403
  • Format: 15.6 x 23.4 x 1 cm, paperback
  • Language: English

Description

While deep reinforcement learning (RL) has fueled multiple high-profile successes in machine learning, it is held back from more widespread adoption by its often poor data efficiency and the limited generality of the policies it produces. A promising approach for alleviating these limitations is to cast the development of better RL algorithms as a machine learning problem itself, in a process called meta-RL. Meta-RL considers a family of machine learning (ML) methods that learn to reinforcement learn. That is, meta-RL methods use sample-inefficient ML to learn sample-efficient RL algorithms, or components thereof. Meta-RL is most commonly studied in a problem setting where, given a distribution of tasks, the goal is to learn a policy that is capable of adapting to any new task from the task distribution with as little data as possible. This monograph describes the meta-RL problem setting in detail, along with its major variations. At a high level, the book discusses how meta-RL research can be clustered based on the presence of a task distribution and the learning budget available for each individual task. Using these clusters, meta-RL algorithms and applications are surveyed. The monograph concludes by presenting the open problems on the path to making meta-RL part of the standard toolbox for the deep RL practitioner.
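
As a rough illustration of the problem setting described above, the sketch below shows the outer/inner loop structure of meta-RL on a toy task distribution. Everything here is an assumption made for illustration: the tasks are Gaussian multi-armed bandits, the inner loop is epsilon-greedy value estimation, and the meta-update is a crude Reptile-style step on a shared initialization; none of this is an algorithm from the monograph itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task(n_arms=5):
    # A "task" is one multi-armed bandit: a vector of mean rewards
    # drawn from the task distribution.
    return rng.normal(0.0, 1.0, size=n_arms)

def adapt(task, init_values, n_steps=50, eps=0.1):
    # Inner loop: adapt to a single task on a small data budget with
    # epsilon-greedy exploration and incremental-mean value estimates,
    # starting from the meta-learned initialization.
    q = init_values.copy()
    counts = np.zeros_like(q)
    for _ in range(n_steps):
        a = int(rng.integers(len(q))) if rng.random() < eps else int(np.argmax(q))
        r = task[a] + rng.normal(0.0, 0.5)  # noisy reward from this task
        counts[a] += 1
        q[a] += (r - q[a]) / counts[a]      # incremental mean update
    return q

# Outer loop: sample tasks from the distribution and improve the
# meta-parameters (here, just the shared initialization of the value
# estimates) so that the inner loop adapts well on average. The update
# is a Reptile-style step: move the initialization toward adapted values.
meta_init = np.zeros(5)
for _ in range(200):
    task = sample_task()
    q_adapted = adapt(task, meta_init)
    meta_init += 0.05 * (q_adapted - meta_init)

# The meta-objective is performance *after* adaptation on a held-out task,
# not performance during meta-training.
test_task = sample_task()
q = adapt(test_task, meta_init)
print("adapted greedy arm:", int(np.argmax(q)),
      "| true best arm:", int(np.argmax(test_task)))
```

The point of the sketch is the structure rather than the algorithm: a task distribution, an inner loop with a small per-task data budget, and an outer loop whose objective is post-adaptation performance, matching the two clustering criteria (task distribution, per-task learning budget) mentioned in the description.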
