Introduction to Multi-Armed Bandits

Description

This book gives a broad and accessible introduction to multi-armed bandits, a rich, multi-disciplinary area of increasing importance. The material is teachable by design: each chapter corresponds to one week of a course. There are no prerequisites other than a certain level of mathematical maturity, roughly corresponding to a basic undergraduate course on algorithms. Multi-armed bandits are a simple but very powerful framework for algorithms that make decisions over time under uncertainty. An enormous, multi-dimensional body of work has accumulated over the years. How does one present this work, let alone make it teachable? The book partitions it into a dozen or so big directions. Each chapter handles one direction, covers the first-order concepts and results on a technical level, and provides a detailed literature review for further exploration. While most of the book is on learning theory, the last three chapters cover various connections to economics and operations research. The book aims to convey that multi-armed bandits are both deeply theoretical and deeply practical. Apart from all the math, the book is careful about motivation and discusses the practical aspects in considerable detail (based on the system for contextual bandits developed at Microsoft Research). Lecturers can use this book for an introductory course on the subject. Such a course would be complementary to graduate-level courses on online convex optimization and reinforcement learning.

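The framework the description refers to can be made concrete with a small sketch (not taken from the book): an algorithm repeatedly chooses one of several "arms" with unknown reward distributions and must balance exploring poorly understood arms against exploiting the best-looking one. The UCB1 rule below is one standard instance; the function names and the arm success probabilities are illustrative assumptions, not material from the book.

```python
import math
import random

def ucb1(reward_fns, horizon):
    """Minimal UCB1 sketch: pull each arm once, then always pull the arm
    with the highest upper confidence bound mean + sqrt(2 ln t / n)."""
    k = len(reward_fns)
    counts = [0] * k      # number of pulls per arm
    means = [0.0] * k     # empirical mean reward per arm
    total_reward = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1   # initialization round: try every arm once
        else:
            arm = max(range(k),
                      key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]))
        r = reward_fns[arm]()
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]   # incremental mean update
        total_reward += r
    return total_reward, means

# Example: three Bernoulli arms with made-up success probabilities.
arms = [lambda p=p: 1.0 if random.random() < p else 0.0 for p in (0.3, 0.5, 0.7)]
reward, estimates = ucb1(arms, horizon=10_000)
print(reward, estimates)
```

With enough pulls the empirical means concentrate around the true probabilities and the algorithm spends most of its budget on the best arm; this exploration-exploitation trade-off is the core theme the book develops.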
EXTRA 10% discount with code: EXTRA

172,52 € (regular price: 191,69 €)
Ships in 10–14 business days.

The discount code is valid on purchases of 10 € or more. Discounts do not stack.

Reviews

  • No reviews
0 customers have rated this item.