Learning in Cooperative Multi-Agent Systems (e-book, used book) | bookbook.eu

176,39 € (list price 195,99 €); 10% off with code: EXTRA
  • Ships in 10–14 business days.

Description

In a distributed system, a number of individually acting agents coexist. To achieve a common goal, coordinated cooperation between the agents is crucial. Many real-world applications are naturally formulated in terms of spatially or functionally distributed entities; job-shop scheduling is one such application. Multi-agent reinforcement learning (RL) methods allow cooperative policies to be acquired automatically, based solely on a specification of the desired joint behavior of the whole system. However, decentralizing the control and observation of the system among independent agents has a significant impact on problem complexity. The author, Thomas Gabel, addresses the intricacy of learning and acting in multi-agent systems with two complementary approaches. He identifies a subclass of general decentralized decision-making problems with provably reduced complexity. Moreover, he presents several novel model-free multi-agent RL algorithms that quickly obtain approximate solutions in the vicinity of the optimum. All proposed algorithms are evaluated on a range of established scheduling benchmark problems.

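The book's own algorithms target decentralized scheduling and are not reproduced here. Purely as a generic illustration of the model-free multi-agent RL setting the description refers to, the sketch below (an assumption-laden toy example, not the author's method) lets two independent Q-learning agents learn from a shared reward in a small cooperative matrix game; each agent observes only its own action and the joint payoff, which mirrors the decentralized-control difficulty mentioned above.

import random

# Toy sketch only: two independent Q-learning agents in a repeated
# cooperative matrix game. Each agent updates its own Q-values from the
# shared reward and treats the other agent as part of the environment.

# Shared payoff matrix: REWARD[a0][a1] is the joint reward for both agents.
REWARD = [
    [11, -3, 0],
    [-3,  7, 6],
    [ 0,  0, 5],
]

ACTIONS = range(3)
ALPHA, EPSILON, EPISODES = 0.1, 0.1, 20000

# One stateless Q-table per agent (one value per own action).
q = [[0.0] * 3 for _ in range(2)]

def choose(agent):
    """Epsilon-greedy selection over the agent's own Q-values."""
    if random.random() < EPSILON:
        return random.choice(list(ACTIONS))
    return max(ACTIONS, key=lambda a: q[agent][a])

for _ in range(EPISODES):
    a0, a1 = choose(0), choose(1)
    r = REWARD[a0][a1]                  # shared (cooperative) reward
    q[0][a0] += ALPHA * (r - q[0][a0])  # independent update, agent 0
    q[1][a1] += ALPHA * (r - q[1][a1])  # independent update, agent 1

print("Agent 0 Q-values:", [round(v, 2) for v in q[0]])
print("Agent 1 Q-values:", [round(v, 2) for v in q[1]])
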
Reviews

  • No reviews
0 customers have rated this item.