Learning to Co-operate in Multi-Agent Systems: Experiments with the RoboCup Simulator

115,55 € (list price 128,39 €); 10% off with code EXTRA
  • Ships within 10–14 business days.


Description

In recent years, two major areas of computer science have begun to converge. Artificial intelligence research is moving towards realistic domains that require real-time responses, and real-time systems are moving towards more complex applications that require intelligent behaviour. This book addresses the question of whether agents can learn to become individually skilled and also learn to co-operate in the presence of both teammates and adversaries in a complex, real-time, noisy environment with no communication. To answer this question, the work starts by presenting a multi-threaded agent architecture capable of dealing with the logical and timing challenges of such an environment. The decision-making process is broken down into simple modules that link an agent's perception to its actions. The book demonstrates how a sparse distributed memory model can be used as a generalisation component for tasks that involve large state spaces. It further demonstrates how reinforcement learning can be linked to such a memory model to produce intelligent action. Experimental results show how a learned policy can outperform fixed, hand-coded ones.
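
The blurb's central technique, a sparse distributed memory used as a generalisation layer for reinforcement learning over large state spaces, can be sketched roughly as follows. This is a minimal illustrative sketch, not the book's actual implementation: the class name, memory sizes, activation radius, and learning rate are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

class SDM:
    """Minimal Kanerva-style sparse distributed memory that stores
    one value per action (all parameters are illustrative)."""

    def __init__(self, n_locations=512, dim=64, radius=28, n_actions=4):
        # Hard locations: fixed random binary addresses.
        self.addresses = rng.integers(0, 2, size=(n_locations, dim))
        # One counter per location per action (here used as Q-values).
        self.values = np.zeros((n_locations, n_actions))
        self.radius = radius

    def _active(self, state_bits):
        # Activate every location within Hamming distance `radius`
        # of the binary-coded state.
        dists = np.count_nonzero(self.addresses != state_bits, axis=1)
        return dists <= self.radius

    def read(self, state_bits):
        # Q-estimate = mean of the activated locations' counters.
        act = self._active(state_bits)
        if not act.any():
            return np.zeros(self.values.shape[1])
        return self.values[act].mean(axis=0)

    def write(self, state_bits, action, target, lr=0.1):
        # Move every activated counter for `action` toward the target;
        # many nearby states share locations, which gives generalisation.
        act = self._active(state_bits)
        self.values[act, action] += lr * (target - self.values[act, action])

# One Q-learning backup on top of the memory (illustrative values):
sdm = SDM()
s  = rng.integers(0, 2, 64)   # current state, binary-coded
s2 = rng.integers(0, 2, 64)   # next state
a, r, gamma = 1, 1.0, 0.9
target = r + gamma * sdm.read(s2).max()
sdm.write(s, a, target)
```

Because nearby states activate overlapping sets of hard locations, a single update generalises to similar states, which is what makes this memory model usable with reinforcement learning in large state spaces.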

  • Author: Kostas Kostiadis
  • Publisher:
  • ISBN-10: 3639102592
  • ISBN-13: 9783639102598
  • Format: 15.2 x 22.9 x 0.8 cm, softcover
  • Language: English

Reviews

  • No reviews
0 customers have rated this item.