Humanoid robot control policy and interaction design. A study on simulation to machine deployment



Description

Technical Report from the year 2019 in the subject Engineering - Robotics, grade: 9, language: English, abstract: Robotic agents can learn various tasks by simulating many years of robotic interaction with the environment, which is not possible with real robots. With an abundance of replay data and the increasing fidelity of simulators in modelling complex physical interaction between robots and their environment, we can make agents learn tasks that would otherwise require a lifetime to master. However, the real benefits of such training are only realized if the learned behavior transfers to real machines. Although simulations are an effective environment for training agents, providing a safe way to test and train them, policies trained in simulation often do not transfer well to the real world. This difficulty is compounded by the fact that deep-learning-based optimization algorithms often exploit simulator flaws to "cheat" the simulator and reap higher reward values. We therefore apply commonly used reinforcement learning algorithms to train a simulated agent modelled on the Aldebaran NAO humanoid robot. The problem of transferring simulated experience to real life is called the reality gap. To bridge the reality gap between the simulated and real agents, we employ a Difference model that learns the difference between the state distributions of the real and simulated agents. The robot is trained on two basic tasks: navigation and bipedal walking. Deep reinforcement learning algorithms such as Deep Q-Networks (DQN) and Deep Deterministic Policy Gradients (DDPG) are used to achieve proficiency in these tasks. We then evaluate the performance of the learned policies and transfer them to a real robot using a Difference model built as an addition to the DDPG algorithm.
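The abstract's central idea, a Difference model that learns the discrepancy between simulated and real dynamics, can be sketched in a toy form. The dynamics functions, the linear least-squares model, and all names below are illustrative assumptions for this sketch, not the report's actual implementation (which couples a learned model to DDPG):

```python
# Toy illustration (not the report's implementation): learn the
# difference between a simulator's next-state prediction and the
# real system's, then use it to correct the simulator.
import numpy as np

rng = np.random.default_rng(0)

def sim_step(state, action):
    # Idealized simulator dynamics (hypothetical toy model).
    return state + 0.1 * action

def real_step(state, action):
    # "Real" dynamics with an unmodeled gain error and bias.
    return state + 0.12 * action + 0.05

# Paired transitions from the simulator and the real system.
states = rng.uniform(-1.0, 1.0, size=(200, 1))
actions = rng.uniform(-1.0, 1.0, size=(200, 1))
residual = real_step(states, actions) - sim_step(states, actions)

# Fit a linear difference model: residual ~ [state, action, 1] @ w.
X = np.hstack([states, actions, np.ones((200, 1))])
w, *_ = np.linalg.lstsq(X, residual, rcond=None)
w = w.ravel()

def corrected_step(state, action):
    # Simulator prediction plus the learned difference.
    return sim_step(state, action) + np.array([state, action, 1.0]) @ w

# The corrected simulator now tracks the real system closely.
print(abs(corrected_step(0.3, 0.5) - real_step(0.3, 0.5)))
```

In the report this role is played by a Difference model added on top of the DDPG algorithm; the least-squares fit above only demonstrates the residual-learning principle behind bridging the reality gap.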

  • Author: Suman Deb
  • Publisher:
  • Year: 2019
  • Pages: 104
  • ISBN-10: 3668993459
  • ISBN-13: 9783668993457
  • Format: 14.8 x 21 x 0.6 cm, softcover
  • Language: English


Reviews

  • No reviews
0 customers have rated this item.