  • Learning to Explore in Motion and Interaction Tasks


    Miroslav Bogdanovic1 and Ludovic Righetti1,2
    https://arxiv.org/abs/1908.03731


    Model-free reinforcement learning suffers from the high sampling complexity inherent to robotic manipulation and locomotion tasks. Most successful approaches typically use random sampling strategies, which lead to slow policy convergence. In this paper we present a novel approach for efficient exploration that leverages previously learned tasks. We exploit the fact that the same system is used across many tasks and build a generative model for exploration based on data from previously solved tasks to improve learning of new tasks. The approach also enables continuous learning of improved exploration strategies as novel tasks are learned. Extensive simulations on a robot manipulator performing a variety of motion and contact interaction tasks demonstrate the capabilities of the approach. In particular, our experiments suggest that the exploration strategy can more than double learning speed, especially when rewards are sparse. Moreover, the algorithm is robust to task variations and parameter tuning, making it beneficial for complex robotic problems.
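    The core idea in the abstract can be sketched in code. The sketch below is illustrative only, not the paper's actual method: it assumes the "generative model for exploration" is a simple Gaussian fitted to actions recorded while solving earlier tasks on the same robot, from which exploration actions are then drawn instead of uniform random noise. The class name and the synthetic data are hypothetical.

    ```python
    import numpy as np

    class LearnedExplorationModel:
        """Toy stand-in for a generative exploration model: a single
        Gaussian fitted to actions from previously solved tasks."""

        def __init__(self, action_dim):
            # Before any data is seen, fall back to standard-normal noise,
            # i.e. ordinary random exploration.
            self.mean = np.zeros(action_dim)
            self.cov = np.eye(action_dim)

        def fit(self, past_task_actions):
            # past_task_actions: (n_samples, action_dim) actions gathered
            # while solving earlier tasks on the same system.
            self.mean = past_task_actions.mean(axis=0)
            # Small diagonal term keeps the covariance positive definite.
            self.cov = (np.cov(past_task_actions, rowvar=False)
                        + 1e-6 * np.eye(len(self.mean)))

        def sample(self, rng):
            # Exploration action drawn from the learned distribution
            # rather than from uniform random noise.
            return rng.multivariate_normal(self.mean, self.cov)

    rng = np.random.default_rng(0)
    # Hypothetical data: actions from two previously solved tasks whose
    # useful actions cluster in a similar region of the action space.
    past_actions = np.vstack([rng.normal(0.5, 0.1, size=(200, 3)),
                              rng.normal(0.4, 0.1, size=(200, 3))])
    model = LearnedExplorationModel(action_dim=3)
    model.fit(past_actions)
    action = model.sample(rng)
    ```

    The point of the sketch is the bias it introduces: samples concentrate where previous tasks found useful actions, which is what speeds up learning when rewards are sparse and undirected noise rarely reaches them.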

  • 原文地址:https://www.cnblogs.com/feifanrensheng/p/12390304.html