  • Random Thoughts on Deep Reinforcement Learning

    On model-based and model-free methods

    • Model-free methods cannot be the future of reinforcement learning, even though they currently outperform model-based methods. Their fatal flaw is the lack of interpretability: we cannot trust a policy without knowing why it takes a specific action, especially since it sometimes takes actions that look stupid and obviously wrong to us. Model-based methods relieve this concern to some extent, because the model gives us some knowledge about future states and outcomes. However, the model usually has to be learned, and it cannot be as accurate as the real environment. One way to address this is planning, especially tree search methods such as Monte Carlo Tree Search (MCTS). Tree search can reduce the variance of the learned model by bootstrapping at each node, much like TD methods, and it also offers better interpretability, which is critical.
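    The idea above — planning with a tree whose node statistics we can inspect — can be sketched with a minimal MCTS using UCB1 selection. Everything here is a toy assumption for illustration: the deterministic integer-state MDP, the goal/horizon constants, and the simulation budget are all made up, not from the original post.

    ```python
    import math
    import random

    random.seed(0)

    # Toy deterministic MDP (an assumption): states are integers, actions
    # move -1 or +1, and ending exactly at GOAL after HORIZON steps pays 1.
    ACTIONS = (-1, +1)
    GOAL, HORIZON = 3, 3

    def step(state, action):
        return state + action

    def reward(state):
        return 1.0 if state == GOAL else 0.0

    class Node:
        def __init__(self, state, depth):
            self.state, self.depth = state, depth
            self.children = {}          # action -> Node
            self.visits, self.value = 0, 0.0

    def rollout(state, depth):
        # Random rollout to the horizon; returns the terminal reward.
        while depth < HORIZON:
            state = step(state, random.choice(ACTIONS))
            depth += 1
        return reward(state)

    def select_action(node, c=1.4):
        # UCB1: bootstrap on each child's running value estimate instead of
        # trusting one long model rollout, which keeps the variance down.
        return max(
            ACTIONS,
            key=lambda a: float("inf")
            if a not in node.children or node.children[a].visits == 0
            else node.children[a].value / node.children[a].visits
            + c * math.sqrt(math.log(node.visits) / node.children[a].visits),
        )

    def simulate(node):
        if node.depth == HORIZON:
            ret = reward(node.state)
        elif node.visits == 0:
            ret = rollout(node.state, node.depth)   # leaf: estimate by rollout
        else:
            a = select_action(node)
            if a not in node.children:
                node.children[a] = Node(step(node.state, a), node.depth + 1)
            ret = simulate(node.children[a])
        node.visits += 1
        node.value += ret
        return ret

    def plan(state, n_sims=500):
        root = Node(state, 0)
        for _ in range(n_sims):
            simulate(root)
        # Pick the most-visited root action; the visit counts and value
        # estimates in the tree are exactly the interpretable record of
        # why that action was chosen.
        return max(root.children, key=lambda a: root.children[a].visits)

    print(plan(0))  # picks +1: stepping toward the goal
    ```

    The interpretability argument shows up concretely: after planning, one can print each child's visit count and mean value to see why the action was preferred, which a model-free policy network cannot offer.
    
    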

    • Another point is generalization. My view is that a learned model generalizes better than a learned policy. When we learn a policy in one environment and apply it to another, it tends to collapse, because the policy is usually overfitted to its training environment and a single wrong action can derail the whole trajectory. But if we learn a model in one environment and use it to predict in a similar one, it usually performs well, because model learning is just a case of supervised learning, and standard data augmentation methods can be applied easily. So, in my view, model-based methods combined with tree search can improve interpretability and generalization simultaneously.
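    The "model learning is just supervised learning, so augmentation applies" point can be sketched in a few lines. The linear ground-truth dynamics, the noise scale, and the gradient-descent fit below are all illustrative assumptions, not anything from the original post.

    ```python
    import random

    random.seed(0)

    # Hypothetical ground-truth dynamics (an assumption): s' = 0.9*s + 0.5*a.
    def true_step(s, a):
        return 0.9 * s + 0.5 * a

    # Collect (state, action, next_state) transitions in the training environment.
    data = [(s, a, true_step(s, a))
            for s in [random.uniform(-1, 1) for _ in range(50)]
            for a in (-1.0, 1.0)]

    # Data augmentation, exactly as in ordinary supervised learning:
    # jitter the input states with small noise so the fitted model does
    # not overfit the particular states that were visited.
    augmented = data + [(s + random.gauss(0, 0.01), a, s2)
                        for (s, a, s2) in data]

    # Fit s' ≈ w_s*s + w_a*a by SGD on squared prediction error.
    w_s, w_a, lr = 0.0, 0.0, 0.1
    for _ in range(200):
        for s, a, s2 in augmented:
            err = (w_s * s + w_a * a) - s2
            w_s -= lr * err * s
            w_a -= lr * err * a

    print(round(w_s, 2), round(w_a, 2))  # should be close to 0.9 and 0.5
    ```

    Because the fitted model recovers the underlying dynamics rather than a state-to-action mapping, its predictions remain useful in any nearby environment with similar dynamics — which is the claimed generalization advantage over a directly learned policy.
    
    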

  • Original post: https://www.cnblogs.com/initial-h/p/12208038.html