  • Random Thoughts on Deep Reinforcement Learning

    About model-based and model-free methods

    • Model-free methods cannot be the future of reinforcement learning, even though these algorithms currently outperform model-based methods. Their fatal flaw is the lack of interpretability: we cannot trust a policy without knowing why it takes a specific action, especially since it sometimes takes actions that look stupid and obviously wrong to us. Model-based methods relieve this concern to some extent, because they give us some knowledge about future states and outcomes. However, in most cases the model has to be learned, and it can never be as accurate as the real environment. One way to address this is planning, especially tree search methods such as Monte Carlo Tree Search (MCTS). Tree search can reduce the variance introduced by the learned model by bootstrapping at each node, much like TD methods, and it also gives us better interpretability, which is critical. (A minimal planning sketch follows this list.)

    • Another point concerns generalization. My view is that a learned model generalizes better than a learned policy. When we learn a policy in one environment and apply it to another, it tends to collapse, because the policy is usually overfitted to the original environment and any wrong action along a trajectory can ruin the whole outcome. But if we learn a model in one environment and use it to predict in a similar environment, it usually performs well, because this is just a case of supervised learning, and data augmentation methods can be applied easily (see the model-fitting sketch after this list). So, in my view, combining model-based methods with tree search can improve interpretability and generalization at the same time.
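    To make the planning idea in the first point concrete, here is a minimal sketch (not from the original post) of Monte Carlo Tree Search over a learned model. It assumes a hypothetical `model.predict(state, action)` interface returning `(next_state, reward)`, a discrete action set, and deterministic predictions; each node keeps a running mean of backed-up returns, which is the bootstrapping mentioned above.

```python
import math
import random


class Node:
    """A search-tree node; its value is a running mean of backed-up returns."""
    def __init__(self, state):
        self.state = state
        self.children = {}   # action -> Node
        self.visits = 0
        self.value = 0.0


def ucb(parent, child, c=1.4):
    """Upper-confidence score used to pick actions while descending the tree."""
    if child.visits == 0:
        return float("inf")
    return child.value + c * math.sqrt(math.log(parent.visits + 1) / child.visits)


def mcts_plan(model, root_state, actions, n_sim=200, depth=10, gamma=0.99):
    """Plan through a learned model by simulating rollouts and averaging returns."""
    root = Node(root_state)
    for _ in range(n_sim):
        node, path, rewards = root, [root], []
        root.visits += 1
        for _ in range(depth):
            # Expand an untried action first; otherwise follow the UCB rule.
            untried = [a for a in actions if a not in node.children]
            a = random.choice(untried) if untried else max(
                actions, key=lambda a: ucb(node, node.children[a]))
            next_state, reward = model.predict(node.state, a)  # assumed model interface
            node = node.children.setdefault(a, Node(next_state))
            path.append(node)
            rewards.append(reward)
        # Back up the discounted return; the running mean at each node is the
        # bootstrapping that keeps the learned model's noise from compounding.
        ret = 0.0
        for i in range(len(rewards) - 1, -1, -1):
            ret = rewards[i] + gamma * ret
            child = path[i + 1]
            child.visits += 1
            child.value += (ret - child.value) / child.visits
    # Act greedily with respect to the root children's backed-up values.
    return max(root.children, key=lambda a: root.children[a].value)
```

    A side benefit for interpretability: the tree itself can be inspected after planning, since each root child stores the estimated return of committing to that action.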
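    And here is a minimal sketch of treating model learning as plain supervised regression on transition tuples, as described in the second point. The linear ridge model and the `fit_dynamics_model` / `predict_next_state` names are illustrative assumptions, not anything from the post; in practice a neural network would replace the linear map, and adding input noise is one simple data-augmentation choice.

```python
import numpy as np


def fit_dynamics_model(states, actions, next_states, reg=1e-3):
    """Fit a linear one-step dynamics model s' ~ W [s; a; 1] by ridge regression.
    This is ordinary supervised learning on transition tuples, so the usual
    tools (validation splits, data augmentation) apply directly."""
    X = np.hstack([states, actions, np.ones((len(states), 1))])  # features [s, a, 1]
    Y = next_states
    # Closed-form ridge solution: W = (X^T X + reg * I)^{-1} X^T Y
    W = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ Y)
    return W


def predict_next_state(W, state, action):
    """Predict the next state with the fitted linear model."""
    x = np.concatenate([state, action, [1.0]])
    return x @ W


if __name__ == "__main__":
    # Synthetic transitions stand in for data collected from a real environment.
    rng = np.random.default_rng(0)
    S = rng.normal(size=(500, 4))                              # states
    A = rng.normal(size=(500, 1))                              # actions
    S_next = S + 0.1 * A + 0.01 * rng.normal(size=S.shape)     # next states
    W = fit_dynamics_model(S, A, S_next)
    print(predict_next_state(W, S[0], A[0]))
```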

  • Original article: https://www.cnblogs.com/initial-h/p/12208038.html