  Optimal Value Functions and Optimal Policy

    The Optimal Value Function is how much reward the best policy can get from a state s, i.e. the best scenario given state s. It can be defined as:
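
    In the standard notation, where v_\pi(s) is the value of state s under policy \pi, this is the maximum value achievable over all policies:

        v_*(s) = \max_{\pi} v_\pi(s)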

    Value Function and Optimal State-Value Function

    Let's first compare the Value Function with the Optimal Value Function. For example, in the student study case, the value function of the blue circle state under the 50:50 policy is 7.4.

    However, when we consider the Optimal State-Value Function, the 'branches' that may prevent us from getting the best score are pruned. For instance, the optimal scenario for the blue circle state is having a 100% probability of continuing to study rather than going to the pub.
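
    To make this concrete, here is a minimal Python sketch of the student MDP as used in David Silver's course, which this post appears to follow; every state, action, reward and transition probability in the code is an assumption taken from that standard example rather than from the text above. Evaluating the 50:50 policy gives about 7.4 for the blue circle state (called C3 here), while value iteration, which keeps only the best branch, gives 10.

```python
# A minimal sketch of the "student" MDP that this post appears to follow
# (David Silver's RL course example). Every transition probability and
# reward below is an assumption taken from that standard example, not
# from the text above.
GAMMA = 1.0

# MDP[state][action] -> (immediate reward, {next_state: probability});
# an empty successor dict means the terminal "Sleep" state (value 0).
MDP = {
    "FB": {"facebook": (-1, {"FB": 1.0}), "quit": (0, {"C1": 1.0})},
    "C1": {"facebook": (-1, {"FB": 1.0}), "study": (-2, {"C2": 1.0})},
    "C2": {"sleep": (0, {}), "study": (-2, {"C3": 1.0})},
    "C3": {"study": (10, {}), "pub": (1, {"C1": 0.2, "C2": 0.4, "C3": 0.4})},
}

def backup(v, state, action):
    """One-step lookahead: immediate reward plus discounted successor values."""
    reward, transitions = MDP[state][action]
    return reward + GAMMA * sum(p * v[s2] for s2, p in transitions.items())

def evaluate_random_policy(sweeps=1000):
    """Iterative policy evaluation for the 50:50 (uniform random) policy."""
    v = {s: 0.0 for s in MDP}
    for _ in range(sweeps):
        v = {s: sum(backup(v, s, a) for a in MDP[s]) / len(MDP[s]) for s in MDP}
    return v

def value_iteration(sweeps=1000):
    """Value iteration: take the max over actions instead of the expectation."""
    v = {s: 0.0 for s in MDP}
    for _ in range(sweeps):
        v = {s: max(backup(v, s, a) for a in MDP[s]) for s in MDP}
    return v

print(evaluate_random_policy())  # "C3" (the blue circle state) -> about 7.4
print(value_iteration())         # "C3" -> 10.0: always study, never go to the pub
```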

    Optimal Action-Value Function

    Then we move to the Action-Value Function. The following equation shows that the Optimal Action-Value Function comes from the policy that gives the best action return:
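
    In the same notation, with q_\pi(s, a) the action-value function of policy \pi:

        q_*(s, a) = \max_{\pi} q_\pi(s, a)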

    The Optimal Action-Value Function is strongly related to the Optimal State-Value Function by:
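
    The standard form of this relation is a one-step lookahead, writing R_s^a for the expected immediate reward and P_{ss'}^a for the transition probability (notation assumed from Silver's course):

        q_*(s, a) = R_s^a + \gamma \sum_{s' \in S} P_{ss'}^a \, v_*(s')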

    The equation expresses the best return obtainable when action a is taken in state s. Once the action is fixed, the probability of reaching each successor state and the immediate reward are determined, so the only remaining variable is the State-Value Function of the successor states. Therefore, obtaining the Optimal State-Value Function is equivalent to holding the Optimal Action-Value Function.

    Conversely, the Optimal State-Value Function is obtained by picking the best action, i.e. the action whose immediate reward plus the Optimal State-Value Functions of the successor states is highest:
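
    In equation form, the state simply takes the value of its best action:

        v_*(s) = \max_{a} q_*(s, a)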

    Still in the student example, when we know the Optimal State-Value Function, the Optimal Action-Value Function can be calculated as:
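
    As a worked example, under the assumption that the post uses the standard student MDP numbers (optimal values of 6, 8 and 10 for the three class states, a pub reward of +1 and transition probabilities 0.2/0.4/0.4 from the pub; none of these figures appear in the text above), the blue circle state gives:

        q_*(C3, \text{study}) = 10 + 0 = 10
        q_*(C3, \text{pub}) = 1 + 0.2 \times 6 + 0.4 \times 8 + 0.4 \times 10 = 9.4

    so the best action at the blue circle state is to keep studying, consistent with the pruning described earlier.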

    Finally we can derive the best policy from the Optimal Action-Value Function:
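
    A common way to write this greedy, deterministic policy is:

        \pi_*(a \mid s) = \begin{cases} 1 & \text{if } a = \arg\max_{a' \in A} q_*(s, a') \\ 0 & \text{otherwise} \end{cases}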

    This means the policy simply picks the best action at every state rather than sampling from a probability distribution over actions. Such a deterministic policy is the goal of Reinforcement Learning, as it will guide the agent to complete the task.
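
    As a small sketch of this last step, deriving the policy is just an argmax over the Optimal Action-Value Function. The q_star table below is a hypothetical stand-in filled with values assumed from the student example, not given in this post:

```python
# Hypothetical q* table for the student example; the numbers are assumed
# from David Silver's course slides, not taken from this post.
q_star = {
    "FB": {"facebook": 5.0, "quit": 6.0},
    "C1": {"facebook": 5.0, "study": 6.0},
    "C2": {"sleep": 0.0, "study": 8.0},
    "C3": {"pub": 9.4, "study": 10.0},
}

# The optimal policy is deterministic: pick the arg-max action in each state.
pi_star = {s: max(actions, key=actions.get) for s, actions in q_star.items()}
print(pi_star)  # {'FB': 'quit', 'C1': 'study', 'C2': 'study', 'C3': 'study'}
```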
