  • [Machine Learning] Gradient Descent in Practice I

    Feature scaling: it makes gradient descent run much faster and converge in far fewer iterations.
     
    (Figures omitted: contour plots of the cost function. In the bad cases, unevenly scaled features produce long, narrow contours; in the good cases, scaled features give roughly circular contours.)

    We can speed up gradient descent by having each of our input values in roughly the same range. This is because θ will descend quickly on small ranges and slowly on large ranges, and so will oscillate inefficiently down to the optimum when the variables are very uneven.
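
    As an illustration (not from the original post), here is a minimal Python sketch of this effect on made-up housing data; the data values and variable names are hypothetical. On the raw features the learning rate must be tiny to avoid divergence, while on the scaled features a much larger rate converges quickly.

        import numpy as np

        def gradient_descent(X, y, alpha, iters):
            """Batch gradient descent for linear regression; returns theta and cost history."""
            m, n = X.shape
            theta = np.zeros(n)
            costs = []
            for _ in range(iters):
                err = X @ theta - y                  # prediction error
                theta -= (alpha / m) * (X.T @ err)   # simultaneous update of all theta(j)
                costs.append((err @ err) / (2 * m))  # squared-error cost
            return theta, costs

        # Hypothetical data: one feature with a large range (size), one small (rooms)
        sizes = np.array([2104.0, 1600.0, 2400.0, 1416.0, 3000.0])
        rooms = np.array([3.0, 3.0, 4.0, 2.0, 4.0])
        y = np.array([400.0, 330.0, 369.0, 232.0, 540.0])

        X_raw = np.column_stack([np.ones_like(sizes), sizes, rooms])
        scale = lambda v: (v - v.mean()) / (v.max() - v.min())   # to roughly [-0.5, 0.5]
        X_scaled = np.column_stack([np.ones_like(sizes), scale(sizes), scale(rooms)])

        # Raw features need alpha around 1e-7 to stay stable; scaled features tolerate 0.1
        _, costs_raw = gradient_descent(X_raw, y, alpha=1e-7, iters=100)
        _, costs_scaled = gradient_descent(X_scaled, y, alpha=0.1, iters=100)
        print(costs_raw[-1], costs_scaled[-1])   # the scaled run should reach a much lower cost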

    The way to prevent this is to modify the ranges of our input variables so that they are all roughly the same. Ideally:

    −1 ≤ x(i) ≤ 1

    or

    −0.5 ≤ x(i) ≤ 0.5

    These aren't exact requirements; we are only trying to speed things up. The goal is to get all input variables into roughly one of these ranges, give or take a few.

    Two techniques to help with this are feature scaling and mean normalization. Feature scaling divides the input values by the range (i.e. the maximum value minus the minimum value) of the input variable, resulting in a new range of just 1. Mean normalization subtracts the average value of an input variable from each of its values, resulting in a new average of zero for that variable. To implement both techniques, adjust your input values as shown in this formula:

        x(i) := (x(i) − μ(i)) / s(i)

    where μ(i) is the average of all the values for feature i, and s(i) is the range of values (max − min) or, alternatively, the standard deviation.
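
    As a concrete illustration (a sketch, not code from the original post), the formula can be implemented in a few lines of Python; mean_normalize and the sample matrix are hypothetical:

        import numpy as np

        def mean_normalize(X):
            """Mean-normalize and scale each feature (column) of X:
            subtract the column mean, then divide by the column range (max - min),
            giving features with average 0 in roughly [-0.5, 0.5]."""
            mu = X.mean(axis=0)                  # mu(i): average of feature i
            s = X.max(axis=0) - X.min(axis=0)    # s(i): range of feature i
            return (X - mu) / s, mu, s

        # Example: two features with very different ranges
        X = np.array([[2104.0, 3.0],
                      [1600.0, 3.0],
                      [2400.0, 4.0],
                      [1416.0, 2.0]])
        X_norm, mu, s = mean_normalize(X)
        print(X_norm.mean(axis=0))   # each column's mean is now ~0

    Note that the same μ(i) and s(i) computed on the training set must also be applied to any new example at prediction time.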


  • Original post: https://www.cnblogs.com/Answer1215/p/13546179.html