  • Regularization: Logistic Regression

    The cost function for logistic regression is

    \[J(\theta) = -\left[\frac{1}{m}\sum_{i=1}^{m} y^{(i)}\log h_\theta\left(x^{(i)}\right) + \left(1 - y^{(i)}\right)\log\left(1 - h_\theta\left(x^{(i)}\right)\right)\right]\]

    After adding the regularization term, it becomes

    \[J(\theta) = -\left[\frac{1}{m}\sum_{i=1}^{m} y^{(i)}\log h_\theta\left(x^{(i)}\right) + \left(1 - y^{(i)}\right)\log\left(1 - h_\theta\left(x^{(i)}\right)\right)\right] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2\]

    The gradient descent algorithm is then

    Repeat {

    \[\theta_0 := \theta_0 - \alpha\left[\frac{1}{m}\sum_{i=1}^{m}\left(h_\theta\left(x^{(i)}\right) - y^{(i)}\right)x_0^{(i)}\right]\]

    \[\theta_j := \theta_j - \alpha\left[\frac{1}{m}\sum_{i=1}^{m}\left(h_\theta\left(x^{(i)}\right) - y^{(i)}\right)x_j^{(i)} + \frac{\lambda}{m}\theta_j\right] \quad (j = 1, 2, \ldots, n)\]

    }

    (Note: although these updates look identical to those of regularized linear regression, the hypothesis h(x) differs: logistic regression uses the sigmoid \(h_\theta(x) = 1/(1 + e^{-\theta^T x})\), while linear regression uses \(h_\theta(x) = \theta^T x\). Note also that \(\theta_0\), the bias term, is not regularized.)
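    The cost function and update rule above can be sketched in NumPy as follows. This is a minimal illustration, not code from the original post; the function and variable names (`gradient_step`, `lam` for λ, etc.) are my own, and `X` is assumed to carry a leading column of ones for the bias term:

    ```python
    import numpy as np

    def sigmoid(z):
        # Logistic hypothesis: h_theta(x) = 1 / (1 + e^{-theta^T x})
        return 1.0 / (1.0 + np.exp(-z))

    def cost(theta, X, y, lam):
        # Regularized cross-entropy cost J(theta); theta[0] is not penalized.
        m = X.shape[0]
        h = sigmoid(X @ theta)
        eps = 1e-12  # guard against log(0)
        ce = -(y @ np.log(h + eps) + (1 - y) @ np.log(1 - h + eps)) / m
        return ce + (lam / (2 * m)) * np.sum(theta[1:] ** 2)

    def gradient_step(theta, X, y, alpha, lam):
        # One simultaneous update of all theta_j, with theta_0 unregularized.
        m = X.shape[0]
        h = sigmoid(X @ theta)
        grad = (X.T @ (h - y)) / m      # shared data term for every j
        reg = (lam / m) * theta          # regularization term lambda/m * theta_j
        reg[0] = 0.0                     # ...except j = 0
        return theta - alpha * (grad + reg)
    ```

    A small usage example on a toy 1-D dataset: after enough iterations the learned boundary separates the two classes, and the cost decreases monotonically for a sufficiently small learning rate.
    
    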

  • Original article: https://www.cnblogs.com/qkloveslife/p/9866515.html