The cost function for logistic regression:
\[J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log\left(h_\theta(x^{(i)})\right) + \left(1-y^{(i)}\right)\log\left(1-h_\theta(x^{(i)})\right)\right]\]
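The cost function above can be sketched directly in NumPy. This is a minimal illustration, assuming NumPy is available and labels y are in {0, 1}; the function names are illustrative:

```python
import numpy as np

def sigmoid(z):
    # Logistic function: 1 / (1 + e^(-z))
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    # Cross-entropy cost J(theta):
    # -(1/m) * sum[ y*log(h) + (1-y)*log(1-h) ]
    m = len(y)
    h = sigmoid(X @ theta)  # h_theta(x) for every example
    return -(1.0 / m) * np.sum(y * np.log(h) + (1 - y) * np.log(1 - h))
```

For example, with theta = 0 every prediction is 0.5, so the cost is log(2) regardless of the labels.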
The gradient descent algorithm:
Repeat {
\[\theta_j := \theta_j - \alpha\frac{\partial}{\partial\theta_j}J(\theta)\]
(simultaneously update all \(\theta_j\))
}
where
\[\frac{\partial}{\partial\theta_j}J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)}\]
So the gradient descent algorithm becomes:
Repeat {
\[\theta_j := \theta_j - \alpha\frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)}\]
}
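The repeat-until-convergence loop above can be written as a vectorized batch update, where all components of theta are updated simultaneously in one step. A minimal NumPy sketch, with illustrative function names and a fixed iteration count standing in for a convergence test:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_descent(X, y, alpha=0.1, iters=1000):
    # Batch update: theta_j := theta_j - alpha*(1/m)*sum((h - y) * x_j)
    # The vectorized step updates every theta_j simultaneously.
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iters):
        grad = (X.T @ (sigmoid(X @ theta) - y)) / m
        theta -= alpha * grad
    return theta
```

On a small separable dataset this drives the predictions to the correct side of 0.5 after a few thousand iterations.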
At first glance this looks identical to the gradient descent update for linear regression, but it is not, because the hypothesis differs:
In linear regression, \[h_\theta(x) = \theta^T x\]
In logistic regression, \[h_\theta(x) = \frac{1}{1 + e^{-\theta^T x}}\]
Logistic regression also benefits from standardizing the features, so that gradient descent converges faster.
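Standardization rescales each feature to zero mean and unit variance. A minimal sketch, assuming NumPy; the function name is illustrative:

```python
import numpy as np

def standardize(X):
    # Rescale each column (feature) to zero mean and unit variance.
    # Return mu and sigma so new examples can be scaled the same way.
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / sigma, mu, sigma
```

Note that the same mu and sigma computed from the training set must be reused when scaling new examples at prediction time.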