Backpropagation Formula Derivations for One- and Two-Layer Neural Networks (from a Matrix Calculus Perspective)

I have recently been following Andrew Ng's course on deep neural networks. While studying shallow (two-layer) networks, I ran into some confusion deriving the backpropagation formulas, and I could not find a systematic derivation online. After learning some matrix-calculus techniques, I finally worked it out. Let us start with the simplest case, logistic regression (a single-layer network).

Gradient descent in logistic regression

Logistic regression with a single training sample

The training sample is $x$, and the network weights are $w$ and $b$, where $x$ is a column vector of dimension $(n_0,1)$, $w$ is a row vector of dimension $(1,n_0)$, and $b$ is a scalar. The network output is
$$a = \sigma(z),\quad z = wx + b$$
where $\sigma(\cdot)$ is the sigmoid function, defined as
$$\sigma(x) = \frac{1}{1+e^{-x}}$$
The network's loss function is defined as:
$$l(a) = -(y\log a+(1-y)\log(1-a))$$
where $y$ is the training label; for logistic regression, $y \in \{0,1\}$.
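As a quick companion to the formulas above, here is a minimal NumPy sketch of the forward pass and the loss. The shapes follow the conventions just stated; all variable names and the concrete sizes are my own, purely illustrative.

```python
import numpy as np

def sigmoid(t):
    # sigma(t) = 1 / (1 + e^{-t})
    return 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(0)
n0 = 4                          # input dimension (illustrative value)
x = rng.normal(size=(n0, 1))    # training sample, column vector (n0, 1)
w = rng.normal(size=(1, n0))    # weights, row vector (1, n0)
b = 0.0                         # bias, scalar
y = 1.0                         # label, 0 or 1

z = (w @ x + b).item()          # z = wx + b, a scalar
a = sigmoid(z)                  # a = sigma(z)
l = -(y * np.log(a) + (1 - y) * np.log(1 - a))   # loss l(a)
```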

1. First, solve for $\frac{\partial l}{\partial z}$:

$$\begin{aligned} \frac{\partial l}{\partial z} &= -(y\log\sigma(z)+(1-y)\log(1-\sigma(z)))^{'}\\ &= -\{y\log^{'}\sigma(z)\,\sigma^{'}(z)+(1-y)\log^{'}(1-\sigma(z))(-\sigma^{'}(z))\}\\ &= -\{y\frac{1}{\sigma(z)}\sigma(z)(1-\sigma(z))-(1-y)\frac{1}{1-\sigma(z)}\sigma(z)(1-\sigma(z))\}\\ &= -\{y(1-\sigma(z))-(1-y)\sigma(z)\}\\ &= \sigma(z)-y \\ &= a-y \end{aligned}$$

2. Next, solve for $\frac{\partial l}{\partial w}$:
   Since $w$ is a row vector, this is the derivative of a scalar with respect to a vector. It can be computed directly from the definition of a scalar-by-vector derivative, i.e.
   $$\frac{\partial l}{\partial w} = [\frac{\partial l}{\partial w_1},\frac{\partial l}{\partial w_2},...,\frac{\partial l}{\partial w_{n_0}}]$$
   where each entry follows from the scalar chain rule by substituting $\frac{\partial l}{\partial w_i} = \frac{\partial l}{\partial z}\frac{\partial z}{\partial w_i}$.
   However, to stay consistent with the vectorized implementation and the two-layer derivation later, we compute it here with matrix-calculus rules instead, even if that is overkill for this case. One thing must be made clear first: the scalar chain rule does not carry over to vectors and must not be applied blindly. I made exactly that mistake and was baffled when my own derivation would not work out. Matrix calculus does, however, have its own analogue of the chain rule, given directly below:

$$dl = tr(\frac{\partial l^{T}}{\partial W}dW)$$

Here $dl$ is the differential of the scalar $l$, $W$ is a matrix, and $tr$ is the trace operator. If $dl$ and $dW$ can be brought into this form, then the factor in front of $dW$ is the derivative of the scalar $l$ with respect to the matrix $W$. A simple example: let $f = a^{T}Xb$, where $f$ is a scalar, $a,b$ are column vectors, and $X$ is a matrix; find $\frac{\partial f}{\partial X}$. The solution is as follows:

$$\begin{aligned} df &= d(a^{T}Xb)\\ &= a^{T}dXb\\ &= tr(a^{T}dXb)\\ &= tr(ba^{T}dX)\\ &= tr((ab^{T})^{T}dX) \end{aligned}$$

Comparing with the formula above gives $\frac{\partial f}{\partial X} = ab^{T}$. The derivation uses matrix-differential identities such as $d(XY) = dX\,Y+X\,dY$, together with trace tricks such as the cyclic property $tr(ABC) = tr(CAB) = tr(BCA)$. For more details on matrix calculus, see the posts by the blogger 叠加态的猫.
3. Finally, solve for $\frac{\partial l}{\partial b}$:
   Since $b$ is a scalar, we immediately get $\frac{\partial l}{\partial b} = \frac{\partial l}{\partial z}$. (All three gradients are collected in the sketch after this list.)
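Continuing the earlier sketch, here are the three gradients just derived, together with a finite-difference check of $\frac{\partial l}{\partial z}$. The check is my own addition, not part of the original derivation.

```python
dz = a - y          # dl/dz = a - y
dw = dz * x.T       # dl/dw = dl/dz * x^T, row vector (1, n0)
db = dz             # dl/db = dl/dz

# Finite-difference check of dl/dz around the current z.
def loss_at(zv):
    av = sigmoid(zv)
    return -(y * np.log(av) + (1 - y) * np.log(1 - av))

eps = 1e-6
dz_numeric = (loss_at(z + eps) - loss_at(z - eps)) / (2 * eps)
assert np.isclose(dz, dz_numeric)
```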

Vectorized logistic regression over m training samples

The input is now a batch of training samples $X$, a matrix of dimension $(n_{0},m)$. The network weights are $\boldsymbol{w}$ and $b$, where $\boldsymbol{w}$ is a row vector of dimension $(1,n_0)$ and $\boldsymbol{b}=\overrightarrow{1}^{T}b$. The network output is
$$\boldsymbol{a} = \sigma(\boldsymbol{z}),\quad \boldsymbol{z} = \boldsymbol{w}X + \boldsymbol{b}$$
where $\boldsymbol{z},\boldsymbol{a}$ are both row vectors of dimension $(1,m)$. The cost function is defined as:

$$J(\boldsymbol{a})=\frac{1}{m}\sum_{i=1}^{m}l(a_i)$$
(the minus sign is already contained in $l$)

or equivalently, in matrix form:

$$J(\boldsymbol{a})=-\frac{1}{m}[\boldsymbol{y}\log\boldsymbol{a}^{T}+(\overrightarrow{1}^{T}-\boldsymbol{y})\log(\overrightarrow{1}-\boldsymbol{a}^{T})]$$

where $\overrightarrow{1}$ denotes the all-ones column vector. (A NumPy version of this cost appears just below.)
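A minimal sketch of this vectorized forward pass and cost, continuing the earlier snippet; the batch size $m$ and the random data are illustrative choices of mine.

```python
m = 8                                                # number of training samples
X = rng.normal(size=(n0, m))                         # inputs stacked as columns, (n0, m)
Y = rng.integers(0, 2, size=(1, m)).astype(float)    # labels, row vector (1, m)

Z = w @ X + b      # z = wX + b, shape (1, m); the scalar b broadcasts like 1^T b
A = sigmoid(Z)     # a = sigma(z), shape (1, m)

# J = -(1/m) [ y log a^T + (1 - y) log(1 - a^T) ], computed element-wise
J = -np.mean(Y * np.log(A) + (1 - Y) * np.log(1 - A))
```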

1. First, solve for $\frac{\partial J}{\partial \boldsymbol{z}}$:
   Whether from the definition of a scalar-by-vector derivative or via the matrix "chain rule", one obtains:

$$\frac{\partial J}{\partial \boldsymbol{z}}=\frac{1}{m}(\boldsymbol{a}-\boldsymbol{y})$$

Note that this derivative of $J$ with respect to $z$ differs slightly from Andrew Ng's result: it carries an extra factor of $\frac{1}{m}$. In my view, strictly following the differentiation rules, the $\frac{1}{m}$ belongs here; Andrew Ng instead attaches the $\frac{1}{m}$ to $dw$ and $db$, so the final iteration is unaffected either way.
2. Next, solve for $\frac{\partial J}{\partial \boldsymbol{w}}$:

$$\begin{aligned} d\boldsymbol{z} &= d(\boldsymbol{w}X+\boldsymbol{b})\\ &= d\boldsymbol{w}\,X \end{aligned}$$

Given that $dJ=tr(\frac{\partial J^{T}}{\partial \boldsymbol{z}}d\boldsymbol{z})$, substituting the expression above yields:

$$\begin{aligned} dJ &= tr(\frac{\partial J^{T}}{\partial \boldsymbol{z}}d\boldsymbol{w}X)\\ &= tr(X\frac{\partial J^{T}}{\partial \boldsymbol{z}}d\boldsymbol{w})\\ &= tr((\frac{\partial J}{\partial \boldsymbol{z}}X^{T})^{T}d\boldsymbol{w}) \end{aligned}$$

Therefore, $\frac{\partial J}{\partial \boldsymbol{w}}=\frac{\partial J}{\partial \boldsymbol{z}}X^{T}$.
3. Finally, solve for $\frac{\partial J}{\partial b}$:

$$\begin{aligned} dJ &= tr(\frac{\partial J^{T}}{\partial \boldsymbol{z}}d\boldsymbol{b})\\ &= tr(\frac{\partial J^{T}}{\partial \boldsymbol{z}}\overrightarrow{1}^{T}db)\\ &= tr(\overrightarrow{1}^{T}\frac{\partial J^{T}}{\partial \boldsymbol{z}})db\\ &= tr((\frac{\partial J}{\partial \boldsymbol{z}}\overrightarrow{1})^{T})db \end{aligned}$$

Therefore, $\frac{\partial J}{\partial b}=\frac{\partial J}{\partial \boldsymbol{z}}\overrightarrow{1}$. (The vectorized sketch below collects all three results.)
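Continuing the batch sketch above, the three vectorized gradients; note how the $\frac{1}{m}$ lives inside $dZ$ here, matching this derivation rather than Andrew Ng's placement.

```python
dZ = (A - Y) / m              # dJ/dz = (1/m)(a - y), row vector (1, m)
dw = dZ @ X.T                 # dJ/dw = dJ/dz X^T, row vector (1, n0)
db = dZ @ np.ones((m, 1))     # dJ/db = dJ/dz 1, equivalently np.sum(dZ)
```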

Gradient descent in a two-layer neural network

The input, hidden, and output layers of the network have $n_0, n_1, n_2=1$ neurons respectively. The hidden-layer activation function is $g(\cdot)$, with parameters $W_1,\boldsymbol{b_1}$, where $W_1$ is a matrix of dimension $(n_1,n_0)$ and $\boldsymbol{b_1}$ is a column vector of dimension $(n_1,1)$. The output layer uses the sigmoid activation, with parameters $\boldsymbol{w_2},b_2$, where $\boldsymbol{w_2}$ is a row vector of dimension $(1,n_1)$ and $b_2$ is a scalar.

Derivation for a single training sample

Given input $\boldsymbol{x}$, the forward pass of the network is:

$$\begin{aligned} \boldsymbol{z_1}&=W_1\boldsymbol{x}+\boldsymbol{b_1}\\ \boldsymbol{a_1}&=g(\boldsymbol{z_1})\\ z_2&=\boldsymbol{w_2}\boldsymbol{a_1}+b_2\\ a_2&=\sigma(z_2) \end{aligned}$$

The loss function is defined as in logistic regression. (A sketch of this forward pass follows.)
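A sketch of this forward pass, continuing the single-sample snippet. The post leaves $g(\cdot)$ generic; $\tanh$ stands in for it here, which is my assumption, and the hidden-layer size is illustrative.

```python
n1 = 3                            # hidden-layer size (illustrative)
W1 = rng.normal(size=(n1, n0))    # hidden weights, matrix (n1, n0)
b1 = np.zeros((n1, 1))            # hidden bias, column vector (n1, 1)
w2 = rng.normal(size=(1, n1))     # output weights, row vector (1, n1)
b2 = 0.0                          # output bias, scalar

g = np.tanh                       # hidden activation g(.), an assumption

z1 = W1 @ x + b1                  # z1 = W1 x + b1, (n1, 1)
a1 = g(z1)                        # a1 = g(z1)
z2 = (w2 @ a1 + b2).item()        # z2 = w2 a1 + b2, scalar
a2 = sigmoid(z2)                  # a2 = sigma(z2)
```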

1. First, solve for $\frac{\partial l}{\partial z_2}$:
   As in logistic regression, $\frac{\partial l}{\partial z_2}=a_2-y$.
2. Next, solve for $\frac{\partial l}{\partial \boldsymbol{w_2}}$:
   As in logistic regression, $\frac{\partial l}{\partial \boldsymbol{w_2}}=\frac{\partial l}{\partial z_2}\boldsymbol{a_1}^{T}$.
3. In the same way, $\frac{\partial l}{\partial b_2}=\frac{\partial l}{\partial z_2}$.
4. Solve for $\frac{\partial l}{\partial \boldsymbol{z_1}}$:

$$\begin{aligned} dl &= tr(\frac{\partial l^{T}}{\partial z_2}d(\boldsymbol{w_2}g(\boldsymbol{z_1})+b_2))\\ &= tr(\frac{\partial l^{T}}{\partial z_2}\boldsymbol{w_2}\,dg(\boldsymbol{z_1}))\\ &= tr(\frac{\partial l^{T}}{\partial z_2}\boldsymbol{w_2}(g^{'}(\boldsymbol{z_1})*d\boldsymbol{z_1}))\\ &= tr((\boldsymbol{w_2}^{T}\frac{\partial l}{\partial z_2})^{T}(g^{'}(\boldsymbol{z_1})*d\boldsymbol{z_1}))\\ &= tr((\boldsymbol{w_2}^{T}\frac{\partial l}{\partial z_2}*g^{'}(\boldsymbol{z_1}))^{T}d\boldsymbol{z_1}) \end{aligned}$$

Therefore $\frac{\partial l}{\partial \boldsymbol{z_1}}=\boldsymbol{w_2}^{T}\frac{\partial l}{\partial z_2}*g^{'}(\boldsymbol{z_1})$, where $*$ denotes element-wise multiplication. The derivation above uses the trace property $tr(A^{T}(B*C))=tr((A*B)^{T}C)$.
5. Solve for $\frac{\partial l}{\partial W_1}$:

$$\begin{aligned} dl &= tr(\frac{\partial l^{T}}{\partial \boldsymbol{z_1}}d\boldsymbol{z_1})\\ &= tr(\frac{\partial l^{T}}{\partial \boldsymbol{z_1}}dW_1\boldsymbol{x})\\ &= tr(\boldsymbol{x}\frac{\partial l^{T}}{\partial \boldsymbol{z_1}}dW_1)\\ &= tr((\frac{\partial l}{\partial \boldsymbol{z_1}}\boldsymbol{x}^{T})^{T}dW_1) \end{aligned}$$

Therefore, $\frac{\partial l}{\partial W_1}=\frac{\partial l}{\partial \boldsymbol{z_1}}\boldsymbol{x}^{T}$.
6. Solve for $\frac{\partial l}{\partial \boldsymbol{b_1}}$:

$$\begin{aligned} dl &= tr(\frac{\partial l^{T}}{\partial \boldsymbol{z_1}}d\boldsymbol{z_1})\\ &= tr(\frac{\partial l^{T}}{\partial \boldsymbol{z_1}}d\boldsymbol{b_1}) \end{aligned}$$

Therefore, $\frac{\partial l}{\partial \boldsymbol{b_1}}=\frac{\partial l}{\partial \boldsymbol{z_1}}$. (All six gradients are collected in the sketch below.)
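The full single-sample backward pass, collecting steps 1 to 6 above; with the $\tanh$ assumption from the forward sketch, $g^{'}(\boldsymbol{z_1}) = 1-\tanh^{2}(\boldsymbol{z_1}) = 1-\boldsymbol{a_1}^{2}$.

```python
dz2 = a2 - y                        # dl/dz2 = a2 - y, scalar
dw2 = dz2 * a1.T                    # dl/dw2 = dl/dz2 * a1^T, (1, n1)
db2 = dz2                           # dl/db2 = dl/dz2
dz1 = (w2.T * dz2) * (1 - a1**2)    # dl/dz1 = w2^T dl/dz2 * g'(z1), (n1, 1)
dW1 = dz1 @ x.T                     # dl/dW1 = dl/dz1 x^T, (n1, n0)
db1 = dz1                           # dl/db1 = dl/dz1, (n1, 1)
```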

Derivation of the vectorized implementation for m training samples

The input is $X$, a matrix of dimension $(n_0,m)$. The forward pass of the network is:

$$\begin{aligned} Z_1&=W_1X+\boldsymbol{b_1}\overrightarrow{1}^{T}\\ A_1&=g(Z_1)\\ \boldsymbol{z_2}&=\boldsymbol{w_2}A_1+b_2\overrightarrow{1}^{T}\\ \boldsymbol{a_2}&=\sigma(\boldsymbol{z_2}) \end{aligned}$$

1. First, solve for $\frac{\partial J}{\partial \boldsymbol{z_2}}$:
   As in logistic regression, $\frac{\partial J}{\partial \boldsymbol{z_2}}=\frac{1}{m}(\boldsymbol{a_2}-\boldsymbol{Y})$.
2. Next, solve for $\frac{\partial J}{\partial \boldsymbol{w_2}}$: as in logistic regression, $\frac{\partial J}{\partial \boldsymbol{w_2}}=\frac{\partial J}{\partial \boldsymbol{z_2}}A_1^{T}$.
3. Solve for $\frac{\partial J}{\partial b_2}$: as in logistic regression, $\frac{\partial J}{\partial b_2}=\frac{\partial J}{\partial \boldsymbol{z_2}}\overrightarrow{1}$.
4. Solve for $\frac{\partial J}{\partial Z_1}$:
   As in the single-sample case, $\frac{\partial J}{\partial Z_1}=\boldsymbol{w_2}^{T}\frac{\partial J}{\partial \boldsymbol{z_2}}*g^{'}(Z_1)$.
5. Solve for $\frac{\partial J}{\partial W_1}$:
   As in the single-sample case, $\frac{\partial J}{\partial W_1}=\frac{\partial J}{\partial Z_1}X^{T}$.
6. Solve for $\frac{\partial J}{\partial \boldsymbol{b_1}}$:

$$\begin{aligned} dJ &= tr(\frac{\partial J^{T}}{\partial Z_1}d\boldsymbol{b_1}\overrightarrow{1}^{T})\\ &= tr(\overrightarrow{1}^{T}\frac{\partial J^{T}}{\partial Z_1}d\boldsymbol{b_1})\\ &= tr((\frac{\partial J}{\partial Z_1}\overrightarrow{1})^{T}d\boldsymbol{b_1}) \end{aligned}$$

Therefore, $\frac{\partial J}{\partial \boldsymbol{b_1}}=\frac{\partial J}{\partial Z_1}\overrightarrow{1}$. (A fully vectorized sketch follows.)
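Putting everything together, a fully vectorized forward and backward pass for $m$ samples, continuing the earlier sketches (still with the $\tanh$ assumption for $g$).

```python
# Forward pass over the whole batch.
Z1 = W1 @ X + b1                    # b1 broadcasts over columns, like b1 * 1^T
A1 = g(Z1)                          # (n1, m)
Z2 = w2 @ A1 + b2                   # (1, m)
A2 = sigmoid(Z2)                    # (1, m)

# Backward pass, matching steps 1 to 6 above.
dZ2 = (A2 - Y) / m                  # dJ/dz2 = (1/m)(a2 - Y)
dw2 = dZ2 @ A1.T                    # dJ/dw2 = dJ/dz2 A1^T, (1, n1)
db2 = np.sum(dZ2)                   # dJ/db2 = dJ/dz2 1, a scalar
dZ1 = (w2.T @ dZ2) * (1 - A1**2)    # dJ/dZ1 = w2^T dJ/dz2 * g'(Z1), (n1, m)
dW1 = dZ1 @ X.T                     # dJ/dW1 = dJ/dZ1 X^T, (n1, n0)
db1 = dZ1 @ np.ones((m, 1))         # dJ/db1 = dJ/dZ1 1, column vector (n1, 1)
```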

Original post: https://www.cnblogs.com/hello-ai/p/10885202.html