  • The Backpropagation Algorithm

    https://page.mi.fu-berlin.de/rojas/neural/chapter/K7.pdf

    7.1 Learning as gradient descent

    We saw in the last chapter that multilayered networks are capable of computing a wider range of Boolean functions than networks with a single layer of computing units. However, the computational effort needed for finding the correct combination of weights increases substantially when more parameters and more complicated topologies are considered. In this chapter we discuss a popular learning method capable of handling such large learning problems — the backpropagation algorithm. This numerical method was used by different research communities in different contexts, was discovered and rediscovered, until in 1985 it found its way into connectionist AI mainly through the work of the PDP group [382]. It has been one of the most studied and used algorithms for neural network learning ever since.

    In this chapter we present a proof of the backpropagation algorithm based on a graphical approach in which the algorithm reduces to a graph labeling problem. This method is not only more general than the usual analytical derivations, which handle only the case of special network topologies, but also much easier to follow. It also shows how the algorithm can be efficiently implemented in computing systems.
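    In symbols (standard gradient-descent notation; the step size γ and the error function E are not spelled out in the excerpt above), learning as gradient descent means repeatedly moving the weight vector w a small distance against the gradient of the network error:

        w_new = w_old - γ ∇E(w_old)

    where γ > 0 is the learning rate. Backpropagation is the procedure that computes ∇E efficiently for a layered network, as described next.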

    The optimization algorithm repeats a two-phase cycle: propagation and weight update. When an input vector is presented to the network, it is propagated forward through the network, layer by layer, until it reaches the output layer. The output of the network is then compared to the desired output using a loss function, and an error value is calculated for each neuron in the output layer. These error values are then propagated from the output back through the network, until each neuron has an associated error value that reflects its contribution to the original output. Backpropagation uses these error values to calculate the gradient of the loss function. In the second phase, this gradient is fed to the optimization method, which in turn uses it to update the weights, in an attempt to minimize the loss function.
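
    To make the two phases concrete, here is a minimal sketch in Python/NumPy of the cycle for a single-hidden-layer network with sigmoid units and squared-error loss. The layer sizes, learning rate lr, and weight matrices W1 and W2 are illustrative choices, not taken from the text above.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Illustrative sizes and learning rate (assumptions, not from the text).
    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 2, 3, 1
    lr = 0.5

    W1 = rng.normal(scale=0.5, size=(n_hidden, n_in))   # input -> hidden weights
    W2 = rng.normal(scale=0.5, size=(n_out, n_hidden))  # hidden -> output weights

    x = np.array([0.1, 0.9])   # example input vector
    t = np.array([1.0])        # desired output

    for step in range(1000):
        # Phase 1: propagate the input forward, layer by layer.
        h = sigmoid(W1 @ x)    # hidden activations
        y = sigmoid(W2 @ h)    # network output

        # Compare output to target with squared-error loss: E = 0.5 * sum((y - t)**2).

        # Phase 2: propagate error values backward through the network.
        delta_out = (y - t) * y * (1 - y)             # error at the output layer
        delta_hid = (W2.T @ delta_out) * h * (1 - h)  # error at the hidden layer

        # The gradient of the loss w.r.t. each weight matrix drives
        # a plain gradient-descent weight update.
        W2 -= lr * np.outer(delta_out, h)
        W1 -= lr * np.outer(delta_hid, x)

    The sigmoid makes the backward phase cheap: its derivative at an activation a is simply a * (1 - a), which is why the error terms above multiply by y * (1 - y) and h * (1 - h).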

  • Original source: https://www.cnblogs.com/rsapaper/p/6269463.html