  • The Backpropagation Algorithm

    https://page.mi.fu-berlin.de/rojas/neural/chapter/K7.pdf

    7.1 Learning as gradient descent

    We saw in the last chapter that multilayered networks are capable of computing a wider range of Boolean functions than networks with a single layer of computing units. However, the computational effort needed to find the correct combination of weights increases substantially when more parameters and more complicated topologies are considered. In this chapter we discuss a popular learning method capable of handling such large learning problems: the backpropagation algorithm. This numerical method was used by different research communities in different contexts, and was discovered and rediscovered, until in 1985 it found its way into connectionist AI mainly through the work of the PDP group [382]. It has been one of the most studied and used algorithms for neural network learning ever since. In this chapter we present a proof of the backpropagation algorithm based on a graphical approach in which the algorithm reduces to a graph labeling problem. This method is not only more general than the usual analytical derivations, which handle only the case of special network topologies, but also much easier to follow. It also shows how the algorithm can be efficiently implemented in computing systems.
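
    The core idea of learning as gradient descent can be made concrete in a few lines of code. The sketch below fits a single linear unit to noisy data by repeatedly stepping the weights against the gradient of a mean-squared-error function. It is a minimal illustration, not code from the chapter; the data, learning rate, and step count are arbitrary assumptions.

    import numpy as np

    # Toy data for a single linear unit y = w*x + b.
    # (Illustrative values, not taken from the chapter.)
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=50)
    y = 2.0 * x + 0.5 + rng.normal(scale=0.1, size=50)

    w, b = 0.0, 0.0
    eta = 0.1  # learning rate (assumed)

    for step in range(200):
        y_hat = w * x + b
        err = y_hat - y
        # Gradient of the error function E = mean(err**2) / 2
        grad_w = np.mean(err * x)
        grad_b = np.mean(err)
        # Gradient descent: move the weights against the gradient.
        w -= eta * grad_w
        b -= eta * grad_b

    print(w, b)  # approaches 2.0 and 0.5

    Backpropagation generalizes exactly this update to networks with hidden layers, where the gradient can no longer be written down in one line but must be propagated backward through the network.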

    The optimization algorithm repeats a two-phase cycle, propagation and weight update. When an input vector is presented to the network, it is propagated forward through the network, layer by layer, until it reaches the output layer. The output of the network is then compared to the desired output, using a loss function. The resulting error value is calculated for each of the neurons in the output layer. The error values are then propagated from the output back through the network, until each neuron has an associated error value that reflects its contribution to the original output. Backpropagation uses these error values to calculate the gradient of the loss function. In the second phase, this gradient is fed to the optimization method, which in turn uses it to update the weights, in an attempt to minimize the loss function.
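
    A minimal sketch of this two-phase cycle, for a tiny two-layer sigmoid network trained on XOR with a squared-error loss. The network size, learning rate, loss, and initialization are illustrative assumptions, not prescribed by the text; the forward pass, backward error propagation, and weight update correspond to the phases just described.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Toy network: 2 inputs -> 4 hidden sigmoid units -> 1 sigmoid output,
    # trained on XOR. All sizes and constants here are assumed for the sketch.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)

    rng = np.random.default_rng(1)
    W1 = rng.normal(scale=1.0, size=(2, 4))
    b1 = np.zeros(4)
    W2 = rng.normal(scale=1.0, size=(4, 1))
    b2 = np.zeros(1)
    eta = 0.5

    for epoch in range(10000):
        # Phase 1: forward propagation, layer by layer.
        h = sigmoid(X @ W1 + b1)   # hidden activations
        y = sigmoid(h @ W2 + b2)   # network output

        # Compare output to target: squared-error loss E = 0.5 * sum((y - T)**2).
        # Phase 2: propagate error values backward through the network.
        delta_out = (y - T) * y * (1 - y)             # error at output units
        delta_hid = (delta_out @ W2.T) * h * (1 - h)  # error at hidden units

        # Weight update: step against the gradient of the loss.
        W2 -= eta * h.T @ delta_out
        b2 -= eta * delta_out.sum(axis=0)
        W1 -= eta * X.T @ delta_hid
        b1 -= eta * delta_hid.sum(axis=0)

    print(np.round(y, 2))  # approaches the XOR targets 0, 1, 1, 0

    Note how each delta term is exactly the "associated error value" of the paragraph above: the output deltas come from the loss, and the hidden deltas are obtained by propagating them backward through the weights.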

  • Original article: https://www.cnblogs.com/rsapaper/p/6269463.html