  • The Backpropagation Algorithm

    https://page.mi.fu-berlin.de/rojas/neural/chapter/K7.pdf

    7.1 Learning as gradient descent

    We saw in the last chapter that multilayered networks are capable of computing a wider range of Boolean functions than networks with a single layer of computing units. However, the computational effort needed for finding the correct combination of weights increases substantially when more parameters and more complicated topologies are considered. In this chapter we discuss a popular learning method capable of handling such large learning problems: the backpropagation algorithm. This numerical method was used by different research communities in different contexts, was discovered and rediscovered, until in 1985 it found its way into connectionist AI mainly through the work of the PDP group [382]. It has been one of the most studied and most used algorithms for neural network learning ever since.

    In this chapter we present a proof of the backpropagation algorithm based on a graphical approach in which the algorithm reduces to a graph labeling problem. This method is not only more general than the usual analytical derivations, which handle only the case of special network topologies, but also much easier to follow. It also shows how the algorithm can be efficiently implemented in computing systems.
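    To make "learning as gradient descent" concrete, here is a minimal sketch (not from the chapter; the quadratic loss, initial weight, and learning rate are illustrative assumptions) of repeatedly stepping a single weight against the gradient of its error function:

        # Illustrative gradient descent on E(w) = (w - 3)^2,
        # whose gradient is dE/dw = 2 * (w - 3).
        w = 0.0    # initial weight (arbitrary starting point)
        lr = 0.1   # learning rate (step size), chosen arbitrarily
        for _ in range(100):
            grad = 2.0 * (w - 3.0)   # gradient of the loss at the current w
            w -= lr * grad           # step against the gradient
        print(w)  # converges toward the minimizer w = 3

    Multilayer networks apply the same idea, except that the loss depends on many weights at once, and backpropagation is the procedure that computes all the partial derivatives efficiently.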

    The optimization algorithm repeats a two-phase cycle: propagation and weight update. When an input vector is presented to the network, it is propagated forward through the network, layer by layer, until it reaches the output layer. The output of the network is then compared to the desired output using a loss function, and an error value is calculated for each of the neurons in the output layer. The error values are then propagated from the output back through the network, until each neuron has an associated error value that reflects its contribution to the original output. Backpropagation uses these error values to calculate the gradient of the loss function. In the second phase, this gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the loss function.
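    The two-phase cycle can be sketched for a one-hidden-layer network with sigmoid units and a squared-error loss. This is a minimal NumPy sketch under those assumptions; the layer sizes, the loss, and names such as train_step are illustrative, not the chapter's code:

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def train_step(x, t, W1, W2, lr=0.5):
            # Phase 1: propagate the input forward, layer by layer.
            h = sigmoid(W1 @ x)              # hidden-layer activations
            y = sigmoid(W2 @ h)              # output-layer activations

            # Compare the output with the desired output t under the loss
            # E = 0.5 * ||y - t||^2, so dE/dy = (y - t).
            delta_out = (y - t) * y * (1.0 - y)        # error at each output neuron
            # Propagate the error values backward through the weights.
            delta_hid = (W2.T @ delta_out) * h * (1.0 - h)

            # Phase 2: feed the gradient to the optimizer (plain gradient
            # descent here) to update the weights in place.
            W2 -= lr * np.outer(delta_out, h)
            W1 -= lr * np.outer(delta_hid, x)
            return 0.5 * np.sum((y - t) ** 2)          # loss before the update

        # Example: one update on random data (3 inputs, 4 hidden units, 2 outputs).
        rng = np.random.default_rng(0)
        W1 = rng.normal(size=(4, 3))
        W2 = rng.normal(size=(2, 4))
        loss = train_step(rng.normal(size=3), np.array([0.0, 1.0]), W1, W2)

    The delta terms play the role of the per-neuron error values described above: delta_out comes directly from the loss at the output layer, and delta_hid is obtained by propagating it back through W2.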

  • Original article: https://www.cnblogs.com/rsapaper/p/6269463.html