  • The Backpropagation Algorithm

    https://page.mi.fu-berlin.de/rojas/neural/chapter/K7.pdf

    7.1 Learning as gradient descent

    We saw in the last chapter that multilayered networks are capable of computing a wider range of Boolean functions than networks with a single layer of computing units. However, the computational effort needed for finding the correct combination of weights increases substantially when more parameters and more complicated topologies are considered. In this chapter we discuss a popular learning method capable of handling such large learning problems — the backpropagation algorithm. This numerical method was used by different research communities in different contexts, was discovered and rediscovered, until in 1985 it found its way into connectionist AI mainly through the work of the PDP group [382]. It has been one of the most studied and used algorithms for neural network learning ever since. In this chapter we present a proof of the backpropagation algorithm based on a graphical approach in which the algorithm reduces to a graph labeling problem. This method is not only more general than the usual analytical derivations, which handle only the case of special network topologies, but also much easier to follow. It also shows how the algorithm can be efficiently implemented in computing systems.
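    Viewed this way, learning is just repeated gradient descent on an error function E(w): take a step against the gradient, w ← w − γ ∇E(w). A minimal sketch of that step follows; the quadratic loss, step size, and function names are illustrative assumptions, not from the chapter.

    ```python
    import numpy as np

    # Gradient descent on an error function E(w): repeatedly step against the gradient.
    def gradient_descent(grad_E, w0, learning_rate=0.1, steps=50):
        w = np.asarray(w0, dtype=float)
        for _ in range(steps):
            w = w - learning_rate * grad_E(w)   # w <- w - gamma * dE/dw
        return w

    # Illustrative example (not from the chapter): E(w) = ||w||^2 has gradient 2w,
    # so the iteration converges toward the minimum at the origin.
    w_min = gradient_descent(lambda w: 2 * w, w0=[1.0, -2.0])
    print(w_min)  # close to [0, 0]
    ```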

    The optimization algorithm repeats a two-phase cycle: propagation and weight update. When an input vector is presented to the network, it is propagated forward through the network, layer by layer, until it reaches the output layer. The output of the network is then compared to the desired output using a loss function, and the resulting error value is calculated for each of the neurons in the output layer. The error values are then propagated from the output back through the network, until each neuron has an associated error value that reflects its contribution to the original output. Backpropagation uses these error values to calculate the gradient of the loss function. In the second phase, this gradient is fed to the optimization method, which in turn uses it to update the weights, in an attempt to minimize the loss function.
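    A minimal sketch of that two-phase cycle for a network with a single hidden layer, assuming sigmoid units and a squared-error loss; the shapes, learning rate, and helper names below are illustrative, not taken from the text.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_step(x, target, W1, W2, lr=0.5):
        # Phase 1: propagate the input forward, layer by layer, to the output.
        h = sigmoid(W1 @ x)                              # hidden-layer activations
        y = sigmoid(W2 @ h)                              # network output

        # Compare the output with the desired output (squared-error loss), form the
        # error value at each output neuron, then propagate it back through the net.
        delta_out = (y - target) * y * (1.0 - y)         # error at the output layer
        delta_hid = (W2.T @ delta_out) * h * (1.0 - h)   # error reaching the hidden layer

        # Phase 2: the resulting gradients drive the weight update (plain gradient descent).
        W2 -= lr * np.outer(delta_out, h)
        W1 -= lr * np.outer(delta_hid, x)
        return W1, W2

    # Toy usage: one 2-dimensional input, three hidden units, one output.
    rng = np.random.default_rng(0)
    W1, W2 = rng.normal(size=(3, 2)), rng.normal(size=(1, 3))
    x, target = np.array([0.5, -1.0]), np.array([1.0])
    for _ in range(200):
        W1, W2 = train_step(x, target, W1, W2)
    ```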

  • Original article: https://www.cnblogs.com/rsapaper/p/6269463.html