
    Autoencoder
    From Wikipedia 

     

    An autoencoder, autoassociator or Diabolo network[1]:19 is an artificial neural network used for learning efficient codings.[2][3] The aim of an autoencoder is to learn a compressed, distributed representation (encoding) for a set of data, typically for the purpose of dimensionality reduction. The autoencoder builds on the concept of sparse coding proposed in a seminal paper by Olshausen et al.[4] in 1996.
    Contents
    1 Overview
    2 Training
    Overview
    Architecturally, the simplest form of an autoencoder is a feedforward, non-recurrent neural network very similar to the multilayer perceptron (MLP), with an input layer, an output layer and one or more hidden layers connecting them. The difference from the MLP is that in an autoencoder the output layer has the same number of nodes as the input layer, and instead of being trained to predict some target value y given inputs x, an autoencoder is trained to reconstruct its own inputs x. That is, the training algorithm can be summarized as follows (a small code sketch appears after the list):
    For each input x,
    Do a feed-forward pass to compute activations at all hidden layers, then at the output layer to obtain an output x̂
    Measure the deviation of x̂ from the input x (typically using squared error)
    Backpropagate the error through the net and perform weight updates.
    (This algorithm trains one sample at a time, but batch learning is also possible.)
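
    As a concrete illustration of the loop above, here is a minimal sketch of a single-hidden-layer autoencoder trained one sample at a time with squared error and plain gradient descent. It is written in Python with NumPy; the class and parameter names (Autoencoder, n_hidden, lr) are illustrative rather than from the article, and inputs are assumed to be scaled to [0, 1] so a sigmoid output layer is sensible.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        class Autoencoder:
            """Single sigmoid hidden layer; trained per sample as in the steps above."""

            def __init__(self, n_input, n_hidden, lr=0.1, seed=0):
                rng = np.random.default_rng(seed)
                # Small random initial weights; biases start at zero.
                self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_input))  # encoder
                self.b1 = np.zeros(n_hidden)
                self.W2 = rng.normal(0.0, 0.1, (n_input, n_hidden))  # decoder
                self.b2 = np.zeros(n_input)
                self.lr = lr

            def forward(self, x):
                # Feed-forward pass: hidden activations h, then reconstruction x_hat.
                h = sigmoid(self.W1 @ x + self.b1)
                x_hat = sigmoid(self.W2 @ h + self.b2)
                return h, x_hat

            def train_step(self, x):
                h, x_hat = self.forward(x)
                err = x_hat - x                     # deviation of x_hat from x
                loss = 0.5 * np.sum(err ** 2)       # squared-error objective
                # Backpropagate; sigmoid'(z) = a * (1 - a) for activation a.
                delta2 = err * x_hat * (1.0 - x_hat)
                delta1 = (self.W2.T @ delta2) * h * (1.0 - h)
                # Gradient-descent updates for the decoder, then the encoder.
                self.W2 -= self.lr * np.outer(delta2, h)
                self.b2 -= self.lr * delta2
                self.W1 -= self.lr * np.outer(delta1, x)
                self.b1 -= self.lr * delta1
                return loss

    Training then amounts to calling train_step(x) for every input x, typically for several passes (epochs) over the data; the batch variant mentioned above would instead average gradients over a group of samples before updating.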

    If the hidden layers are narrower (have fewer nodes) than the input/output layers, then the activations of the final hidden layer can be regarded as a compressed representation of the input. All the usual activation functions from MLPs can be used in autoencoders; if linear activations are used, or only a single sigmoid hidden layer, then the optimal solution to an autoencoder is strongly related to principal component analysis (PCA).[5] When the hidden layers are larger than the input layer, an autoencoder can potentially learn the identity function and become useless; however, experimental results have shown that such autoencoders might still learn useful features in this case.[1]:19
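
    To make the PCA connection concrete, here is a small hypothetical sketch (Python/NumPy, synthetic data) that projects centered data onto its top-k principal subspace via the SVD. A linear autoencoder with a k-unit bottleneck can at best match this reconstruction error, since its optimal encoder and decoder span the same subspace.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 10))          # 500 samples, 10 features (synthetic)
        X = X - X.mean(axis=0)                  # center the data
        _, _, Vt = np.linalg.svd(X, full_matrices=False)

        k = 3                                   # bottleneck width
        codes = X @ Vt[:k].T                    # "encoder": k-dimensional codes
        X_hat = codes @ Vt[:k]                  # "decoder": rank-k reconstruction
        print(np.sum((X - X_hat) ** 2))         # best squared error a linear AE can reach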
    Autoencoders can also be used to learn overcomplete feature representations of data. They are the precursor to deep belief networks.
    Training
    An autoencoder is often trained using one of the many variants of backpropagation (conjugate gradient method, steepest descent, etc.). Though often reasonably effective, there are fundamental problems with using backpropagation to train networks with many hidden layers. Once the errors are backpropagated to the first few layers, they become minuscule and quite ineffectual, which causes the network to almost always learn to reconstruct the average of all the training data. Though more advanced backpropagation methods (such as the conjugate gradient method) help with this to some degree, the result is still very slow learning and poor solutions. This problem is remedied by using initial weights that approximate the final solution. The process of finding these initial weights is often called pretraining.
    A pretraining technique developed by Geoffrey Hinton for training many-layered "deep" autoencoders involves treating each neighboring set of two layers as a restricted Boltzmann machine, pretraining them to approximate a good solution, and then using backpropagation to fine-tune the whole network.
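    As a rough sketch of one such building block (not Hinton's exact implementation), the following Python/NumPy code trains a Bernoulli restricted Boltzmann machine with one step of contrastive divergence (CD-1); the class and parameter names are illustrative. In the layer-wise scheme, the first RBM is trained on the data, the next on the first one's hidden activations, and so on; the stack is then unrolled into an autoencoder and fine-tuned with backpropagation.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        class RBM:
            """Bernoulli RBM trained with one-step contrastive divergence (CD-1)."""

            def __init__(self, n_visible, n_hidden, lr=0.05, seed=0):
                self.rng = np.random.default_rng(seed)
                self.W = self.rng.normal(0.0, 0.01, (n_visible, n_hidden))
                self.b_v = np.zeros(n_visible)  # visible biases
                self.b_h = np.zeros(n_hidden)   # hidden biases
                self.lr = lr

            def sample_h(self, v):
                # Hidden probabilities given visible units, plus a binary sample.
                p = sigmoid(v @ self.W + self.b_h)
                return p, (self.rng.random(p.shape) < p).astype(float)

            def sample_v(self, h):
                # Visible probabilities given hidden units, plus a binary sample.
                p = sigmoid(h @ self.W.T + self.b_v)
                return p, (self.rng.random(p.shape) < p).astype(float)

            def cd1_step(self, v0):
                # Positive phase: hidden statistics driven by the data.
                ph0, h0 = self.sample_h(v0)
                # Negative phase: one Gibbs step back to the visible units.
                pv1, _ = self.sample_v(h0)
                ph1, _ = self.sample_h(pv1)
                # CD-1 approximation to the log-likelihood gradient.
                self.W += self.lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
                self.b_v += self.lr * (v0 - pv1)
                self.b_h += self.lr * (ph0 - ph1)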

Original source: https://www.cnblogs.com/suanec/p/4805227.html