Training Very Deep Networks

Rupesh Kumar Srivastava
Klaus Greff
Jürgen Schmidhuber
The Swiss AI Lab IDSIA / USI / SUPSI
{rupesh, klaus, juergen}@idsia.ch

    Abstract
    Theoretical and empirical evidence indicates that the depth of neural networks
    is crucial for their success. However, training becomes more difficult as depth
    increases, and training of very deep networks remains an open problem. Here we
    introduce a new architecture designed to overcome this. Our so-called highway
    networks allow unimpeded information flow across many layers on information
    highways. They are inspired by Long Short-Term Memory recurrent networks and
    use adaptive gating units to regulate the information flow. Even with hundreds of
    layers, highway networks can be trained directly through simple gradient descent.
    This enables the study of extremely deep and efficient architectures.
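The abstract refers to adaptive gating units that regulate information flow across layers. As a rough illustration only (the tanh transform, sigmoid gate, and negative gate bias below are assumptions for the sketch, not details taken from this text), here is a minimal numpy sketch of a gated layer in that spirit: a transform gate T blends a nonlinear transform H(x) with the untouched input x, so that when T is near zero the layer passes its input through essentially unchanged.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_layer(x, W_H, b_H, W_T, b_T):
    """Gated layer sketch: output = H(x) * T(x) + x * (1 - T(x)).

    When the transform gate T is close to 0 the layer simply copies x,
    which is what lets information (and gradients) pass through many layers.
    """
    H = np.tanh(W_H @ x + b_H)        # candidate transform of the input
    T = sigmoid(W_T @ x + b_T)        # transform gate, elementwise in (0, 1)
    return H * T + x * (1.0 - T)      # the carry gate is taken as 1 - T

# Toy usage: 4-dimensional input; the gate bias is set negative so the
# layer starts out close to the identity mapping (an assumed initialization).
rng = np.random.default_rng(0)
d = 4
x = rng.standard_normal(d)
W_H, b_H = 0.1 * rng.standard_normal((d, d)), np.zeros(d)
W_T, b_T = 0.1 * rng.standard_normal((d, d)), -2.0 * np.ones(d)
print(highway_layer(x, W_H, b_H, W_T, b_T))
```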

1 Introduction & Previous Work
    Many recent empirical breakthroughs in supervised machine learning have been achieved through
    large and deep neural networks. Network depth (the number of successive computational layers) has
    played perhaps the most important role in these successes. For instance, within just a few years, the
    top-5 image classification accuracy on the 1000-class ImageNet dataset has increased from ∼84%
    [1] to ∼95% [2, 3] using deeper networks with rather small receptive fields [4, 5]. Other results on
    practical machine learning problems have also underscored the superiority of deeper networks [6]
    in terms of accuracy and/or performance.
    In fact, deep networks can represent certain function classes far more efficiently than shallow ones.
This is perhaps most obvious for recurrent nets, the deepest of them all. For example, the n-bit
parity problem can in principle be learned by a large feedforward net with n binary input units, 1
    output unit, and a single but large hidden layer. But the natural solution for arbitrary n is a recurrent
    net with only 3 units and 5 weights, reading the input bit string one bit at a time, making a single
    recurrent hidden unit flip its state whenever a new 1 is observed [7]. Related observations hold for
    Boolean circuits [8, 9] and modern neural networks [10, 11, 12].
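To make the sequential argument concrete, the sketch below shows the idea behind the recurrent solution: a single state that flips every time a 1 is read computes the parity of the whole string, regardless of its length. This is an illustration of the mechanism only, not the exact 3-unit, 5-weight network construction cited in [7].

```python
def sequential_parity(bits):
    """Read the bit string one bit at a time; a single recurrent state
    flips whenever a 1 is observed, so the final state is the parity."""
    state = 0
    for b in bits:
        state ^= b  # flip on every 1, stay on every 0
    return state

assert sequential_parity([1, 0, 1, 1]) == 1  # three 1s -> odd parity
assert sequential_parity([1, 1, 0, 0]) == 0  # two 1s  -> even parity
```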
