  • CNN (convolutional neural network), RNN (recurrent neural network), DNN (deep neural network)

    How do the internal network structures of a CNN (convolutional neural network), an RNN (recurrent neural network), and a DNN (deep neural network) differ?

    A DNN takes the plain neural network as its base and emphasizes depth; it can be regarded as an umbrella term.
    An RNN is a recurrent network: it is used for sequence data, has a degree of memory, and is often strengthened with an LSTM.
    A CNN focuses on spatial mappings; image data fits this scenario especially well.


    Stanford University CS231n: Convolutional Neural Networks for Visual Recognition

    Author: 肖天睿
    Link: https://www.zhihu.com/question/34681168/answer/84185831
    Source: Zhihu
    Copyright belongs to the author. For commercial reuse, contact the author for authorization; for non-commercial reuse, credit the source.

    Artificial neural networks use networks of activation units (hidden units) to map inputs to outputs. The concept of deep learning applied to this model allows the model to have multiple layers of hidden units, where each layer is fed the output of the previous one. However, dense connections between the layers are not efficient, so people developed models that perform better for specific tasks.
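    As a minimal sketch of what "layers of hidden units fed from the previous layer" means (this toy forward pass is my illustration, not from the original answer; the shapes and ReLU choice are assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def dense(x, W, b):
        """One layer of activation units: affine map followed by a nonlinearity."""
        return np.maximum(0.0, W @ x + b)  # ReLU activation

    # Two hidden layers: each one is fed the output of the previous layer.
    x = rng.normal(size=8)                           # input vector
    W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)  # layer 1 weights
    W2, b2 = rng.normal(size=(4, 16)), np.zeros(4)   # layer 2 weights
    h1 = dense(x, W1, b1)     # first hidden layer
    h2 = dense(h1, W2, b2)    # second hidden layer, fed h1
    ```

    Every unit in one layer connects to every unit in the next, which is exactly the dense connectivity the answer calls inefficient for images and sequences.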

    The whole "convolution" in convolutional neural networks is essentially based on the fact that we're lazy and want to exploit spatial relationships in images. This is a huge deal because we can then group small patches of pixels and effectively "downsample" the image while training multiple instances of small detectors with those patches. Then a CNN just moves those filters around the entire image in a convolution. The outputs are then collected in a pooling layer. The pooling layer is again a down-sampling of the previous feature map. If we have activity on an output for filter a, we don't necessarily care if the activity is for (x, y) or (x+1, y), (x, y+1) or (x+1, y+1), so long as we have activity. So we often just take the highest value of activity on a small grid in the feature map called max pooling.
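    The sliding-filter and max-pooling idea above can be sketched in a few lines of NumPy (a toy valid convolution with stride 1; the 6x6 image and edge-detector kernel are made-up examples, not from the answer):

    ```python
    import numpy as np

    def conv2d(image, kernel):
        """Slide a small filter over the image (valid convolution, stride 1)."""
        kh, kw = kernel.shape
        oh = image.shape[0] - kh + 1
        ow = image.shape[1] - kw + 1
        out = np.zeros((oh, ow))
        for y in range(oh):
            for x in range(ow):
                out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
        return out

    def max_pool(fmap, size=2):
        """Keep only the strongest activation in each size x size cell."""
        oh, ow = fmap.shape[0] // size, fmap.shape[1] // size
        out = np.zeros((oh, ow))
        for y in range(oh):
            for x in range(ow):
                out[y, x] = fmap[y * size:(y + 1) * size,
                                 x * size:(x + 1) * size].max()
        return out

    image = np.arange(36, dtype=float).reshape(6, 6)
    edge = np.array([[1.0, -1.0], [1.0, -1.0]])  # toy vertical-edge detector
    fmap = conv2d(image, edge)   # (5, 5) feature map
    pooled = max_pool(fmap)      # (2, 2) after 2x2 max pooling
    ```

    Because pooling keeps only the maximum over each small grid, activity at (x, y) and at (x+1, y) collapse to the same output, which is the translation tolerance described above.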

    If you think about it from an abstract perspective, the convolution part of a CNN is effectively doing a reasonable way of dimensionality reduction. After a while you can flatten the image and process it through layers in a dense network. Remember to use dropout layers! (because our guys wrote that paper :P)
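    The dropout step recommended above can be sketched as inverted dropout (my illustration of the standard technique; the drop probability and vector size are arbitrary):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def dropout(x, p=0.5, train=True):
        """Inverted dropout: zero each unit with probability p, rescale survivors."""
        if not train:
            return x  # at test time, pass activations through unchanged
        mask = (rng.random(x.shape) >= p) / (1.0 - p)
        return x * mask

    flat = np.ones(8)           # flattened feature map entering the dense layers
    out = dropout(flat, p=0.5)  # some units zeroed, the rest scaled up to 2.0
    ```

    Scaling the surviving units by 1/(1-p) at training time means no rescaling is needed at test time.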

    I won't talk about RNN for now because I don't have enough experience working with them and according to my colleague nobody really knows what's going on inside an LSTM...but they're insanely popular for data with time dependencies.


    Let's talk RNN. Recurrent networks are basically neural networks that evolve through time. Rather than exploiting spatial locality, they exploit sequential, or temporal, locality. Each iteration in an RNN takes an input and its previous hidden state, and produces some new hidden state. The weights are shared across time steps, but we can unroll an RNN through time and get your everyday neural net. Theoretically an RNN has the capacity to store information from arbitrarily far back, but historically people always had problems with the gradients vanishing as we go back further in time, meaning that the gradient signal decays to almost nothing by the time it reaches early time steps, so long-range dependencies effectively cannot be learned with backprop. This was later addressed by the LSTM architecture and subsequent work, and now we train RNNs with BPTT (backpropagation through time). Here's a link that explains LSTMs really well:
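    The recurrence described above ("an input and its previous hidden state produce a new hidden state, with shared weights") can be sketched like this (a vanilla RNN cell of my own construction; sizes and initialization are arbitrary):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_in, n_hid = 3, 4
    W_xh = rng.normal(scale=0.1, size=(n_hid, n_in))   # input -> hidden
    W_hh = rng.normal(scale=0.1, size=(n_hid, n_hid))  # hidden -> hidden
    b = np.zeros(n_hid)

    def rnn_step(x, h):
        """One recurrence: new hidden state from current input and previous state."""
        return np.tanh(W_xh @ x + W_hh @ h + b)

    # Unrolling through time: the SAME weights are applied at every step.
    xs = [rng.normal(size=n_in) for _ in range(5)]  # a length-5 input sequence
    h = np.zeros(n_hid)                             # initial hidden state
    for x in xs:
        h = rnn_step(x, h)                          # final h summarizes the sequence
    ```

    Unrolling this loop gives a 5-layer feed-forward net whose layers all share W_xh and W_hh; repeatedly multiplying by W_hh in the backward pass is exactly where the vanishing gradients come from.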

    Since then RNNs have been applied in many areas of AI, and many people are now designing RNNs with the ability to extract specific information (read: features) from their training examples with attention-based models.

    https://www.zhihu.com/question/34681168

  • Original post: https://www.cnblogs.com/Leo_wl/p/7090788.html