  • Feature extraction using convolution

    Fully Connected Networks

    In the sparse autoencoder, one design choice that we had made was to "fully connect" all the hidden units to all the input units. On the relatively small images that we were working with (e.g., 8x8 patches for the sparse autoencoder assignment, 28x28 images for the MNIST dataset), it was computationally feasible to learn features on the entire image. However, with larger images (e.g., 96x96 images), learning features that span the entire image (fully connected networks) is very computationally expensive: you would have about 10^4 input units, and assuming you want to learn 100 features, you would have on the order of 10^6 parameters to learn. The feedforward and backpropagation computations would also be about 10^2 times slower, compared to 28x28 images.

    For high-resolution images, fully connecting the input units to the hidden units incurs a very large computational cost.
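    As a quick sanity check on these counts, here is the arithmetic in a few lines of Python (a sketch; the variable names are ours, not from the tutorial):

```python
inputs_small = 28 * 28    # MNIST-sized image: 784 input units
inputs_large = 96 * 96    # larger image: 9216 input units, i.e. about 10^4
hidden = 100              # number of features (hidden units) to learn

fully_connected_weights = inputs_large * hidden
print(fully_connected_weights)   # 921600, on the order of 10^6 parameters
```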

     

    Locally Connected Networks

    One simple solution to this problem is to restrict the connections between the hidden units and the input units, allowing each hidden unit to connect to only a small subset of the input units. Specifically, each hidden unit will connect to only a small contiguous region of pixels in the input.

    This idea of having locally connected networks also draws inspiration from how the early visual system is wired up in biology. Specifically, neurons in the visual cortex have localized receptive fields (i.e., they respond only to stimuli in a certain location).

    Local connectivity, with a biological justification.
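    To make local connectivity concrete, the sketch below computes the activation of a single hidden unit wired to one 8x8 patch of a 96x96 image, so it has 64 weights instead of 9216. This is an illustration under our own assumptions; the weights here are random placeholders, not learned:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
image = rng.random((96, 96))     # stand-in for a 96x96 input image
w = rng.standard_normal((8, 8))  # weights of one locally connected hidden unit
b = 0.0                          # its bias

# This unit is connected only to the patch whose top-left corner is (10, 20).
patch = image[10:18, 20:28]
activation = sigmoid(np.sum(w * patch) + b)
```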

    Convolutions

    Natural images have the property of being stationary, meaning that the statistics of one part of the image are the same as any other part. This suggests that the features that we learn at one part of the image can also be applied to other parts of the image, and we can use the same features at all locations.

    More precisely, having learned features over small (say 8x8) patches sampled randomly from the larger image, we can then apply this learned 8x8 feature detector anywhere in the image. Specifically, we can take the learned 8x8 features and convolve them with the larger image, thus obtaining a different feature activation value at each location in the image.

    To give a concrete example, suppose you have learned features on 8x8 patches sampled from a 96x96 image. Suppose further that this was done with an autoencoder that has 100 hidden units. To get the convolved features, for every 8x8 region of the 96x96 image, that is, the 8x8 regions starting at (1, 1), (1, 2), …, (89, 89), you would extract the 8x8 patch and run it through your trained sparse autoencoder to get the feature activations. This would result in 100 sets of 89x89 convolved features.

    [Figure: Convolution schematic]

    Formally, given some large r × c images x_large, we first train a sparse autoencoder on small a × b patches x_small sampled from these images, learning k features f = σ(W^(1) x_small + b^(1)) (where σ is the sigmoid function), given by the weights W^(1) and biases b^(1) from the visible units to the hidden units. For every a × b patch x_s in the large image, we compute f_s = σ(W^(1) x_s + b^(1)), giving us f_convolved, a k × (r − a + 1) × (c − b + 1) array of convolved features.
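    The definition above translates directly into code. The sketch below is a naive double loop over patch positions, chosen for clarity rather than speed; `convolve_features`, and the random `W1`/`b1` standing in for a trained autoencoder's weights, are our own names and assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convolve_features(x_large, W1, b1, a, b):
    """Compute f_s = sigmoid(W1 @ x_s + b1) for every a x b patch x_s of
    x_large, returning a (k, r-a+1, c-b+1) array of convolved features.
    Assumes patches were flattened row-major when W1 was trained."""
    r, c = x_large.shape
    k = W1.shape[0]
    f_convolved = np.empty((k, r - a + 1, c - b + 1))
    for i in range(r - a + 1):
        for j in range(c - b + 1):
            x_s = x_large[i:i + a, j:j + b].reshape(-1)  # flatten the patch
            f_convolved[:, i, j] = sigmoid(W1 @ x_s + b1)
    return f_convolved

# The 96x96 image / 8x8 patch / 100-feature example from the text:
rng = np.random.default_rng(0)
x_large = rng.random((96, 96))
W1 = rng.standard_normal((100, 8 * 8))  # placeholder for learned weights
b1 = rng.standard_normal(100)           # placeholder for learned biases
f = convolve_features(x_large, W1, b1, 8, 8)
print(f.shape)  # (100, 89, 89): 100 sets of 89x89 convolved features
```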

  • Original article: https://www.cnblogs.com/sprint1989/p/3981615.html