  • Reference: downloading the mnist.pkl.gz data package and an explanation of its contents

      deeplearning.net/data/mnist/mnist.pkl.gz

      The MNIST dataset consists of handwritten digit images and is divided into 60,000 examples
    for the training set and 10,000 examples for testing. In many papers, as well as in this tutorial,
    the official training set of 60,000 is divided into an actual training set of 50,000 examples and
    10,000 validation examples (for selecting hyper-parameters such as the learning rate and the size
    of the model). All digit images have been size-normalized and centered in a fixed-size image of
    28 x 28 pixels. In the original dataset each pixel of the image is represented by a value between
    0 and 255, where 0 is black, 255 is white, and anything in between is a different shade of grey.
      Here are some examples of MNIST digits:

    [figure: sample images of MNIST digits]
      For convenience we pickled the dataset to make it easier to use in Python. It is available for
    download here. The pickled file represents a tuple of 3 lists: the training set, the validation
    set and the testing set. Each of the three lists is a pair formed from a list of images and a list
    of class labels for each of the images. An image is represented as a 1-dimensional numpy array
    of 784 (28 x 28) float values between 0 and 1 (0 stands for black, 1 for white). The labels are
    numbers between 0 and 9 indicating which digit the image represents. The code block below
    shows how to load the dataset.

    import cPickle, gzip, numpy

    # Load the dataset
    f = gzip.open('mnist.pkl.gz', 'rb')
    train_set, valid_set, test_set = cPickle.load(f)
    f.close()
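
       The snippet above targets Python 2 (cPickle). As an aside not in the original tutorial: under
    Python 3 the same file can be read with the standard pickle module, as long as the Python 2
    pickle is decoded with encoding='latin1'. A minimal sketch:

    import gzip
    import pickle

    # encoding='latin1' is needed because the file was pickled under Python 2
    with gzip.open('mnist.pkl.gz', 'rb') as f:
        train_set, valid_set, test_set = pickle.load(f, encoding='latin1')

    train_x, train_y = train_set
    print(train_x.shape, train_y.shape)       # expected: (50000, 784) (50000,)
    first_image = train_x[0].reshape(28, 28)  # back to a 28 x 28 pixel grid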
    

       When using the dataset, we usually divide it into minibatches (see Stochastic Gradient Descent).
    We encourage you to store the dataset in shared variables and access it based on the minibatch
    index, given a fixed and known batch size. The reason behind shared variables is related to
    using the GPU. There is a large overhead when copying data into the GPU memory. If you
    copied data on request (each minibatch individually when needed), as the code will do if
    you do not use shared variables, then due to this overhead the GPU code will not be much faster
    than the CPU code (maybe even slower). If you have your data in Theano shared variables,
    though, you give Theano the possibility to copy the entire data onto the GPU in a single call
    when the shared variables are constructed. Afterwards the GPU can access any minibatch by
    taking a slice from these shared variables, without needing to copy any information from the
    CPU memory, therefore bypassing the overhead. Because the datapoints and their labels
    are usually of different nature (labels are usually integers while datapoints are real numbers), we
    suggest using different variables for labels and data. We also recommend using different variables
    for the training set, validation set and testing set to make the code more readable (resulting in 6
    different shared variables).
      Since the data is now in one variable, and a minibatch is defined as a slice of that variable,
    it becomes more natural to define a minibatch by indicating its index and its size. In our setup
    the batch size stays constant throughout the execution of the code, therefore a function will
    actually require only the index to identify which datapoints to work on. The code below shows
    how to store your data and how to access a minibatch:

    import theano
    import theano.tensor as T

    def shared_dataset(data_xy):
        """ Function that loads the dataset into shared variables

        The reason we store our dataset in shared variables is to allow
        Theano to copy it into the GPU memory (when code is run on GPU).
        Since copying data into the GPU is slow, copying a minibatch every
        time it is needed (the default behaviour if the data is not in a
        shared variable) would lead to a large decrease in performance.
        """
        data_x, data_y = data_xy
        shared_x = theano.shared(numpy.asarray(data_x, dtype=theano.config.floatX))
        shared_y = theano.shared(numpy.asarray(data_y, dtype=theano.config.floatX))
        # When storing data on the GPU it has to be stored as floats,
        # therefore we will store the labels as ``floatX`` as well
        # (``shared_y`` does exactly that). But during our computations
        # we need them as ints (we use labels as indices, and if they were
        # floats it wouldn't make sense), therefore instead of returning
        # ``shared_y`` we will have to cast it to int. This little hack
        # lets us get around the issue.
        return shared_x, T.cast(shared_y, 'int32')

    test_set_x, test_set_y = shared_dataset(test_set)
    valid_set_x, valid_set_y = shared_dataset(valid_set)
    train_set_x, train_set_y = shared_dataset(train_set)

    batch_size = 500    # size of the minibatch

    # accessing the third minibatch of the training set
    data = train_set_x[2 * batch_size: 3 * batch_size]
    label = train_set_y[2 * batch_size: 3 * batch_size]

       The data has to be stored as floats on the GPU (the right dtype for storing on the GPU is given by
    theano.config.floatX). To get around this shortcoming for the labels, we store them as floats and
    then cast them to ints.
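
       To make the index-based access concrete, here is a minimal sketch (not part of the original
    tutorial) of how a Theano function can take only the minibatch index and let givens substitute
    the corresponding slice of the shared variables. The "cost" below is a meaningless placeholder
    standing in for a real model's loss:

    import theano
    import theano.tensor as T

    index = T.lscalar('index')  # symbolic minibatch index
    x = T.matrix('x')           # symbolic batch of images
    y = T.ivector('y')          # symbolic batch of labels

    # Placeholder "cost" so the sketch runs; a real model's loss goes here.
    cost = T.mean(x) + T.mean(y)

    batch_cost = theano.function(
        inputs=[index],
        outputs=cost,
        givens={
            x: train_set_x[index * batch_size: (index + 1) * batch_size],
            y: train_set_y[index * batch_size: (index + 1) * batch_size],
        },
    )

    print(batch_cost(2))  # the slice for minibatch 2 never leaves the GPU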

    ----------------------------------------------------------------------------------------------------------------
    Note: If you are running your code on the GPU and the dataset you are using is too large to fit in GPU
    memory, the code will crash, so you cannot keep the entire dataset in one shared variable. You can,
    however, store a sufficiently small chunk of your data (several minibatches) in a shared variable and use
    that during training. Once you have gone through the chunk, update the values it stores with the next
    chunk. This way you minimize the number of data transfers between CPU memory and GPU memory.

    ----------------------------------------------------------------------------------------------------------------
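
       A hedged sketch of the chunking idea from the note above, not in the original text. The
    load_chunk helper is a hypothetical stand-in that here just fabricates random data so the
    example runs; in practice it would read chunk i of your dataset from disk:

    import numpy
    import theano

    def load_chunk(i, chunk_size=5000, n_in=784):
        """Hypothetical loader: in practice this would read chunk ``i`` of the
        dataset from disk; here it fabricates random data so the sketch runs."""
        rng = numpy.random.RandomState(i)
        x = rng.rand(chunk_size, n_in).astype(theano.config.floatX)
        y = rng.randint(0, 10, size=chunk_size).astype(theano.config.floatX)
        return x, y

    # Allocate the shared variables once, from the first chunk...
    chunk_x, chunk_y = load_chunk(0)
    shared_x = theano.shared(chunk_x)
    shared_y = theano.shared(chunk_y)

    # ...train on this chunk, then overwrite the same GPU storage with the
    # next chunk instead of allocating new shared variables.
    next_x, next_y = load_chunk(1)
    shared_x.set_value(next_x)
    shared_y.set_value(next_y)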

    These are just my shallow observations; corrections are welcome.
  • Original article: https://www.cnblogs.com/upright/p/4191757.html