  • PyTorch Study Notes 04: Understanding the LSTM Model and Getting Started

    1. LSTM model parameters in PyTorch

    class torch.nn.LSTM(*args, **kwargs)

    Parameter descriptions from the official PyTorch documentation:

    Args:
            input_size: The number of expected features in the input `x`
            hidden_size: The number of features in the hidden state `h`
            num_layers: Number of recurrent layers. E.g., setting ``num_layers=2``
                would mean stacking two LSTMs together to form a `stacked LSTM`,
                with the second LSTM taking in outputs of the first LSTM and
                computing the final results. Default: 1
            bias: If ``False``, then the layer does not use bias weights `b_ih` and `b_hh`.
                Default: ``True``
            batch_first: If ``True``, then the input and output tensors are provided
                as (batch, seq, feature). Default: ``False``
            dropout: If non-zero, introduces a `Dropout` layer on the outputs of each
                LSTM layer except the last layer, with dropout probability equal to
                :attr:`dropout`. Default: 0
            bidirectional: If ``True``, becomes a bidirectional LSTM. Default: ``False``
    
        Inputs: input, (h_0, c_0)
            - **input** of shape `(seq_len, batch, input_size)`: tensor containing the features
              of the input sequence.
              The input can also be a packed variable length sequence.
              See :func:`torch.nn.utils.rnn.pack_padded_sequence` or
              :func:`torch.nn.utils.rnn.pack_sequence` for details.
            - **h_0** of shape `(num_layers * num_directions, batch, hidden_size)`: tensor
              containing the initial hidden state for each element in the batch.
              If the LSTM is bidirectional, num_directions should be 2, else it should be 1.
            - **c_0** of shape `(num_layers * num_directions, batch, hidden_size)`: tensor
              containing the initial cell state for each element in the batch.
    
              If `(h_0, c_0)` is not provided, both **h_0** and **c_0** default to zero.
    
    
        Outputs: output, (h_n, c_n)
            - **output** of shape `(seq_len, batch, num_directions * hidden_size)`: tensor
              containing the output features `(h_t)` from the last layer of the LSTM,
              for each `t`. If a :class:`torch.nn.utils.rnn.PackedSequence` has been
              given as the input, the output will also be a packed sequence.
    
              For the unpacked case, the directions can be separated
              using ``output.view(seq_len, batch, num_directions, hidden_size)``,
              with forward and backward being direction `0` and `1` respectively.
              Similarly, the directions can be separated in the packed case.
            - **h_n** of shape `(num_layers * num_directions, batch, hidden_size)`: tensor
              containing the hidden state for `t = seq_len`.
    
              Like *output*, the layers can be separated using
              ``h_n.view(num_layers, num_directions, batch, hidden_size)`` and similarly for *c_n*.
            - **c_n** of shape `(num_layers * num_directions, batch, hidden_size)`: tensor
              containing the cell state for `t = seq_len`.

    Parameter list:

    • input_size: feature dimension of x; in NLP, the dimensionality of the word vectors (e.g., 100, 200, or 300)
    • hidden_size: feature dimension of the hidden state
    • num_layers: number of stacked LSTM layers; default 1
    • bias: if False, the bias weights b_ih and b_hh are not used; default True
    • batch_first: if True, input and output tensors have shape (batch, seq, feature)
    • dropout: dropout applied to the output of every layer except the last; default 0
    • bidirectional: if True, the LSTM is bidirectional; default False
    • Inputs: input, (h0, c0)
    • Outputs: output, (hn, cn)

    Input data format:
    input    (seq_len, batch, input_size)
    h0        (num_layers * num_directions, batch, hidden_size)
    c0        (num_layers * num_directions, batch, hidden_size)

    Output data format:
    output  (seq_len, batch, hidden_size * num_directions)
    hn        (num_layers * num_directions, batch, hidden_size)
    cn        (num_layers * num_directions, batch, hidden_size)
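    To make these shapes concrete, here is a minimal runnable sketch. The sizes are illustrative: hidden_size=20 is an assumption, while seq_len=40, batch=10, and input_size=100 match the running example discussed below.

    import torch
    import torch.nn as nn

    # hidden_size=20 is an assumed, illustrative value.
    lstm = nn.LSTM(input_size=100, hidden_size=20, num_layers=1)

    seq_len, batch = 40, 10
    x = torch.randn(seq_len, batch, 100)   # (seq_len, batch, input_size)
    h0 = torch.zeros(1, batch, 20)         # (num_layers * num_directions, batch, hidden_size)
    c0 = torch.zeros(1, batch, 20)

    output, (hn, cn) = lstm(x, (h0, c0))
    print(output.shape)  # torch.Size([40, 10, 20])
    print(hn.shape)      # torch.Size([1, 10, 20])
    print(cn.shape)      # torch.Size([1, 10, 20])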

    Every input that PyTorch's LSTM accepts must be a 3-D tensor, and the meaning of each dimension must not be confused. Note that with the default batch_first=False the layout is (seq_len, batch, input_size); the dimension-by-dimension description below follows the (batch, seq, feature) order used when batch_first=True.

    The batch dimension (batch_size) says how many samples are fed to the network at once: how many sentences in an NLP task, or, for stock data, how many windows of time-series data are processed together. In the running example, 10 means 10 sentences are fed to the model at once.

    The sequence dimension reflects the sequence structure, i.e., the length of each sequence: for text, the number of words per sentence. Because the model expects a fixed shape, sentences are usually padded or truncated to a common length. In the running example, 40 means all 10 sentences share the same length of 40 words. For other kinds of sequential data, it is the number of clearly delimited steps; for stock data, it is how many records fall within the chosen time window. This dimension also determines how many time steps the recurrent layer is unrolled over to process the input.

    The feature dimension describes the elements of the input: how many numbers represent each concrete element, i.e., the dimensionality of each word vector, or, for stock data, how many values are collected at each time step (low price, high price, mean price, 5-day average, 10-day average, and so on). In the running example, 100 means each word vector is 100-dimensional.
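    As a sketch of the batch_first=True layout just described (same assumed hidden_size=20): note that batch_first only reorders the input and output tensors; h and c keep their (num_layers * num_directions, batch, hidden_size) shape.

    import torch
    import torch.nn as nn

    lstm_bf = nn.LSTM(input_size=100, hidden_size=20, batch_first=True)

    # 10 sentences, 40 words each, 100-dimensional word vectors.
    x_bf = torch.randn(10, 40, 100)        # (batch, seq, feature)
    output, (hn, cn) = lstm_bf(x_bf)       # h0, c0 default to zeros

    print(output.shape)  # torch.Size([10, 40, 20]) -> (batch, seq, hidden)
    print(hn.shape)      # torch.Size([1, 10, 20])  -> unaffected by batch_first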

    What do h_0 through h_n mean? At each time step, the hidden state is the value the recurrent unit produces from the current input together with the hidden state of the previous time step. It records everything the model has accumulated from all inputs seen up to that moment, and its shape matches the output of a single time step.

    c_0 through c_n are the cell states; intuitively they act as switches (the gates) that decide how much of each unit's state carries over to influence the next time step. Their shape is the same as that of h_0 through h_n.
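    A small sketch (not from the original post) that makes the role of (h, c) tangible: threading the state through the sequence one time step at a time reproduces a single full-sequence call, because (h, c) carries the complete summary of everything seen so far.

    import torch
    import torch.nn as nn

    lstm = nn.LSTM(input_size=100, hidden_size=20)
    x = torch.randn(40, 10, 100)

    # Feed one time step at a time, passing (h, c) along by hand.
    h = torch.zeros(1, 10, 20)
    c = torch.zeros(1, 10, 20)
    outputs = []
    for t in range(40):
        out_t, (h, c) = lstm(x[t:t+1], (h, c))  # one step: (1, 10, 100)
        outputs.append(out_t)

    # One full-sequence call; h0 and c0 default to zeros.
    full_out, (hn, cn) = lstm(x)
    print(torch.allclose(torch.cat(outputs), full_out, atol=1e-6))  # True
    print(torch.allclose(h, hn, atol=1e-6))                         # True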

    Of course, for bidirectional or multi-layer LSTMs, the number of directions and the number of hidden layers must also be factored into these shapes, as the sketch below illustrates.
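    A final sketch of how the shapes change with stacking and bidirectionality (illustrative sizes as before):

    import torch
    import torch.nn as nn

    # Two stacked layers, bidirectional: num_directions = 2.
    lstm = nn.LSTM(input_size=100, hidden_size=20, num_layers=2, bidirectional=True)
    x = torch.randn(40, 10, 100)
    output, (hn, cn) = lstm(x)

    print(output.shape)  # torch.Size([40, 10, 40]) -> hidden_size * num_directions
    print(hn.shape)      # torch.Size([4, 10, 20])  -> num_layers * num_directions
    print(cn.shape)      # torch.Size([4, 10, 20])

    # Separate the directions of the last layer's output, as in the docs:
    dirs = output.view(40, 10, 2, 20)  # (seq_len, batch, num_directions, hidden_size)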

    References:
    https://zhuanlan.zhihu.com/p/41261640
    https://www.zhihu.com/question/41949741/answer/318771336
