  • LSTM neural network input and output layers

    Today I finally figured out how the input and output layers of an LSTM network should be set up and connected in TensorFlow and Keras. Writing this down as a reminder.

    https://machinelearningmastery.com/how-to-develop-lstm-models-for-time-series-forecasting/

    Stacked LSTM

    Multiple hidden LSTM layers can be stacked one on top of another in what is referred to as a Stacked LSTM model.
    An LSTM layer requires a three-dimensional input, and by default an LSTM produces a two-dimensional output, an interpretation from the end of the sequence.
    We can address this by having the LSTM output a value for each time step in the input data by setting return_sequences=True on the layer. This gives a 3D output from the hidden LSTM layer that can be fed as input to the next one; a shape check is sketched after the code below.
    We can, therefore, define a Stacked LSTM as follows.

    # define model
    model = Sequential()
    model.add(LSTM(50, activation='relu', return_sequences=True, input_shape=(n_steps, n_features)))
    model.add(LSTM(50, activation='relu'))
    model.add(Dense(1))
    model.compile(optimizer='adam', loss='mse')
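
    The same model written out as a runnable sketch, with the output shape of each layer noted in comments; n_steps=40 and n_features=1 are illustrative values matching the training data shown below:

    from keras.models import Sequential
    from keras.layers import LSTM, Dense

    n_steps, n_features = 40, 1   # illustrative values only

    model = Sequential()
    model.add(LSTM(50, activation='relu', return_sequences=True, input_shape=(n_steps, n_features)))   # -> (None, 40, 50): 3D, one vector per time step
    model.add(LSTM(50, activation='relu'))   # -> (None, 50): 2D, only the last time step
    model.add(Dense(1))                      # -> (None, 1)
    model.compile(optimizer='adam', loss='mse')
    model.summary()   # prints the layer output shapes listed above

    The sequence-to-sequence example below instead keeps the time dimension all the way to the output, so y_train has one target value per time step and the same shape as X_train: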
    
    X_train.shape
    (500, 40, 1)
    y_train.shape
    (500, 40, 1)
    
    from keras.models import Sequential
    from keras import layers
    from keras.optimizers import RMSprop
    
    model = Sequential()
    # return_sequences=True keeps the time dimension, so the Dense(1) layer is applied to every time step
    model.add(layers.GRU(100, input_shape=(None, X_train.shape[-1]), return_sequences=True))
    model.add(layers.Dense(1))
    model.compile(optimizer=RMSprop(), loss='mae')
    history = model.fit(X_train, y_train, steps_per_epoch=25, epochs=20)
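
    Because return_sequences=True keeps the time dimension and Keras applies Dense(1) to every time step of a 3D input, the predictions have the same per-step shape as y_train. A quick sanity check (a sketch, reusing the model and X_train above):

    preds = model.predict(X_train)
    preds.shape   # expected (500, 40, 1), the same shape as y_train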
    
    # TensorFlow 1.x version; reset_graph() is a helper (defined elsewhere) that clears the default graph
    import numpy as np
    import tensorflow as tf

    reset_graph()

    n_steps = 40
    n_inputs = 1
    n_neurons = 100        # must equal the size of the last stacked cell below
    n_outputs = 1          # not defined in the original; 1 matches the single target series
    learning_rate = 0.001  # not given in the original; a typical default

    X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
    y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
    
    # stack three GRU cells; dynamic_rnn returns an output for every time step: [batch_size, n_steps, 100]
    num_units = [500, 200, 100]
    cells = [tf.nn.rnn_cell.GRUCell(num_units=n) for n in num_units]
    stacked_rnn_cell = tf.nn.rnn_cell.MultiRNNCell(cells)
    rnn_outputs, states = tf.nn.dynamic_rnn(stacked_rnn_cell, X, dtype=tf.float32)
    
    # Flatten out the time dimension, apply a single Dense layer, then reshape n_steps back in:
    # rnn_outputs:         [batch_size, n_steps, n_neurons]
    # stacked_rnn_outputs: [batch_size * n_steps, n_neurons]
    # stacked_outputs:     [batch_size * n_steps, n_outputs]
    # outputs:             [batch_size, n_steps, n_outputs]
    
    stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons])
    stacked_outputs = tf.layers.dense(stacked_rnn_outputs, n_outputs)
    outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])
    
    loss = tf.reduce_mean(tf.square(outputs - y))
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
    training_op = optimizer.minimize(loss)
    
    init = tf.global_variables_initializer()
    saver = tf.train.Saver()
    
    n_iterations = 5000
    batch_size = 100
    
    with tf.Session() as sess:
        init.run()
        for iteration in range(n_iterations):
            X_batch, y_batch = next_batch(batch_size, n_steps)   # next_batch() is a data helper defined elsewhere (not shown)
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
            if iteration % 100 == 0:
                mse = loss.eval(feed_dict={X: X_batch, y: y_batch})
                print(iteration, "\tMSE:", mse)
        
        X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))   # time_series() and t_instance are data helpers defined elsewhere (not shown)
        y_pred = sess.run(outputs, feed_dict={X: X_new})
        
        saver.save(sess, "./my_time_series_model")
    
    • Unlike TensorFlow's dynamic_rnn, which returns an output for every time step, the LSTM layer in Keras outputs only the last time step by default.
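
    A minimal sketch of that difference (illustrative layer sizes, not from the original post):

    from keras.models import Sequential
    from keras.layers import LSTM

    m_last = Sequential([LSTM(32, input_shape=(40, 1))])                          # default: last time step only
    m_seq  = Sequential([LSTM(32, input_shape=(40, 1), return_sequences=True)])   # one output per time step
    m_last.output_shape   # (None, 32)
    m_seq.output_shape    # (None, 40, 32)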
  • Original post: https://www.cnblogs.com/yaos/p/9970793.html