  • TensorFlow Keras Tutorials 01

    tf.keras.Sequential

    groups a linear stack of layers into a tf.keras.Model.

    Here are quick starts for beginners and experts respectively: MNIST recognition.
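
    For reference, a minimal sketch along the lines of the beginner quick start, built with tf.keras.Sequential (standard tf.keras APIs; the layer sizes and epoch count are just illustrative):

    # Beginner-style MNIST quick start with a small Sequential classifier
    import tensorflow as tf

    # Load the MNIST digits and scale pixel values to [0, 1]
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # Stack the layers with tf.keras.Sequential
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10),
    ])

    # Train on logits with sparse categorical cross-entropy, then evaluate
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )
    model.fit(x_train, y_train, epochs=5)
    model.evaluate(x_test, y_test)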

    So the key point of the program is how to build the model. Here we build the tf.keras model with tf.keras.Sequential, which is a subclass of tf.keras.Model.

    Next, we talk about how to use a Sequential model:

    A Sequential model is appropriate for a plain stack of layers where each layer has exactly one input tensor and one output tensor.

    Schematically, consider the following Sequential model:

    # Needed imports
    import tensorflow as tf
    from tensorflow import keras
    from tensorflow.keras import layers

    # Define a Sequential model with 3 layers
    model = keras.Sequential(
        [
            layers.Dense(2, activation="relu", name="layer1"),
            layers.Dense(3, activation="relu", name="layer2"),
            layers.Dense(4, name="layer3"),
        ]
    )
    # Call model on a test input
    x = tf.ones((3, 3))
    y = model(x)
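
    The same computation can be written without Sequential by calling the layers one after the other, which makes the "plain stack of layers" structure explicit (a minimal sketch reusing the layer definitions above):

    # Create the three Dense layers and call them in order on a test input
    layer1 = layers.Dense(2, activation="relu", name="layer1")
    layer2 = layers.Dense(3, activation="relu", name="layer2")
    layer3 = layers.Dense(4, name="layer3")

    x = tf.ones((3, 3))
    y = layer3(layer2(layer1(x)))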

    Dense implements the operation: output = activation(dot(input, kernel) + bias) where activation is the element-wise activation function passed as the activation argument, kernel is a weights matrix created by the layer, and bias is a bias vector created by the layer (only applicable if use_bias is True).
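
    As a sanity check, this operation can be reproduced by hand from the layer's weights (a minimal sketch; the layer builds its kernel and bias on the first call):

    # Build a Dense layer by calling it once, then redo its computation manually
    dense = layers.Dense(4, activation="relu")
    x = tf.ones((3, 3))
    y = dense(x)  # creates kernel of shape (3, 4) and bias of shape (4,)

    y_manual = tf.nn.relu(tf.matmul(x, dense.kernel) + dense.bias)
    # y_manual equals y: output = activation(dot(input, kernel) + bias)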

    units: Positive integer, dimensionality of the output space.
    activation: Activation function to use. If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).
    use_bias: Boolean, whether the layer uses a bias vector.
    kernel_initializer: Initializer for the kernel weights matrix.
    bias_initializer: Initializer for the bias vector.
    kernel_regularizer: Regularizer function applied to the kernel weights matrix.
    bias_regularizer: Regularizer function applied to the bias vector.
    activity_regularizer: Regularizer function applied to the output of the layer (its "activation").
    kernel_constraint: Constraint function applied to the kernel weights matrix.
    bias_constraint: Constraint function applied to the bias vector.
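
    A quick sketch of a Dense layer that sets several of these arguments (the particular values are only illustrative):

    # A Dense layer configured with an initializer, a regularizer and a constraint
    regularized = layers.Dense(
        units=64,
        activation="relu",
        use_bias=True,
        kernel_initializer="glorot_uniform",
        bias_initializer="zeros",
        kernel_regularizer=keras.regularizers.l2(1e-4),
        kernel_constraint=keras.constraints.MaxNorm(3),
    )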

    Input shape:

    N-D tensor with shape: (batch_size, ..., input_dim). The most common situation would be a 2D input with shape (batch_size, input_dim).

    Output shape:

    N-D tensor with shape: (batch_size, ..., units). For instance, for a 2D input with shape (batch_size, input_dim), the output would have shape (batch_size, units).
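
    A short sketch illustrating both shapes (the batch size, input dimension and units are arbitrary example values):

    # Dense(8) maps the last axis from input_dim to units and keeps the other axes
    dense8 = layers.Dense(8)
    x2d = tf.zeros((32, 16))        # (batch_size, input_dim)
    print(dense8(x2d).shape)        # (32, 8)      -> (batch_size, units)

    x3d = tf.zeros((32, 10, 16))    # (batch_size, ..., input_dim)
    print(dense8(x3d).shape)        # (32, 10, 8)  -> (batch_size, ..., units)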

    *Relationship between TensorFlow and Keras: Keras is the high-level API of TensorFlow, and tf.keras is TensorFlow's implementation of the Keras API.
