tf.keras.Sequential
groups a linear stack of layers into a `tf.keras.Model`.
Here are quick starts for beginners and experts, respectively: MNIST recognition.
The key point of the program is how to build the model. Here, we build the tf.keras model using `tf.keras.Sequential`, a subclass of `tf.keras.Model`.
Then, we discuss how to use the `Sequential` model:
A `Sequential` model is appropriate for a plain stack of layers where each layer has exactly one input tensor and one output tensor. Schematically, the following is a `Sequential` model:
```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Define Sequential model with 3 layers
model = keras.Sequential(
    [
        layers.Dense(2, activation="relu", name="layer1"),
        layers.Dense(3, activation="relu", name="layer2"),
        layers.Dense(4, name="layer3"),
    ]
)

# Call model on a test input
x = tf.ones((3, 3))
y = model(x)
```
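The same stack can also be built incrementally with `add()`. The sketch below is equivalent to the list form above; the model name `"sequential_via_add"` is just an illustrative label.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Sketch: the same 3-layer stack built incrementally with add().
model = keras.Sequential(name="sequential_via_add")
model.add(layers.Dense(2, activation="relu", name="layer1"))
model.add(layers.Dense(3, activation="relu", name="layer2"))
model.add(layers.Dense(4, name="layer3"))

# Calling the model on a test input builds and applies all layers.
y = model(tf.ones((3, 3)))
print(y.shape)  # (3, 4)
```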
`Dense` implements the operation: `output = activation(dot(input, kernel) + bias)`, where `activation` is the element-wise activation function passed as the `activation` argument, `kernel` is a weights matrix created by the layer, and `bias` is a bias vector created by the layer (only applicable if `use_bias` is `True`).
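To make the formula concrete, here is a small sketch that compares the layer's output against a manual `activation(dot(input, kernel) + bias)` computation. The sizes (3 samples, 5 features, 4 units) and the ReLU activation are arbitrary choices for illustration.

```python
import numpy as np
import tensorflow as tf

# Sketch: check that Dense computes activation(dot(input, kernel) + bias).
layer = tf.keras.layers.Dense(4, activation="relu")
x = tf.random.normal((3, 5))

y = layer(x)  # the first call builds the kernel and bias, then applies the layer

# Recompute the same operation by hand from the layer's weights.
manual = tf.nn.relu(tf.matmul(x, layer.kernel) + layer.bias)
print(np.allclose(y.numpy(), manual.numpy()))  # True
```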
Arguments:

| Argument | Description |
|---|---|
| `units` | Positive integer, dimensionality of the output space. |
| `activation` | Activation function to use. If you don't specify anything, no activation is applied (i.e. "linear" activation: `a(x) = x`). |
| `use_bias` | Boolean, whether the layer uses a bias vector. |
| `kernel_initializer` | Initializer for the `kernel` weights matrix. |
| `bias_initializer` | Initializer for the bias vector. |
| `kernel_regularizer` | Regularizer function applied to the `kernel` weights matrix. |
| `bias_regularizer` | Regularizer function applied to the bias vector. |
| `activity_regularizer` | Regularizer function applied to the output of the layer (its "activation"). |
| `kernel_constraint` | Constraint function applied to the `kernel` weights matrix. |
| `bias_constraint` | Constraint function applied to the bias vector. |
Input shape:
N-D tensor with shape `(batch_size, ..., input_dim)`. The most common situation would be a 2D input with shape `(batch_size, input_dim)`.

Output shape:
N-D tensor with shape `(batch_size, ..., units)`. For instance, for a 2D input with shape `(batch_size, input_dim)`, the output would have shape `(batch_size, units)`.
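A quick sketch of this shape behavior, using arbitrary sizes (`batch_size=32`, `input_dim=16`, `units=8`): only the last axis is transformed from `input_dim` to `units`.

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(units=8)

# 2D input: (batch_size, input_dim) -> (batch_size, units)
x_2d = tf.ones((32, 16))
print(layer(x_2d).shape)  # (32, 8)

# N-D input: (batch_size, ..., input_dim) -> (batch_size, ..., units)
x_3d = tf.ones((32, 10, 16))
print(layer(x_3d).shape)  # (32, 10, 8)
```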