  • Course 4 (Convolutional Neural Networks), Week 2 (Deep convolutional models: case studies) - 2. Programming assignments: Keras Tutorial

    Keras tutorial - the Happy House

    Welcome to the first assignment of week 2. In this assignment, you will:

    1. Learn to use Keras, a high-level neural networks API (programming framework), written in Python and capable of running on top of several lower-level frameworks including TensorFlow and CNTK.
    2. See how you can, in a couple of hours, build a deep learning algorithm.

    Why are we using Keras? Keras was developed to enable deep learning engineers to build and experiment with different models very quickly. Just as TensorFlow is a higher-level framework than Python, Keras is an even higher-level framework and provides additional abstractions. Being able to go from idea to result with the least possible delay is key to finding good models. However, Keras is more restrictive than the lower-level frameworks, so there are some very complex models that you can implement in TensorFlow but not (without more difficulty) in Keras. That being said, Keras will work fine for many common models.

    In this exercise, you'll work on the "Happy House" problem, which we'll explain below. Let's load the required packages and solve the problem of the Happy House!

    【code】
    import numpy as np
    from keras import layers
    from keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D
    from keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D
    from keras.models import Model
    from keras.preprocessing import image
    from keras.utils import layer_utils
    from keras.utils.data_utils import get_file
    from keras.applications.imagenet_utils import preprocess_input
    import pydot
    from IPython.display import SVG
    from keras.utils.vis_utils import model_to_dot
    from keras.utils import plot_model
    from kt_utils import *
    
    import keras.backend as K
    K.set_image_data_format('channels_last')
    import matplotlib.pyplot as plt
    from matplotlib.pyplot import imshow
    
    %matplotlib inline
    

      

    Note: As you can see, we've imported a lot of functions from Keras. You can use them easily just by calling them directly in the notebook. Ex: X = Input(...) or X = ZeroPadding2D(...).
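
    For instance, a minimal sketch of this functional-API pattern (the shapes below are illustrative only): each layer constructor returns a layer object, which is then immediately called on a tensor.

        X = Input(shape=(64, 64, 3))          # a placeholder tensor for 64x64 RGB images
        X = ZeroPadding2D(padding=(1, 1))(X)  # the ZeroPadding2D(...) object is called on the tensor X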
     

    1 - The Happy House

    For your next vacation, you decided to spend a week with five of your friends from school. It is a very convenient house with many things to do nearby. But the most important benefit is that everybody has committed to be happy when they are in the house. So anyone wanting to enter the house must prove their current state of happiness.

    Figure 1: The Happy House

     
     

    As a deep learning expert, to make sure the "Happy" rule is strictly applied, you are going to build an algorithm that uses pictures from the front door camera to check if the person is happy or not. The door should open only if the person is happy.

    You have gathered pictures of your friends and yourself, taken by the front-door camera. The dataset is labeled.

     

    Run the following code to normalize the dataset and learn about its shapes.

     
    【code】
    X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
    
    # Normalize image vectors
    X_train = X_train_orig/255.
    X_test = X_test_orig/255.
    
    # Reshape
    Y_train = Y_train_orig.T
    Y_test = Y_test_orig.T
    
    print ("number of training examples = " + str(X_train.shape[0]))
    print ("number of test examples = " + str(X_test.shape[0]))
    print ("X_train shape: " + str(X_train.shape))
    print ("Y_train shape: " + str(Y_train.shape))
    print ("X_test shape: " + str(X_test.shape))
    print ("Y_test shape: " + str(Y_test.shape))
    

    【result】

    number of training examples = 600
    number of test examples = 150
    X_train shape: (600, 64, 64, 3)
    Y_train shape: (600, 1)
    X_test shape: (150, 64, 64, 3)
    Y_test shape: (150, 1)
    

    Details of the "Happy" dataset:

    • Images are of shape (64,64,3)
    • Training: 600 pictures
    • Test: 150 pictures

    It is now time to solve the "Happy" Challenge.

      

    2 - Building a model in Keras

    Keras is very good for rapid prototyping. In just a short time you will be able to build a model that achieves outstanding results.

    Here is an example of a model in Keras:

    # GRADED FUNCTION: HappyModel
    
    def HappyModel(input_shape):
        """
        Implementation of the HappyModel.
        
        Arguments:
        input_shape -- shape of the images of the dataset
    
        Returns:
        model -- a Model() instance in Keras
        """
        
        ### START CODE HERE ###
        # Feel free to use the suggested outline in the text above to get started, and run through the whole
        # exercise (including the later portions of this notebook) once. Then come back and try out other
        # network architectures as well. 
        # Define the input placeholder as a tensor with shape input_shape. Think of this as your input image!
        
        ### Input ###
        # Input layer notes:
        #  (1) In 'channels_first' mode, the input is a 4D tensor of shape (samples, channels, rows, cols)
        #  (2) In 'channels_last' mode, the input is a 4D tensor of shape (samples, rows, cols, channels)
        X_input = Input(shape=input_shape)
    
        ### Zero-Padding: pads the border of X_input with zeroes ###
        # ZeroPadding2D parameter notes:
        # padding: an integer tuple, the number of zeros padded at the start and end of the padded axes
        X = ZeroPadding2D(padding=(3, 3))(X_input)
    
        ### CONV -> BN -> RELU Block applied to X ###
        # Conv2D parameter notes:
        #  (1) filters: the number of convolution kernels (i.e. the output channel dimension)
        #  (2) kernel_size: a single integer or a list/tuple of two integers, the width and height of the kernels; a single integer means the same length in every spatial dimension
        #  (3) strides: a single integer or a list/tuple of two integers, the stride of the convolution; a single integer means the same stride in every spatial dimension
        X = Conv2D(filters=32, kernel_size=(3, 3), strides = (1, 1), name = 'conv0')(X) 
        
        ### BatchNormalization: renormalizes the previous layer's activations over each batch, so that the output has mean close to 0 and standard deviation close to 1 ###
        # BatchNormalization parameter notes:
        # axis: an integer, the axis to normalize, usually the feature axis (here, the axis corresponding to channels).
        # For example, after a 2D convolution with data_format="channels_first" you would usually set axis=1; after a 2D convolution with data_format="channels_last" you would usually set axis=3
        X = BatchNormalization(axis = 3, name = 'bn0')(X)
        
        ### Activation layer ###
        # Activation parameter notes:
        # activation: the activation function to use, either the name of a predefined activation or a TensorFlow/Theano function
        X = Activation('relu')(X)  
    
        ### MAXPOOL layer ###
        # MaxPooling2D parameter notes:
        # pool_size: an integer or a tuple of two integers, the factors by which to downscale in the (vertical, horizontal) directions; (2, 2) halves the image in both dimensions, and a single integer means the same factor in both
        X = MaxPooling2D(pool_size=(2, 2), name='max_pool')(X)  # pool_size=(2, 2) halves the image along both rows and cols
    
        ### Flatten layer ###
        # FLATTEN X (means convert it to a vector) + FULLYCONNECTED
        X = Flatten()(X)  # Flatten collapses the multi-dimensional input into one dimension
        
        ### Dense layer ###
        # Dense parameter notes:
        # Dense is the standard fully-connected layer; it computes output = activation(dot(input, kernel) + bias), where activation is the element-wise activation function, kernel is the layer's weight matrix, and bias is added only when use_bias=True.
        #  (1) units: a positive integer, the output dimension of the layer
        #  (2) activation: the activation function, a predefined name or an element-wise Theano function; if unspecified, no activation is applied (i.e. the linear activation a(x) = x)
        X = Dense(units=1, activation='sigmoid', name='fc')(X)   # units=1 is the output dimension of this layer
    
        # Create model. This creates your Keras model instance, you'll use this instance to train/test the model.
        model = Model(inputs = X_input, outputs = X, name='HappyModel')
        
        ### END CODE HERE ###
        
        return model
    

    Note that Keras uses a different convention with variable names than we've previously used with numpy and TensorFlow. In particular, rather than creating and assigning a new variable on each step of forward propagation such as X, Z1, A1, Z2, A2, etc. for the computations for the different layers, in Keras code each line above just reassigns X to a new value using X = .... In other words, during each step of forward propagation, we are just writing the latest value in the computation into the same variable X. The only exception was X_input, which we kept separate and did not overwrite, since we needed it at the end to create the Keras model instance (model = Model(inputs = X_input, ...) above).
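
    To make the contrast concrete, here is a minimal sketch of the two conventions (the layer choices are illustrative only):

        # numpy/TensorFlow style: a new variable for every intermediate value
        Z1 = Conv2D(32, (3, 3))(X_input)
        A1 = Activation('relu')(Z1)

        # Keras style: keep overwriting X; only X_input stays separate
        X = Conv2D(32, (3, 3))(X_input)
        X = Activation('relu')(X)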

    Exercise: Implement a HappyModel(). This assignment is more open-ended than most. We suggest that you start by implementing a model using the architecture we suggest, and run through the rest of this assignment using that as your initial model. But after that, come back and take initiative to try out other model architectures. For example, you might take inspiration from the model above, but then vary the network architecture and hyperparameters however you wish. You can also use other functions such as AveragePooling2D(), GlobalMaxPooling2D(), and Dropout().

    Note: You have to be careful with your data's shapes. Use what you've learned in the videos to make sure your convolutional, pooling and fully-connected layers are adapted to the volumes you're applying them to.
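
    For example, tracing the shapes through the example model above (the conv output size follows floor((n + 2p - f)/s) + 1):

        (64, 64, 3)    input image
        (70, 70, 3)    after ZeroPadding2D((3, 3)):  64 + 2*3 = 70
        (68, 68, 32)   after Conv2D(32, (3, 3), strides=(1, 1)):  (70 - 3)/1 + 1 = 68
        (68, 68, 32)   after BatchNormalization and Activation (shape unchanged)
        (34, 34, 32)   after MaxPooling2D((2, 2)):  68 / 2 = 34
        (36992,)       after Flatten():  34 * 34 * 32 = 36992
        (1,)           after Dense(units=1)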

      

     【code】
    # GRADED FUNCTION: HappyModel
    
    def HappyModel(input_shape):
        """
        Implementation of the HappyModel.
        
        Arguments:
        input_shape -- shape of the images of the dataset
    
        Returns:
        model -- a Model() instance in Keras
        """
        
        ### START CODE HERE ###
        # Feel free to use the suggested outline in the text above to get started, and run through the whole
        # exercise (including the later portions of this notebook) once. Then come back and try out other
        # network architectures as well. 
        # Define the input placeholder as a tensor with shape input_shape. Think of this as your input image!
        
        X_input = Input(input_shape)
    
        X = Conv2D(filters=32, kernel_size=(3, 3), strides = (1, 1), padding='same', name = 'conv0')(X_input)
        X = BatchNormalization(axis = 3, name = 'bn0')(X)
        X = Activation('relu')(X) 
    
        X = MaxPooling2D(pool_size=(2, 2), name='max_pool0')(X)
        
        X = Flatten()(X)
        X = Dense(units=1, activation='sigmoid', name='fc')(X)
         
        # Create model. This creates your Keras model instance, you'll use this instance to train/test the model.
        model = Model(inputs = X_input, outputs = X, name='HappyModel')
        
        ### END CODE HERE ###
        
        return model
    

      

    You have now built a function to describe your model. To train and test this model, there are four steps in Keras:

    1. Create the model by calling the function above
    2. Compile the model by calling model.compile(optimizer = "...", loss = "...", metrics = ["accuracy"])
    3. Train the model on train data by calling model.fit(x = ..., y = ..., epochs = ..., batch_size = ...)
    4. Test the model on test data by calling model.evaluate(x = ..., y = ...)

    If you want to know more about model.compile(), model.fit(), model.evaluate() and their arguments, refer to the official Keras documentation.
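
    Putting the four steps together, a minimal end-to-end sketch could look like this (the optimizer, epochs and batch size are illustrative choices, not the graded answers):

        happyModel = HappyModel((64, 64, 3))                                    # 1. create
        happyModel.compile(optimizer = "adam", loss = "binary_crossentropy",
                           metrics = ["accuracy"])                              # 2. compile
        happyModel.fit(x = X_train, y = Y_train, epochs = 10, batch_size = 16)  # 3. train
        preds = happyModel.evaluate(x = X_test, y = Y_test)                     # 4. evaluate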

    Exercise: Implement step 1, i.e. create the model.

    【code】

    ### START CODE HERE ### (1 line)
    happyModel = HappyModel((64, 64, 3))
    ### END CODE HERE ###
    

    Exercise: Implement step 2, i.e. compile the model to configure the learning process. Choose the 3 arguments of compile() wisely. Hint: the Happy Challenge is a binary classification problem.  

    【code】

    ### START CODE HERE ### (1 line)
    happyModel.compile(optimizer = "sgd", loss = "binary_crossentropy", metrics = ["accuracy"])
    ### END CODE HERE ###
    

      

    Exercise: Implement step 3, i.e. train the model. Choose the number of epochs and the batch size.   

    【code】

    ### START CODE HERE ### (1 line)
    happyModel.fit(x = X_train, y = Y_train, epochs = 20, batch_size = 16)
    ### END CODE HERE ###
    

    【result】

    Epoch 1/20
    600/600 [==============================] - 10s - loss: 1.7182 - acc: 0.6833    
    Epoch 2/20
    600/600 [==============================] - 9s - loss: 0.1429 - acc: 0.9450     
    Epoch 3/20
    600/600 [==============================] - 10s - loss: 0.1196 - acc: 0.9533    
    Epoch 4/20
    600/600 [==============================] - 10s - loss: 0.0764 - acc: 0.9750    
    Epoch 5/20
    600/600 [==============================] - 10s - loss: 0.0666 - acc: 0.9833    
    Epoch 6/20
    600/600 [==============================] - 10s - loss: 0.0672 - acc: 0.9750    
    Epoch 7/20
    600/600 [==============================] - 10s - loss: 0.0634 - acc: 0.9817    
    Epoch 8/20
    600/600 [==============================] - 10s - loss: 0.0391 - acc: 0.9867    
    Epoch 9/20
    600/600 [==============================] - 10s - loss: 0.0464 - acc: 0.9883    
    Epoch 10/20
    600/600 [==============================] - 10s - loss: 0.0749 - acc: 0.9700    
    Epoch 11/20
    600/600 [==============================] - 10s - loss: 0.0410 - acc: 0.9867    
    Epoch 12/20
    600/600 [==============================] - 10s - loss: 0.0640 - acc: 0.9783    
    Epoch 13/20
    600/600 [==============================] - 10s - loss: 0.0571 - acc: 0.9867    
    Epoch 14/20
    600/600 [==============================] - 10s - loss: 0.0475 - acc: 0.9883    
    Epoch 15/20
    600/600 [==============================] - 10s - loss: 0.0307 - acc: 0.9900    
    Epoch 16/20
    600/600 [==============================] - 10s - loss: 0.0198 - acc: 0.9900    
    Epoch 17/20
    600/600 [==============================] - 10s - loss: 0.0411 - acc: 0.9867    
    Epoch 18/20
    600/600 [==============================] - 10s - loss: 0.0240 - acc: 0.9883    
    Epoch 19/20
    600/600 [==============================] - 10s - loss: 0.0262 - acc: 0.9917    
    Epoch 20/20
    600/600 [==============================] - 10s - loss: 0.0247 - acc: 0.9933    
    
    Out[30]:
    <keras.callbacks.History at 0x7f540de4ef60>

      

    Note that if you run fit() again, the model will continue to train with the parameters it has already learnt instead of reinitializing them.
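
    In other words, the sketch below trains for 20 epochs and then for 10 more starting from the already-learned weights (the epoch counts are arbitrary):

        happyModel.fit(x = X_train, y = Y_train, epochs = 20, batch_size = 16)  # first run: trains from scratch
        happyModel.fit(x = X_train, y = Y_train, epochs = 10, batch_size = 16)  # second run: continues from the learned weights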

    Exercise: Implement step 4, i.e. test/evaluate the model.

    【code】

    ### START CODE HERE ### (1 line)
    preds = happyModel.evaluate(x = X_test, y = Y_test)
    ### END CODE HERE ###
    print()
    print ("Loss = " + str(preds[0]))
    print ("Test Accuracy = " + str(preds[1]))
    

    【result】

    150/150 [==============================] - 1s     
    
    Loss = 0.167431052128
    Test Accuracy = 0.94666667064

    If your happyModel() function worked, you should have observed much better than random-guessing (50%) accuracy on the train and test sets.

    To give you a point of comparison, our model gets around 95% test accuracy in 40 epochs (and 99% train accuracy) with a mini batch size of 16 and "adam" optimizer. But our model gets decent accuracy after just 2-5 epochs, so if you're comparing different models you can also train a variety of models on just a few epochs and see how they compare.


    If you have not yet achieved a very good accuracy (let's say more than 80%), here are some things you can play around with to try to achieve it:

    • Try using blocks of CONV->BATCHNORM->RELU such as:
      X = Conv2D(32, (3, 3), strides = (1, 1), name = 'conv0')(X)
      X = BatchNormalization(axis = 3, name = 'bn0')(X)
      X = Activation('relu')(X)
      
      until your height and width dimensions are quite low and your number of channels quite large (≈32 for example). You are encoding useful information in a volume with a lot of channels. You can then flatten the volume and use a fully-connected layer.
    • You can use MAXPOOL after such blocks. It will help you lower the dimension in height and width.
    • Change your optimizer. We find Adam works well.
    • If the model is struggling to run and you get memory issues, lower your batch_size (12 is usually a good compromise)
    • Run on more epochs, until you see the train accuracy plateauing.

    Even if you have achieved a good accuracy, please feel free to keep playing with your model to try to get even better results.

    Note: If you perform hyperparameter tuning on your model, the test set actually becomes a dev set, and your model might end up overfitting to the test (dev) set. But just for the purpose of this assignment, we won't worry about that here.
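
    As one illustration of the tips above, a hypothetical deeper variant could stack two CONV->BATCHNORM->RELU blocks, each followed by MAXPOOL, before flattening. This is a sketch built from the suggestions in this section, not the graded solution:

        def DeeperHappyModel(input_shape):
            X_input = Input(input_shape)

            # Block 1: CONV -> BN -> RELU -> MAXPOOL  ((64, 64, 3) -> (32, 32, 16))
            X = Conv2D(16, (3, 3), strides=(1, 1), padding='same', name='conv0')(X_input)
            X = BatchNormalization(axis=3, name='bn0')(X)
            X = Activation('relu')(X)
            X = MaxPooling2D((2, 2), name='max_pool0')(X)

            # Block 2: CONV -> BN -> RELU -> MAXPOOL  ((32, 32, 16) -> (16, 16, 32))
            X = Conv2D(32, (3, 3), strides=(1, 1), padding='same', name='conv1')(X)
            X = BatchNormalization(axis=3, name='bn1')(X)
            X = Activation('relu')(X)
            X = MaxPooling2D((2, 2), name='max_pool1')(X)

            # Flatten the (16, 16, 32) volume and classify with a single sigmoid unit
            X = Flatten()(X)
            X = Dense(1, activation='sigmoid', name='fc')(X)

            return Model(inputs=X_input, outputs=X, name='DeeperHappyModel')

        deeperModel = DeeperHappyModel((64, 64, 3))
        deeperModel.compile(optimizer = "adam", loss = "binary_crossentropy", metrics = ["accuracy"])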
     
     
     

    3 - Conclusion

    Congratulations, you have solved the Happy House challenge!

    Now, you just need to link this model to the front-door camera of your house. We unfortunately won't go into the details of how to do that here.

     

    What we would like you to remember from this assignment:

    • Keras is a tool we recommend for rapid prototyping. It allows you to quickly try out different model architectures. Are there any applications of deep learning to your daily life that you'd like to implement using Keras?
    • Remember how to code a model in Keras and the four steps leading to the evaluation of your model on the test set. Create->Compile->Fit/Train->Evaluate/Test.
     

    4 - Test with your own image (Optional)

    Congratulations on finishing this assignment. You can now take a picture of your face and see if you could enter the Happy House. To do that:

    1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
    2. Add your image to this Jupyter Notebook's directory, in the "images" folder
    3. Write your image's name in the following code
    4. Run the code and check if the algorithm is right (0 is unhappy, 1 is happy)!
    

    The training/test sets were quite similar; for example, all the pictures were taken against the same background (since a front door camera is always mounted in the same position). This makes the problem easier, but a model trained on this data may or may not work on your own data. But feel free to give it a try!

     
    Test 1: a smiling picture
    【code】
    ### START CODE HERE ###
    img_path = 'images/happy_myself.jpg'
    ### END CODE HERE ###
    img = image.load_img(img_path, target_size=(64, 64))
    imshow(img)
    
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    
    print(happyModel.predict(x))
    

    【result】

    Test 2: an unhappy picture

    【code】

    ### START CODE HERE ###
    img_path = 'images/my_image.jpg'
    ### END CODE HERE ###
    img = image.load_img(img_path, target_size=(64, 64))
    imshow(img)

    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)

    print(happyModel.predict(x))

    【result】

    Test 3: a crying picture (the result was inaccurate, presumably because the training set contains no pictures like this; after all, nobody enters the Happy House crying loudly in front of the camera)

    【code】

    ### START CODE HERE ###
    img_path = 'images/cry_myself.jpg'
    ### END CODE HERE ###
    img = image.load_img(img_path, target_size=(64, 64))
    imshow(img)
    
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    
    print(happyModel.predict(x))
    

    【result】  

    Test 4: a picture of someone so happy they are crying (a fairly extreme test; the result can be counted as correct)

    【code】

    ### START CODE HERE ###
    img_path = 'images/cry_or_happy.jpg'
    ### END CODE HERE ###
    img = image.load_img(img_path, target_size=(64, 64))
    imshow(img)
    
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    
    print(happyModel.predict(x))
    

    【result】

     

    5 - Other useful functions in Keras (Optional)

    Two other basic features of Keras that you'll find useful are:

    • model.summary(): prints the details of your layers in a table with the sizes of its inputs/outputs
    • plot_model(): plots your graph in a nice layout. You can even save it as ".png" using SVG() if you'd like to share it on social media ;). It is saved in "File" then "Open..." in the upper bar of the notebook.

    Run the following code.

    【code】

    happyModel.summary()
    

    【result】

    _________________________________________________________________
    Layer (type)                 Output Shape              Param #   
    =================================================================
    input_7 (InputLayer)         (None, 64, 64, 3)         0         
    _________________________________________________________________
    conv0 (Conv2D)               (None, 64, 64, 32)        896       
    _________________________________________________________________
    bn0 (BatchNormalization)     (None, 64, 64, 32)        128       
    _________________________________________________________________
    activation_25 (Activation)   (None, 64, 64, 32)        0         
    _________________________________________________________________
    max_pool0 (MaxPooling2D)     (None, 32, 32, 32)        0         
    _________________________________________________________________
    flatten_5 (Flatten)          (None, 32768)             0         
    _________________________________________________________________
    fc (Dense)                   (None, 1)                 32769     
    =================================================================
    Total params: 33,793
    Trainable params: 33,729
    Non-trainable params: 64
    _________________________________________________________________
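
    The Param # column can be checked by hand:

        conv0: (3 * 3 * 3) * 32 weights + 32 biases                              = 896
        bn0:   4 * 32 (gamma and beta trainable; moving mean and variance not)   = 128
        fc:    (32 * 32 * 32) * 1 weights + 1 bias                               = 32769
        Non-trainable params: 2 * 32 (bn0's moving statistics)                   = 64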

    【code】 

    plot_model(happyModel, to_file='HappyModel.png')
    SVG(model_to_dot(happyModel).create(prog='dot', format='svg'))

    【result】


     
